AI Endpoints

The AI endpoints accept unstructured input and return structured geospatial results. Send natural language or messy address data, get back clean coordinates, routes, and corrected addresses.

How it works

AI endpoints use large language models to understand your input, then call the appropriate routing or geocoding backend. You get the same structured responses as the standard endpoints, plus a natural language summary and metadata about what was inferred.

Endpoints

  • POST /query: Natural language to structured geospatial results. Send text like “Walking route from Kings Cross to Tower Bridge” and get back a route.
  • POST /parse-address: Correct, standardize, and structure messy addresses. Fixes typos, expands abbreviations, and infers missing fields.
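As a minimal sketch of calling /query from Python: the base URL, the `text` request field, and the API key placeholder below are assumptions for illustration, not confirmed by this page. Only the x-api-key header and the POST /query path come from the docs.

```python
import json
import urllib.request

# Hypothetical values -- substitute your real API host and key.
BASE_URL = "https://api.example.com"
API_KEY = "your-api-key"

def build_query_request(text: str) -> urllib.request.Request:
    """Build a POST /query request carrying a natural language prompt.

    The request body field name ("text") is an assumption; check the
    API reference for the actual schema.
    """
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/query",
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-api-key": API_KEY,  # required on all AI endpoints
        },
        method="POST",
    )

req = build_query_request("Walking route from Kings Cross to Tower Bridge")
# with urllib.request.urlopen(req) as resp:  # uncomment to actually send
#     result = json.load(resp)
```

The same pattern applies to /parse-address; only the path and body schema change.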

Route narratives

The standard route and optimise endpoints also support AI-generated summaries. Pass narrative: true to get a human-readable description of the route:
{
  "locations": [...],
  "narrative": true
}
The response includes a narrative field:
{
  "route": {
    "distance_meters": 5765,
    "duration_seconds": 812,
    "narrative": "A 5.8km drive south through central London, about 14 minutes. Mostly flat with 12m total ascent. No tolls or motorways.",
    "terrain": { ... }
  }
}

Authentication

All AI endpoints require an API key passed in the x-api-key header, the same as every other endpoint. See the authentication guide for details.

Latency

AI endpoints add latency compared to standard endpoints because they involve LLM processing:
  • /query: 1-3 seconds (intent classification + geocoding + backend call + summary generation)
  • /parse-address: 1-2 seconds for small batches, up to 10 seconds for 100 addresses
  • narrative: true on route endpoints: adds ~500ms to the standard response time
For latency-sensitive applications, use the standard endpoints directly.
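One way to stay within a latency budget is to put a hard deadline on the AI call and fall back to a standard endpoint when it is exceeded. A sketch of the deadline half, using only the standard library (the fallback call itself is up to your client code):

```python
import urllib.error
import urllib.request

def fetch_with_deadline(req: urllib.request.Request, timeout_s: float):
    """Send a request with a hard timeout.

    Returns the raw response body on success, or None on timeout or
    connection failure, so the caller can fall back to a standard
    (non-AI) endpoint instead of waiting on LLM processing.
    """
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return resp.read()
    except (TimeoutError, urllib.error.URLError):
        return None

# body = fetch_with_deadline(ai_request, timeout_s=3.0)
# if body is None:
#     ...  # retry against the standard routing endpoint
```

Pick the timeout from the figures above: around 3 s comfortably covers /query, while narrative: true needs only ~500 ms of headroom over the standard response time.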