Tracing Vercel AI Gateway

Vercel AI Gateway provides a unified API to access hundreds of LLMs through a single endpoint. Key features include high reliability with automatic fallbacks to other providers, spend monitoring across providers, and zero markup on token costs. It works seamlessly with the OpenAI SDK, Anthropic SDK, and Vercel AI SDK.

Since Vercel AI Gateway exposes an OpenAI-compatible API, you can use MLflow's OpenAI autolog integration to automatically trace all your LLM calls through the gateway.

Getting Started

Prerequisites
Create a Vercel account and enable AI Gateway for your project. You can find your API key in the project settings.
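The snippets below pass the key to the client directly for clarity. In practice you may prefer to export it as an environment variable and read it with os.environ; the variable name here is just a convention used in this guide:

bash
export VERCEL_AI_GATEWAY_API_KEY="<YOUR_VERCEL_AI_GATEWAY_API_KEY>"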
1. Install Dependencies

bash
pip install mlflow openai
2. Start MLflow Server

If you have a local Python environment >= 3.10, you can start the MLflow server locally using the mlflow CLI command.

bash
mlflow server
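If you want the server on a specific host and port, or want runs persisted to a local database, the same command accepts the usual options, for example:

bash
# Optional: pin the host/port and use a SQLite backend store
mlflow server --host 127.0.0.1 --port 5000 --backend-store-uri sqlite:///mlflow.db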
3. Enable Tracing and Make API Calls

Enable tracing with mlflow.openai.autolog() and configure the OpenAI client to use Vercel AI Gateway's base URL.

python
import mlflow
from openai import OpenAI

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set tracking URI and experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Vercel AI Gateway")

# Create OpenAI client pointing to Vercel AI Gateway
# Create OpenAI client pointing to Vercel AI Gateway
client = OpenAI(
    base_url="https://ai-gateway.vercel.sh/v1",
    api_key="<YOUR_VERCEL_AI_GATEWAY_API_KEY>",
)

# Make API calls - traces will be captured automatically
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
4. View Traces in MLflow UI

Open the MLflow UI at http://localhost:5000 to see the traces from your Vercel AI Gateway API calls.
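If you prefer to inspect traces programmatically instead of (or in addition to) the UI, recent MLflow versions provide a search API that returns traces as a pandas DataFrame. A minimal sketch, assuming the tracking URI and experiment configured above:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Vercel AI Gateway")

# Fetch the most recent traces from the active experiment as a DataFrame
traces = mlflow.search_traces(max_results=5)
print(traces.head())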

Combining with Manual Tracing

You can combine auto-tracing with MLflow's manual tracing to create comprehensive traces that include your application logic:

python
import mlflow
from mlflow.entities import SpanType
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="https://ai-gateway.vercel.sh/v1",
    api_key="<YOUR_VERCEL_AI_GATEWAY_API_KEY>",
)


@mlflow.trace(span_type=SpanType.CHAIN)
def ask_question(question: str) -> str:
    """A traced function that calls the LLM through Vercel AI Gateway."""
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# The entire function call and nested LLM call will be traced
answer = ask_question("What is machine learning?")
print(answer)
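If your function has distinct non-LLM stages that you also want visible in the trace, you can open child spans explicitly with mlflow.start_span. A small sketch reusing the client and imports above; the post-processing step is purely illustrative:

python
@mlflow.trace(span_type=SpanType.CHAIN)
def ask_and_trim(question: str) -> str:
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content

    # Manual child span for application logic that is not an LLM call
    with mlflow.start_span(name="post_process", span_type=SpanType.PARSER) as span:
        span.set_inputs({"answer": answer})
        trimmed = answer.strip()
        span.set_outputs({"trimmed": trimmed})

    return trimmed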

Streaming Support

MLflow supports tracing streaming responses from Vercel AI Gateway:

python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="https://ai-gateway.vercel.sh/v1",
    api_key="<YOUR_VERCEL_AI_GATEWAY_API_KEY>",
)

stream = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Write a haiku about machine learning."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

MLflow will automatically capture the complete streamed response in the trace.
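If your application uses OpenAI's async client, the same pattern applies on the OpenAI side; whether autologging captures async streaming depends on your MLflow version, so treat this as a sketch to verify against the version you have installed:

python
import asyncio

import mlflow
from openai import AsyncOpenAI

mlflow.openai.autolog()

client = AsyncOpenAI(
    base_url="https://ai-gateway.vercel.sh/v1",
    api_key="<YOUR_VERCEL_AI_GATEWAY_API_KEY>",
)


async def main():
    # Async streaming call through the gateway; chunks print as they arrive
    stream = await client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[{"role": "user", "content": "Write a haiku about tracing."}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")


asyncio.run(main())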

Next Steps