Tracing OpenRouter

OpenRouter is a unified API gateway that provides access to 280+ LLMs from providers like OpenAI, Anthropic, Google, Meta, and many others through a single OpenAI-compatible API. This allows developers to easily switch between models without changing their code.
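
Because every model sits behind the same API, switching providers is just a change of model string. A minimal sketch (the model IDs below are illustrative; check OpenRouter's model list for current IDs):

python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)

# Same client and code path; only the model string differs
for model in ["anthropic/claude-sonnet-4.5", "openai/gpt-4o"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(model, "->", response.choices[0].message.content)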

Since OpenRouter exposes an OpenAI-compatible API, you can use MLflow's OpenAI autolog integration to automatically trace all your LLM calls through the gateway.

Getting Started

Prerequisites
Before following the steps below, you need to create an OpenRouter account and generate an API key from the Keys page.
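
To avoid hard-coding the key in source files, you can read it from an environment variable. A minimal sketch (the variable name OPENROUTER_API_KEY is a common convention, not something the SDK requires):

python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    # Assumes you have exported OPENROUTER_API_KEY in your shell
    api_key=os.environ["OPENROUTER_API_KEY"],
)
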
1. Install Dependencies

bash
pip install mlflow openai

2. Start MLflow Server

If you have a local Python environment running Python 3.10 or later, you can start the MLflow server locally with the mlflow CLI command.

bash
mlflow server

3. Enable Tracing and Make API Calls

Enable tracing with mlflow.openai.autolog() and configure the OpenAI client to use OpenRouter's base URL.

python
import mlflow
from openai import OpenAI

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set tracking URI and experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("OpenRouter")

# Create OpenAI client pointing to OpenRouter
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)

# Make API calls - traces will be captured automatically
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)

4. View Traces in MLflow UI

Open the MLflow UI at http://localhost:5000 to see the traces from your OpenRouter API calls.
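
Traces can also be queried programmatically. A minimal sketch using mlflow.search_traces, which returns a pandas DataFrame in recent MLflow versions:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("OpenRouter")

# Fetch the most recent traces for the active experiment
traces = mlflow.search_traces(max_results=5)
print(traces.head())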

Combining with Manual Tracing

You can combine auto-tracing with MLflow's manual tracing to create comprehensive traces that include your application logic:

python
import mlflow
from mlflow.entities import SpanType
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)


@mlflow.trace(span_type=SpanType.CHAIN)
def ask_question(question: str) -> str:
    """A traced function that calls the LLM through OpenRouter."""
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# The entire function call and nested LLM call will be traced
answer = ask_question("What is machine learning?")
print(answer)
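
For finer-grained traces, you can open additional spans explicitly with the mlflow.start_span context manager. A minimal sketch that builds on ask_question from the example above (the prompt-building step is illustrative):

python
import mlflow
from mlflow.entities import SpanType


@mlflow.trace(span_type=SpanType.CHAIN)
def answer_with_prompt(question: str) -> str:
    # Record prompt construction as its own span, nested under the chain
    with mlflow.start_span(name="build_prompt", span_type=SpanType.PARSER) as span:
        prompt = f"Answer concisely: {question}"
        span.set_inputs({"question": question})
        span.set_outputs({"prompt": prompt})
    # Reuses the traced function defined above; its LLM call nests here too
    return ask_question(prompt)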

Streaming Support

MLflow supports tracing streaming responses from OpenRouter:

python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)

stream = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Write a haiku about machine learning."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

MLflow automatically captures the complete streamed response in the trace once the stream has been fully consumed.
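
Autologging also covers the asynchronous OpenAI client in recent MLflow versions. A minimal sketch using the same OpenRouter configuration as above:

python
import asyncio

import mlflow
from openai import AsyncOpenAI

mlflow.openai.autolog()

client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)


async def main() -> None:
    # Async calls are traced the same way as synchronous ones
    response = await client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[{"role": "user", "content": "What is MLflow?"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())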

Next Steps