Tracing Kong AI Gateway

Kong AI Gateway is an enterprise-grade API gateway that provides a unified OpenAI-compatible API to access multiple LLM providers including OpenAI, Anthropic, Azure, AWS Bedrock, Google Gemini, and more. It offers built-in rate limiting, caching, load balancing, and observability.

Since Kong AI Gateway exposes an OpenAI-compatible API, you can use MLflow's OpenAI autolog integration to automatically trace all your LLM calls through the gateway.

Getting Started

Prerequisites
Set up Kong AI Gateway by following the installation guide and configure your LLM provider credentials.
1. Install Dependencies

bash
pip install mlflow openai
2. Start MLflow Server

If you have a local Python environment (Python 3.10 or newer), you can start the MLflow server locally using the mlflow CLI command.

bash
mlflow server
3. Enable Tracing and Make API Calls

Enable tracing with mlflow.openai.autolog() and configure the OpenAI client to use Kong AI Gateway's base URL.

python
import mlflow
from openai import OpenAI

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Set tracking URI and experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Kong AI Gateway")

# Create OpenAI client pointing to Kong AI Gateway
client = OpenAI(
    base_url="http://<your-kong-gateway>:8000/v1",
    api_key="<YOUR_API_KEY>",
)

# Make API calls - traces will be captured automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
4. View Traces in MLflow UI

Open the MLflow UI at http://localhost:5000 to see the traces from your Kong AI Gateway API calls.
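
If you prefer to inspect traces programmatically, MLflow also provides a search API. Below is a minimal sketch, assuming a recent MLflow version where mlflow.search_traces() defaults to the active experiment and returns a pandas DataFrame:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Kong AI Gateway")

# Search traces in the active experiment (returns a pandas DataFrame)
traces = mlflow.search_traces(max_results=5)
print(traces.head())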

Combining with Manual Tracing

You can combine auto-tracing with MLflow's manual tracing to create comprehensive traces that include your application logic:

python
import mlflow
from mlflow.entities import SpanType
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="http://<your-kong-gateway>:8000/v1",
    api_key="<YOUR_API_KEY>",
)


@mlflow.trace(span_type=SpanType.CHAIN)
def ask_question(question: str) -> str:
    """A traced function that calls the LLM through Kong AI Gateway."""
    response = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content


# The entire function call and nested LLM call will be traced
answer = ask_question("What is machine learning?")
print(answer)
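
Inside a traced function you can also attach metadata to the trace being recorded, for example to note which Kong route or upstream provider served a call. Below is a minimal sketch using mlflow.update_current_trace(); the gateway.route tag key is just an illustrative choice:

python
import mlflow
from mlflow.entities import SpanType
from openai import OpenAI

client = OpenAI(
    base_url="http://<your-kong-gateway>:8000/v1",
    api_key="<YOUR_API_KEY>",
)


@mlflow.trace(span_type=SpanType.CHAIN)
def ask_question_tagged(question: str) -> str:
    # Tag the trace currently being recorded; the key name is arbitrary
    mlflow.update_current_trace(tags={"gateway.route": "kong-openai"})
    response = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content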

Streaming Support

MLflow supports tracing streaming responses from Kong AI Gateway:

python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()

client = OpenAI(
    base_url="http://<your-kong-gateway>:8000/v1",
    api_key="<YOUR_API_KEY>",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about machine learning."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

MLflow will automatically capture the complete streamed response in the trace.
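
Asynchronous applications can follow the same pattern with the async client. Below is a minimal sketch, assuming your MLflow version's OpenAI autolog also instruments AsyncOpenAI (check the MLflow docs for the versions that support async tracing):

python
import asyncio

import mlflow
from openai import AsyncOpenAI

mlflow.openai.autolog()

client = AsyncOpenAI(
    base_url="http://<your-kong-gateway>:8000/v1",
    api_key="<YOUR_API_KEY>",
)


async def main():
    stream = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Write a haiku about machine learning."}],
        stream=True,
    )
    # Consume the async stream; MLflow records the full response on completion
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")


asyncio.run(main())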
