
Tracing Quickstart

This quickstart guide will walk you through setting up a simple GenAI application with MLflow Tracing. In less than 10 minutes, you'll enable tracing, run a basic application, and explore the generated traces in the MLflow UI.

Prerequisites

Make sure you have an MLflow server running. If you don't have one yet, the following steps will get it started.

Install the Python package manager uv (this also installs the uvx command, which runs Python tools without installing them).
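If you don't have uv installed yet, the standalone installer from the uv documentation is one option on macOS and Linux (see the uv docs for Windows and other installation methods):

shell
curl -LsSf https://astral.sh/uv/install.sh | sh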

Start an MLflow server locally. By default, it serves the UI at http://localhost:5000.

shell
uvx mlflow server

Create an MLflow Experiment

The traces your GenAI application will send to the MLflow server are grouped into MLflow experiments. We recommend creating one experiment for each GenAI application.

Let's create a new MLflow experiment using the MLflow UI so that you can start sending your traces.

  1. Navigate to the MLflow UI in your browser at http://localhost:5000.
  2. Click the Create button at the top right.
  3. Enter a name for the experiment and click "Create".

You can leave the Artifact Location field blank for now. It is an advanced configuration to override where MLflow stores experiment data.
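Alternatively, you can create the experiment from code: mlflow.set_experiment sets the active experiment and creates it automatically if no experiment with that name exists yet. A minimal sketch:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")

# Creates the "My Application" experiment if it doesn't exist, then makes it active.
mlflow.set_experiment("My Application")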

Install Dependencies

To connect your GenAI application to the MLflow server, install the MLflow Python SDK along with the OpenAI SDK used in this example.

bash
pip install --upgrade mlflow "openai>=1.0.0"
info

While this guide features an example using the OpenAI SDK, the same steps apply to other LLM providers, including Anthropic, Google, Bedrock, and many others.

For a comprehensive list of LLM providers supported by MLflow, see the LLM Integrations Overview.
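For example, tracing Anthropic calls follows the same one-line pattern (a sketch, assuming a recent MLflow version that ships mlflow.anthropic and that the anthropic package is installed):

python
import mlflow

# Each supported provider exposes its own autolog entry point.
mlflow.anthropic.autolog()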

Start Tracing

Once your experiment is created, you're ready to connect to the MLflow server and begin sending traces from your GenAI application.

python
import mlflow
from openai import OpenAI

# Specify the tracking URI for the MLflow server.
mlflow.set_tracking_uri("http://localhost:5000")

# Specify the experiment you just created for your GenAI application.
mlflow.set_experiment("My Application")

# Enable automatic tracing for all OpenAI API calls.
mlflow.openai.autolog()

client = OpenAI()
# The trace of the following call is sent to the MLflow server.
client.chat.completions.create(
    model="o4-mini",
    messages=[
        {"role": "system", "content": "You are a helpful weather assistant."},
        {"role": "user", "content": "What's the weather like in Seattle?"},
    ],
)
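Note that the OpenAI client reads your credentials from the OPENAI_API_KEY environment variable, so set it before running the snippet:

shell
export OPENAI_API_KEY="<your-api-key>"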

View Your Traces on the MLflow UI

After running the code above, open the MLflow UI, select the "My Application" experiment, and then open the "Traces" tab. You should see the newly created trace.
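You can also fetch traces programmatically. A minimal sketch using mlflow.search_traces, which returns matching traces from the active experiment as a pandas DataFrame:

python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("My Application")

# Retrieve up to five recent traces from the active experiment.
traces = mlflow.search_traces(max_results=5)
print(f"Fetched {len(traces)} traces")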


Track Multi-Turn Conversations with Sessions

Many GenAI applications maintain multi-turn conversations with users. MLflow provides built-in support for tracking user sessions by using standard metadata fields. This allows you to group related traces together and analyze conversation flows.

Here's how to add user and session tracking to your application:

python
import mlflow


@mlflow.trace
def chat_completion(message: list[dict], user_id: str, session_id: str):
    """Process a chat message with user and session tracking."""

    # Add user and session context to the current trace
    mlflow.update_current_trace(
        metadata={
            "mlflow.trace.user": user_id,  # Links trace to specific user
            "mlflow.trace.session": session_id,  # Groups trace with conversation
        }
    )

    # Your chat logic here
    return generate_response(message)
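A hypothetical two-turn exchange would then reuse the same session_id so that both traces are grouped into one conversation (generate_response stands in for your own chat logic):

python
# Both turns share user and session IDs, so MLflow groups their traces together.
chat_completion(
    [{"role": "user", "content": "What's the weather like in Seattle?"}],
    user_id="user-123",
    session_id="session-abc",
)
chat_completion(
    [{"role": "user", "content": "How about tomorrow?"}],
    user_id="user-123",
    session_id="session-abc",
)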

For more details on tracking users and sessions, see the Track Users & Sessions guide.

Next Step

Congrats on sending your first trace with MLflow! Now that you've got the basics working, here is the recommended next step to deepen your understanding of tracing: