Tracing OpenAI

MLflow Tracing provides automatic tracing for OpenAI. By calling the mlflow.openai.autolog()
function, MLflow will capture traces for every LLM invocation and log them to the active MLflow Experiment. In TypeScript, use the tracedOpenAI
function to wrap the OpenAI client instead.
- Python

```python
import mlflow

mlflow.openai.autolog()
```

- JS / TS

```typescript
import { OpenAI } from "openai";
import { tracedOpenAI } from "mlflow-openai";

const client = tracedOpenAI(new OpenAI());
```
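For instance, once autologging is enabled in Python, an ordinary Chat Completions call is traced with no further changes. A minimal sketch, assuming an `OPENAI_API_KEY` environment variable is set; the experiment name is illustrative:

```python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()
mlflow.set_experiment("openai-tracing-demo")  # illustrative experiment name

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is captured as a trace in the active experiment.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is MLflow Tracing?"}],
)
print(response.choices[0].message.content)
```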
MLflow tracing automatically captures the following information about OpenAI calls:
- Prompts and completion responses
- Latencies
- Model name
- Additional metadata such as `temperature` and `max_completion_tokens`, if specified
- Function calling, if returned in the response (see the sketch after this list)
- Built-in tools such as web search, file search, computer use, etc.
- Any exception, if raised
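To illustrate, a request that sets sampling parameters and declares a tool will have both captured on the trace. A sketch under assumptions: the tool name and schema below are hypothetical, defined only for this example:

```python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()
client = OpenAI()

# A hypothetical tool definition; the name and schema are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# temperature and max_completion_tokens are recorded as trace metadata;
# any tool calls returned in the response are captured as well.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    temperature=0.2,
    max_completion_tokens=256,
)
print(response.choices[0].message.tool_calls)
```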
The MLflow OpenAI integration is not only about tracing. MLflow offers a full tracking experience for OpenAI, including model tracking, prompt management, and evaluation. Please check out the MLflow OpenAI Flavor to learn more!
Supported APIs
MLflow supports automatic tracing for the following OpenAI APIs. To request support for additional APIs, please open a feature request on GitHub.
| API | Normal | Function Calling | Structured Outputs | Streaming | Async | Image | Audio |
|---|---|---|---|---|---|---|---|
| Chat Completion API | ✅ | ✅ | ✅ (>= 2.21.0) | ✅ (>= 2.15.0) | | | |
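For example, streamed Chat Completions are traced as well (on MLflow >= 2.15.0, per the table above). A minimal sketch, assuming the same autologging setup as before:

```python
import mlflow
from openai import OpenAI

mlflow.openai.autolog()
client = OpenAI()

# The streamed call is captured as a trace; the chunk outputs are recorded.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```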