Tracing OpenAI
MLflow Tracing provides automatic tracing capability for OpenAI. When auto tracing is enabled by calling the `mlflow.openai.autolog()`
function, MLflow captures a trace for each LLM invocation and logs it to the active MLflow Experiment.
```python
import mlflow

mlflow.openai.autolog()
```
MLflow Tracing automatically captures the following information about OpenAI calls:
- Prompts and completion responses
- Latencies
- Model name
- Additional metadata such as `temperature` and `max_tokens`, if specified
- Function calling, if returned in the response
- Built-in tools such as web search, file search, computer use, etc.
- Any exception, if raised
tip
The MLflow OpenAI integration is not only about tracing. MLflow offers a full tracking experience for OpenAI, including model tracking, prompt management, and evaluation. Please check out the MLflow OpenAI Flavor to learn more!
Supported APIs
MLflow supports automatic tracing for the following OpenAI APIs. To request support for additional APIs, please open a feature request on GitHub.
Chat Completion API

| Normal | Function Calling | Structured Outputs | Streaming | Async | Image | Audio |
|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ (>=2.21.0) | ✅ (>=2.15.0) | ✅ (>=2.21.0) | - | - |
Responses API

| Normal | Function Calling | Structured Outputs | Web Search | File Search | Computer Use | Reasoning | Streaming | Async | Image |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ |