Seamlessly integrate MLflow Tracing with OpenAI to automatically trace your OpenAI API calls.
| Package | Description |
|---|---|
| `mlflow-openai` | Auto-instrumentation integration for OpenAI. |
```bash
npm install mlflow-openai
```
The package declares `mlflow-tracing` and `openai` as peer dependencies. Depending on your package manager, you may need to install these two packages separately.
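For example, with npm:

```bash
npm install mlflow-tracing openai
```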
Start an MLflow tracking server. If you have a local Python environment, you can run the following commands:

```bash
pip install mlflow
mlflow server --backend-store-uri sqlite:///mlruns.db --port 5000
```
If you don't have a local Python environment, MLflow also supports Docker deployment and managed services. See the Self-Hosting Guide to get started.
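As a rough sketch, a Docker-based server might look like the following; the image name and tag here are assumptions, so confirm them against the Self-Hosting Guide:

```bash
# Assumed image name/tag; verify against the Self-Hosting Guide
docker run -p 5000:5000 ghcr.io/mlflow/mlflow:latest \
  mlflow server --host 0.0.0.0 --port 5000
```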
Instantiate the MLflow SDK in your application:
```typescript
import * as mlflow from 'mlflow-tracing';

mlflow.init({
  trackingUri: 'http://localhost:5000',
  experimentId: '<experiment-id>'
});
```
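If you don't have an experiment ID yet, one way to obtain one (assuming the Python `mlflow` CLI from the step above) is to create an experiment from the command line; the experiment name below is just an example:

```bash
# Prints the ID of the newly created experiment
mlflow experiments create --experiment-name "openai-tracing-demo"
```

You can also copy the ID of an existing experiment from the MLflow UI.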
Create a trace:
```typescript
import { OpenAI } from 'openai';
import { tracedOpenAI } from 'mlflow-openai';

// Wrap the OpenAI client with the tracedOpenAI function
const client = tracedOpenAI(new OpenAI());

// Invoke the wrapped client as usual; the call is traced automatically
const response = await client.chat.completions.create({
  model: 'o4-mini',
  messages: [
    { role: 'system', content: 'You are a helpful weather assistant.' },
    { role: 'user', content: "What's the weather like in Seattle?" }
  ]
});
```
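The wrapped client returns the standard OpenAI response object, so the rest of your code is unchanged. For example:

```typescript
// Access the completion text exactly as you would with the plain client
console.log(response.choices[0].message.content);
```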
View traces in the MLflow UI:

The official documentation for the MLflow TypeScript SDK can be found here.
This project is licensed under the Apache License 2.0.