
MLflow 3 Migration Guide

This guide covers breaking changes and API updates when migrating from MLflow 2.x to MLflow 3.x.

Installation

Install MLflow 3 by running:

bash
pip install "mlflow>=3.1"


Key Changes from MLflow 2.x

Model Logging API Changes

MLflow 2.x:

python
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=python_model,
    )

MLflow 3:

python
# No longer requires starting a Run before logging models
mlflow.pyfunc.log_model(
    name="model",  # Use 'name' instead of 'artifact_path'
    python_model=python_model,
)
note

Models are now first-class entities in MLflow 3. You can call log_model directly without the mlflow.start_run() context manager. Use the name parameter to enable searching for LoggedModels.
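
For example, a model logged with name="model" can later be looked up by that name. The snippet below is a minimal sketch; it assumes the mlflow.search_logged_models API and its filter_string syntax as described in the MLflow 3 documentation (python_model is the same placeholder used above), so verify the filter grammar against your installed version.

python
import mlflow

model_info = mlflow.pyfunc.log_model(name="model", python_model=python_model)

# Assumes filter_string supports matching on the model name;
# check the MLflow 3 API reference for the exact filter grammar.
logged_models = mlflow.search_logged_models(filter_string="name = 'model'")
print(logged_models)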

Model Artifacts Storage Location

MLflow 2.x:

shell
experiments/
└── <experiment_id>/
    └── <run_id>/
        └── artifacts/
            └── ...  # model artifacts stored here

MLflow 3:

shell
experiments/
└── <experiment_id>/
    └── models/
        └── <model_id>/
            └── artifacts/
                └── ...  # model artifacts stored here
warning

This change impacts the behavior of mlflow.client.MlflowClient.list_artifacts(). Model artifacts are no longer stored as run artifacts.
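
For instance, listing a run's artifacts no longer returns the model files; they are reached through the LoggedModel instead. A minimal sketch, assuming the MLflow 3 mlflow.get_logged_model API and a placeholder estimator my_model (verify attribute names such as artifact_location against your version):

python
import mlflow
from mlflow import MlflowClient

client = MlflowClient()

with mlflow.start_run() as run:
    model_info = mlflow.sklearn.log_model(my_model, name="model")

# The model directory is no longer part of the run's artifact listing
print(client.list_artifacts(run.info.run_id))

# Look up the model through the LoggedModel APIs instead (assumed attributes; see note above)
logged_model = mlflow.get_logged_model(model_info.model_id)
print(logged_model.name, logged_model.artifact_location)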

UI Changes

Artifacts Tab

In MLflow 3.x, the Artifacts tab in the run page no longer displays model artifacts. Model artifacts are now accessed through the Logged Models page, which provides a dedicated view for model-specific information and artifacts.

Breaking Changes

Removed Features

  • MLflow Recipes: Completely removed (#15250). Migrate to standard MLflow tracking and model registry functionality or consider MLflow Projects.

  • Model Flavors: The following flavors are no longer supported:

    • fastai (#15255) - Use mlflow.pyfunc with a custom wrapper (see the sketch after this list)
    • mleap (#15259) - Use mlflow.onnx or mlflow.pyfunc
    • diviner - Use mlflow.pyfunc with custom wrapper
    • gluon - Use mlflow.pytorch or mlflow.onnx
  • AI Gateway: The 'routes' and 'route_type' config keys have been removed (#15331). Use the new configuration format.

  • Deployment Server: The deployment server and the start-server CLI command have been removed (#15327). Use mlflow models serve or containerized deployments.
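
For the flavors above that point to mlflow.pyfunc, the replacement is a thin custom wrapper around the original library's model. A minimal sketch (WrappedModel and trained_model are illustrative placeholders, not MLflow APIs):

python
import mlflow


class WrappedModel(mlflow.pyfunc.PythonModel):
    """Illustrative wrapper that holds a trained model object and exposes predict()."""

    def __init__(self, model):
        self._model = model

    def predict(self, context, model_input):
        # Delegate to the wrapped library's own prediction call;
        # adapt this line to the library being migrated (fastai, diviner, ...).
        return self._model.predict(model_input)


model_info = mlflow.pyfunc.log_model(
    name="wrapped_model",
    python_model=WrappedModel(trained_model),  # trained_model: placeholder for your fitted model
)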

Tracking API Changes

run_uuid Attribute Removed

Replace run_uuid with run_id:

python
# MLflow 2.x
run_info.run_uuid

# MLflow 3
run_info.run_id

Git Tags Removed

The following run tags have been removed (#15366):

  • mlflow.gitBranchName
  • mlflow.gitRepoURL

TensorFlow Autologging

The every_n_iter parameter has been removed from TensorFlow autologging (#15412). Implement a custom logging callback to control logging frequency, as sketched below.
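
A minimal sketch of such a callback, assuming a tf.keras training loop (PeriodicMetricLogger and the batch interval are illustrative):

python
import mlflow
import tensorflow as tf


class PeriodicMetricLogger(tf.keras.callbacks.Callback):
    """Logs training metrics to MLflow every `log_every_n_batches` batches."""

    def __init__(self, log_every_n_batches=100):
        super().__init__()
        self.log_every_n_batches = log_every_n_batches
        self._step = 0

    def on_train_batch_end(self, batch, logs=None):
        self._step += 1
        if logs and self._step % self.log_every_n_batches == 0:
            mlflow.log_metrics({k: float(v) for k, v in logs.items()}, step=self._step)


# Usage: model.fit(x_train, y_train, callbacks=[PeriodicMetricLogger(log_every_n_batches=50)])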

Model API Changes

Removed Parameters

The following parameters have been removed from model logging/saving APIs:

  • example_no_conversion (#15322)
  • code_path (#15368) - Use default code directory structure
  • requirements_file from PyTorch flavor (#15369) - Use pip_requirements or extra_pip_requirements (see the sketch after this list)
  • inference_config from Transformers flavor (#15415) - Set configuration before logging
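
For the requirements_file removal above, dependencies are now passed directly to log_model. A minimal sketch (the model object and version pins are illustrative):

python
import mlflow

model_info = mlflow.pytorch.log_model(
    my_torch_model,  # placeholder: a trained torch.nn.Module
    name="model",
    # Replaces the removed requirements_file argument:
    pip_requirements=["torch==2.3.0", "torchvision==0.18.0"],
    # Or, to append to the automatically inferred requirements instead:
    # extra_pip_requirements=["some-extra-package==1.0.0"],
)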

ModelInfo Changes

The signature_dict property has been removed from ModelInfo (#15367). Use the signature property instead.
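
If you previously consumed the dictionary form, the signature property returns a ModelSignature object that can be converted explicitly. A minimal sketch:

python
# model_info: the ModelInfo returned by log_model

# MLflow 2.x
sig_dict = model_info.signature_dict

# MLflow 3
signature = model_info.signature  # an mlflow.models.ModelSignature (or None)
sig_dict = signature.to_dict() if signature else None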

Evaluation API Changes

Baseline Model Comparison

The baseline_model parameter has been removed (#15362). Use the mlflow.validate_evaluation_results API to compare models:

python
# For classical ML models, use mlflow.models.evaluate
result_1 = mlflow.models.evaluate(model_1, data)
result_2 = mlflow.models.evaluate(model_2, data)

# Compare results
mlflow.validate_evaluation_results(result_1, result_2)
note

For GenAI evaluation, use mlflow.genai.evaluate with the new evaluation framework. See the GenAI Evaluation Migration Guide for details on migrating from the legacy LLM evaluation approach.

MetricThreshold Changes

Use greater_is_better instead of higher_is_better:

python
from mlflow.models import MetricThreshold

# MLflow 2.x
threshold = MetricThreshold(higher_is_better=True)

# MLflow 3
threshold = MetricThreshold(greater_is_better=True)

Custom Metrics

The custom_metrics parameter has been removed (#15361). Define custom metrics with make_metric and pass them via the extra_metrics parameter instead, as sketched below.
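
A minimal sketch of the replacement pattern, assuming the extra_metrics parameter and mlflow.models.make_metric; the metric itself and the eval_fn signature shown here should be verified against your MLflow version:

python
import numpy as np
import mlflow
from mlflow.models import make_metric


def custom_mae(predictions, targets, metrics):
    # Illustrative metric; the eval_fn signature follows the make_metric docs --
    # double-check it for the MLflow version you are running.
    return float(np.mean(np.abs(predictions - targets)))


mae_metric = make_metric(eval_fn=custom_mae, greater_is_better=False, name="custom_mae")

result = mlflow.models.evaluate(
    model_uri,  # placeholder: URI of a logged model
    eval_data,  # placeholder: evaluation dataset including the target column
    targets="label",
    model_type="regressor",
    extra_metrics=[mae_metric],  # replaces the removed custom_metrics parameter
)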

Explainer Logging

mlflow.evaluate no longer logs an explainer as a model by default. To enable:

python
mlflow.evaluate(
    ...,
    evaluator_config={
        "log_model_explainability": True,
        "log_explainer": True,
    },
)

Environment Variables

MLFLOW_GCS_DEFAULT_TIMEOUT has been removed (#15365). Configure timeouts using standard GCS client library approaches.

Migration FAQs

Can MLflow 3.x load resources created with MLflow 2.x?

Yes, MLflow 3.x can load resources (runs, models, traces, etc.) created with MLflow 2.x. The reverse is not supported: resources created with MLflow 3.x cannot be read by MLflow 2.x.

warning

When testing MLflow 3.x, use a separate environment to avoid conflicts with MLflow 2.x.

load_model throws ResourceNotFound error. What's wrong?

In MLflow 3.x, model artifacts are stored in a different location. Use the model URI returned by log_model:

python
# ❌ Don't use mlflow.get_artifact_uri("model")
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(my_model, name="model")
    mlflow.sklearn.load_model(mlflow.get_artifact_uri("model"))  # Fails!

# ✅ Use the model URI from log_model
with mlflow.start_run() as run:
    info = mlflow.sklearn.log_model(my_model, name="model")

# Recommended: use model_uri from result
mlflow.sklearn.load_model(info.model_uri)

# Alternative: use model_id
mlflow.sklearn.load_model(f"models:/{info.model_id}")

# Deprecated: use run_id (will be removed in a future release)
mlflow.sklearn.load_model(f"runs:/{run.info.run_id}/model")

How do I modify model requirements?

Use mlflow.models.update_model_requirements():

python
import mlflow


class DummyModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input: list[str]) -> list[str]:
        return model_input


model_info = mlflow.pyfunc.log_model(name="model", python_model=DummyModel())
mlflow.models.update_model_requirements(
    model_uri=model_info.model_uri,
    operation="add",
    requirement_list=["scikit-learn"],
)

How do I stay on MLflow 2.x?

Pin MLflow to the latest 2.x version:

bash
pip install 'mlflow<3'

Compatibility

We strongly recommend upgrading both client and server to MLflow 3.x for the best experience. A mismatch between client and server versions may lead to unexpected behavior.

Getting Help

For detailed guidance on migrating specific code, please consult: