MLflow 2.21.1

· 2 min read
MLflow maintainers

MLflow 2.21.1 is a patch release that introduces minor features and addresses some minor bugs.

Features:

  • Introduce support for logging evaluations within DSPy (#14962, @TomeHirata)
  • Add support for run creation when DSPy compile is executed (#14949, @TomeHirata)
  • Add support for building a SageMaker serving container that does not contain Java via the --install-java option (#14868, @rgangopadhya)

Bug fixes:

  • Fix an issue with trace ordering due to a timestamp conversion timezone bug (#15094, @orm011)
  • Fix a typo in the environment variable OTEL_EXPORTER_OTLP_PROTOCOL definition (#15008, @gabrielfu)
  • Fix an issue with logging Spark Datasources when using the evaluate API on Databricks shared and serverless clusters (#15077, @WeichenXu123)
  • Fix a rendering issue with displaying images from within the metric tab in the UI (#15034, @TomeHirata)

Documentation updates:

  • Add additional contextual information within the set_retriever_schema API docs (#15099, @smurching)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.21.0

· 4 min read
MLflow maintainers

We are excited to announce the release of MLflow 2.21.0! This release includes a number of significant features, enhancements, and bug fixes.

Bug fixes:

  • [Models] Fix infinite recursion error with warning handler module (#14954, @BenWilson2)
  • [Model Registry] Fix invalid type issue for ModelRegistry RestStore (#14980, @B-Step62)
  • [Tracking] Fix ExperimentViewRunsControlsActionsSelectTags not setting its loading state to false when a set-tag request fails (#14907, @harupy)
  • [Tracking] Fix a bug in tag creation where tag values containing ": " get truncated (#14896, @harupy)
  • [Tracking] Fix false alert from AMD GPU monitor (#14884, @B-Step62)
  • [Tracking] Fix mlflow.doctor to fall back to mlflow-skinny when mlflow is not found (#14782, @harupy)
  • [Models] Handle LangGraph breaking change (#14794, @B-Step62)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.20.3

· One min read
MLflow maintainers

MLflow 2.20.3 is a patch release that includes several major features and improvements.

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.20.2

· One min read
MLflow maintainers

MLflow 2.20.2 is a patch release that includes several bug fixes and features.

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.20.1

· One min read
MLflow maintainers

MLflow 2.20.1 is a patch release that includes several bug fixes and features:

Features:

  • spark_udf support for model signatures based on type hints (#14265, @serena-ruan)
  • Helper connectors to use ChatAgent with LangChain and LangGraph (#14215, @bbqiu)
  • Update the classifier evaluator to draw ROC/Lift curves for CatBoost models by default (#14333, @singh-kristian)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.20.0

· 3 min read
MLflow maintainers

Major New Features

  • 💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)

  • 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding a mlflow.bedrock.tracing or mlflow.groq.tracing call to your code. (#14018, @B-Step62, #14006, @anumita0203)

  • 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)

  • ⚡️ Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)

  • 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.

MLflow 2.20.0rc0

· 3 min read
MLflow maintainers

MLflow 2.20.0rc0 is a release candidate for 2.20.0. To install, run the following command:

pip install mlflow==2.20.0rc0

Major New Features

  • 💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)

  • 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding a mlflow.bedrock.tracing or mlflow.groq.tracing call to your code. (#14018, @B-Step62, #14006, @anumita0203)

  • 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)

  • ⚡️ Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)

  • 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)

Please try it out and report any issues on the issue tracker!

MLflow 2.19.0

· 4 min read
MLflow maintainers

2.19.0 (2024-12-11)

We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.

Major New Features

  • ChatModel enhancements - ChatModel now adopts ChatCompletionRequest and ChatCompletionResponse as its new schema. The predict_stream interface uses ChatCompletionChunk to deliver true streaming responses. Additionally, the custom_inputs and custom_outputs fields in ChatModel now utilize AnyType, enabling support for a wider variety of data types. Note: In a future version of MLflow, ChatParams (and by extension, ChatCompletionRequest) will have the default values for n, temperature, and stream removed. (#13782, #13857, @stevenchen-db)

  • Tracing improvements - MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex, and LangChain flavors. Tracing is also auto-enabled for MLflow evaluation across all supported flavors. (#13790, #13793, #13795, #13897, @B-Step62)

  • New Tracing Integrations - MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience. (#13903, @TomeHirata, #13851, @gabrielfu)

  • Any Type in model signature - MLflow now supports AnyType in model signature. It can be used to host any data types that were not supported before. (#13766, @serena-ruan)

Other Features:

  • [Tracking] Add update_current_trace API for adding tags to an active trace. (#13828, @B-Step62)
  • [Deployments] Update databricks deployments to support AI gateway & additional update endpoints (#13513, @djliden)
  • [Models] Support uv in mlflow.models.predict (#13824, @serena-ruan)
  • [Models] Add type hints support including pydantic models (#13924, @serena-ruan)
  • [Tracking] Add the trace.search_spans() method for searching spans within traces (#13984, @B-Step62)

Bug fixes:

  • [Tracking] Allow passing in spark connect dataframes in mlflow evaluate API (#13889, @WeichenXu123)
  • [Tracking] Fix mlflow.end_run inside an MLflow run context manager (#13888, @WeichenXu123)
  • [Scoring] Fix spark_udf conditional check on remote spark-connect client or Databricks Serverless (#13827, @WeichenXu123)
  • [Models] Allow changing max_workers for built-in LLM-as-a-Judge metrics (#13858, @B-Step62)
  • [Models] Support saving all langchain runnables using code-based logging (#13821, @serena-ruan)
  • [Model Registry] Return an empty array when DatabricksSDKModelsArtifactRepository.list_artifacts is called on a file (#14027, @shichengzhou-db)
  • [Tracking] Stringify param values in client.log_batch() (#14015, @B-Step62)
  • [Tracking] Remove deprecated squared parameter (#14028, @B-Step62)
  • [Tracking] Fix request/response field in the search_traces output (#13985, @B-Step62)

Documentation updates:

  • [Docs] Add Ollama and Instructor examples in tracing doc (#13937, @B-Step62)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.