MLflow 2.20.4
MLflow 2.20.4 is a small patch release that includes a bug fix:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.3 is a patch release that includes several major features and improvements
Features:
Bug fixes:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.2 is a patch release that includes several bug fixes and features
Features:
Bug fixes:
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.1 is a patch release that includes several bug fixes and features:
Features:
Bug fixes:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)
🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding an mlflow.bedrock.autolog() or mlflow.groq.autolog() call to your code. (#14018, @B-Step62, #14006, @anumita0203)
🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)
⚡️ Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)
🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)
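The type-hint-based signature feature can be illustrated, independently of MLflow's actual implementation, with standard-library typing introspection. The model class, helper names, and validation logic below are all hypothetical stand-ins; MLflow's real machinery lives in mlflow.pyfunc and is considerably more capable.

```python
from typing import List, get_type_hints

# Hypothetical stand-in for a PythonModel subclass; only the typed
# predict method matters for signature inference.
class MyModel:
    def predict(self, model_input: List[str]) -> List[str]:
        return [s.upper() for s in model_input]

def infer_signature_from_hints(predict_fn):
    """Derive a minimal input/output schema from predict's type hints."""
    hints = get_type_hints(predict_fn)
    return {"inputs": hints.get("model_input"), "outputs": hints.get("return")}

def validate_payload(payload, expected):
    """Check a payload against a List[...] hint (illustrative only)."""
    origin = getattr(expected, "__origin__", None)
    if origin is list:
        (elem_type,) = expected.__args__
        return isinstance(payload, list) and all(
            isinstance(x, elem_type) for x in payload
        )
    return isinstance(payload, expected)

sig = infer_signature_from_hints(MyModel.predict)
print(validate_payload(["a", "b"], sig["inputs"]))  # True
print(validate_payload([1, 2], sig["inputs"]))      # False
```

The appeal of the approach is that the type hints you would write anyway double as the signature, so there is no separate schema to keep in sync.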
Other Features:
- ChatAgent base class for defining custom Python agents (#13797, @bbqiu)
- context parameter made optional when calling a PythonModel instance (#14059, @serena-ruan)
- ChatModel (#14068, @stevenchen-db)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
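The one-line auto-tracing experience described above can be sketched, without touching MLflow's internals, as a patch that wraps an SDK client method and records a span per call. The client class, the autolog helper, and the TRACES store here are all illustrative assumptions, not MLflow's actual tracing implementation.

```python
import functools
import time

# Toy client standing in for an LLM SDK (e.g. a Bedrock or Groq client).
class ToyLLMClient:
    def invoke(self, prompt):
        return f"echo: {prompt}"

TRACES = []  # illustrative in-memory span store

def autolog(client_cls, method_name="invoke"):
    """Patch client_cls.method_name so every call records a span."""
    original = getattr(client_cls, method_name)

    @functools.wraps(original)
    def traced(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)
        TRACES.append({
            "name": f"{client_cls.__name__}.{method_name}",
            "inputs": args,
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result

    setattr(client_cls, method_name, traced)

autolog(ToyLLMClient)               # the "one line" of setup
print(ToyLLMClient().invoke("hi"))  # echo: hi
print(len(TRACES))                  # 1
```

The design point is that instrumentation is attached once at import/setup time, so existing model code keeps calling the client unchanged while every invocation is captured.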
MLflow 2.20.0rc0 is a release candidate for 2.20.0. To install, run the following command:
pip install mlflow==2.20.0rc0
Please try it out and report any issues on the issue tracker!
We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.
ChatModel enhancements - ChatModel now adopts ChatCompletionRequest and ChatCompletionResponse as its new schema. The predict_stream interface uses ChatCompletionChunk to deliver true streaming responses. Additionally, the custom_inputs and custom_outputs fields in ChatModel now utilize AnyType, enabling support for a wider variety of data types. Note: In a future version of MLflow, ChatParams (and by extension, ChatCompletionRequest) will have the default values for n, temperature, and stream removed. (#13782, #13857, @stevenchen-db)
Tracing improvements - MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex, and LangChain flavors. Tracing is also auto-enabled for MLflow evaluation across all supported flavors. (#13790, #13793, #13795, #13897, @B-Step62)
New Tracing Integrations - MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience. (#13903, @TomeHirata, #13851, @gabrielfu)
Any Type in model signature - MLflow now supports AnyType in model signature. It can be used to host any data types that were not supported before. (#13766, @serena-ruan)
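The shape of the new chat schema can be sketched with plain dataclasses. These are simplified, illustrative stand-ins: MLflow's real ChatCompletionRequest and ChatCompletionResponse classes live in mlflow.types.llm and carry more fields, and the free-form custom_inputs/custom_outputs dictionaries here merely gesture at what AnyType enables.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class ChatMessage:
    role: str
    content: str

@dataclass
class ChatCompletionRequest:
    messages: List[ChatMessage]
    temperature: float = 1.0
    # AnyType-style free-form data not covered by the fixed schema
    custom_inputs: Optional[Dict[str, Any]] = None

@dataclass
class ChatCompletionResponse:
    choices: List[ChatMessage]
    custom_outputs: Optional[Dict[str, Any]] = None

def predict(request: ChatCompletionRequest) -> ChatCompletionResponse:
    """Echo-style model showing the request/response round trip."""
    last = request.messages[-1].content
    return ChatCompletionResponse(
        choices=[ChatMessage(role="assistant", content=f"you said: {last}")],
        custom_outputs={"echoed": True},
    )

resp = predict(ChatCompletionRequest(messages=[ChatMessage("user", "hello")]))
print(resp.choices[0].content)  # you said: hello
```

Keeping the fixed chat fields separate from the custom_* dictionaries lets providers interoperate on the common schema while still passing arbitrary extra data through.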
Other Features:
- update_current_trace API for adding tags to an active trace (#13828, @B-Step62)
- trace.search_spans() method for searching spans within traces (#13984, @B-Step62)

Bug fixes:

- mlflow.end_run inside an MLflow run context manager (#13888, @WeichenXu123)

Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are excited to announce the release of MLflow 2.18.0! This release includes a number of significant features, enhancements, and bug fixes.
Python 3.8 has reached end-of-life. With official support dropped for this legacy version, MLflow now requires Python 3.9 as its minimum supported version.
Note: If you are currently using MLflow's ChatModel interface for authoring custom GenAI applications, please ensure that you have read the future breaking changes section below.
🦺 Fluent API Thread/Process Safety - MLflow's fluent APIs for tracking and the model registry have been overhauled to add support for both thread and multi-process safety. You are no longer forced to use the Client APIs for managing experiments, runs, and logging from within multiprocessing and threaded applications. (#13456, #13419, @WeichenXu123)
🧩 DSPy flavor - MLflow now supports logging, loading, and tracing of DSPy models, broadening the support for advanced GenAI authoring within MLflow. Check out the MLflow DSPy Flavor documentation to get started! (#13131, #13279, #13369, #13345, @chenmoneygithub, #13543, #13800, #13807, @B-Step62, #13289, @michael-berk)
🖥️ Enhanced Trace UI - MLflow Tracing's UI has undergone a significant overhaul to bring usability and quality of life updates to the experience of auditing and investigating the contents of GenAI traces, from enhanced span content rendering using markdown to a standardized span component structure. (#13685, #13357, #13242, @daniellok-db)
🚄 New Tracing Integrations - MLflow Tracing now supports DSPy, LiteLLM, and Google Gemini, enabling a one-line, fully automated tracing experience. These integrations unlock enhanced observability across a broader range of industry tools. Stay tuned for upcoming integrations and updates! (#13801, @TomeHirata, #13585, @B-Step62)
📊 Expanded LLM-as-a-Judge Support - MLflow now enhances its evaluation capabilities with support for additional providers, including Anthropic, Bedrock, Mistral, and TogetherAI, alongside existing providers like OpenAI. Users can now also configure proxy endpoints or self-hosted LLMs that follow the provider API specs by using the new proxy_url and extra_headers options. Visit the LLM-as-a-Judge documentation for more details! (#13715, #13717, @B-Step62)
⏰ Environment Variable Detection - As a helpful reminder for when you are deploying models, MLflow now detects and reminds users of environment variables set during model logging, ensuring they are configured for deployment. In addition to this, the mlflow.models.predict utility has also been updated to include these variables in serving simulations, improving pre-deployment validation. (#13584, @serena-ruan)
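The fluent-API thread safety described above hinges on keeping the "active run" per-thread rather than global. A minimal sketch of that idea, using only the standard library (the run structure, start_run, and log_metric names below are hypothetical, not MLflow's implementation):

```python
import contextvars
import threading
import uuid

# Per-context "active run" instead of a shared global.
_active_run = contextvars.ContextVar("active_run", default=None)

def start_run():
    """Create a run and make it active in the current context."""
    run = {"run_id": uuid.uuid4().hex, "metrics": {}}
    _active_run.set(run)
    return run["run_id"]

def log_metric(key, value):
    """Log to whichever run is active in *this* thread's context."""
    run = _active_run.get()
    if run is None:
        raise RuntimeError("no active run in this context")
    run["metrics"][key] = value

results = {}

def worker(idx):
    # Run the body in an isolated context so runs never collide
    # across threads.
    ctx = contextvars.copy_context()
    def body():
        run_id = start_run()
        log_metric("loss", idx * 0.1)
        results[idx] = (run_id, _active_run.get()["metrics"]["loss"])
    ctx.run(body)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len({run_id for run_id, _ in results.values()}))  # 4 distinct runs
```

Because each thread sees its own ContextVar value, fluent calls like log_metric resolve to the correct run without the caller passing run IDs around, which is what the Client APIs previously required.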
ChatModel Interface Updates - As part of a broader unification effort within MLflow and the services that rely on or deeply integrate with MLflow's GenAI features, we are working on a phased approach to creating a consistent, standard interface for custom GenAI application development and usage. In the first phase (planned for the next few releases of MLflow), we are marking several interfaces as deprecated, as they will be changing. These changes will be:
- ChatRequest → ChatCompletionRequest, to provide disambiguation for future planned request interfaces.
- ChatResponse → ChatCompletionResponse, for the same reason as the input interface.
- metadata fields within ChatRequest and ChatResponse → custom_inputs and custom_outputs, respectively.
- predict_stream will be updated to enable true streaming for custom GenAI applications. Currently, it returns a generator with synchronous outputs from predict. In a future release, it will return a generator of ChatCompletionChunks, enabling asynchronous streaming. While the API call structure will remain the same, the returned data payload will change significantly, aligning with LangChain's implementation.
- mlflow.models.rag_signatures will be deprecated, merging into the unified ChatCompletionRequest, ChatCompletionResponse, and ChatCompletionChunks.

Other Features:
- spark_udf when running on Databricks Serverless runtime, Databricks Connect, and prebuilt Python environments (#13276, #13496, @WeichenXu123)
- model_config parameter for pyfunc.spark_udf for customization of batch inference payload submission (#13517, @WeichenXu123)
- Documents (#13242, @daniellok-db)
- resources definitions for LangChain model logging (#13315, @sunishsheth2009)
- dependencies for Agent definitions (#13246, @sunishsheth2009)

Bug fixes:
- gc command when deleting experiments with logged datasets (#13741, @daniellok-db)
- LangChain's pyfunc predict input conversion (#13652, @serena-ruan)
- Optional dataclasses that define a model's signature (#13440, @bbqiu)
- LangChain's autologging thread-safety behavior (#13672, @B-Step62)
- role and index as required for chat schema (#13279, @chenmoneygithub)
- LangChain models (#13610, @WeichenXu123)

Documentation updates:
- model_config when logging models as code (#13631, @sunishsheth2009)
- code_paths model logging feature (#13702, @TomeHirata)
- SparkML log_model documentation with guidance on how to return probabilities from classification models (#13684, @WeichenXu123)

For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
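The planned predict_stream change, moving from a generator of synchronous predict outputs to a generator of chunks carrying deltas, can be sketched as follows. The chunk dataclass and predict_stream function here are illustrative only; MLflow's actual ChatCompletionChunk schema carries OpenAI-style choices and deltas.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for a streaming chunk type.
@dataclass
class ChatCompletionChunk:
    delta: str                           # incremental piece of the message
    finish_reason: Optional[str] = None  # "stop" on the final chunk

def predict_stream(prompt):
    """Yield incremental deltas instead of one synchronous response."""
    words = f"answer to: {prompt}".split()
    for i, word in enumerate(words):
        sep = "" if i == 0 else " "
        yield ChatCompletionChunk(delta=sep + word)
    yield ChatCompletionChunk(delta="", finish_reason="stop")

# Client side: concatenate deltas to rebuild the full message.
full = "".join(chunk.delta for chunk in predict_stream("hi"))
print(full)  # answer to: hi
```

The practical difference for callers is that the payload arrives piecewise: consumers must accumulate deltas (and watch finish_reason) rather than treating each yielded item as a complete response.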
MLflow 2.17.2 includes several major features and improvements
Features:
Bug fixes:
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.17.1 includes several major features and improvements
Features:
Bug fixes:
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.