MLflow 2.20.3 is a patch release that includes several major features and improvements.
Features:
Bug fixes:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.2 is a patch release that includes several bug fixes and features.
Features:
Bug fixes:
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.1 is a patch release that includes several bug fixes and features:
Features:
- `spark_udf` support for model signatures based on type hints (#14265, @serena-ruan)
- Helper connectors to use ChatAgent with LangChain and LangGraph (#14215, @bbqiu)
- Update the classifier evaluator to draw ROC/Lift curves for CatBoost models by default (#14333, @singh-kristian)
Bug fixes:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
Major New Features
- 💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your `PythonModel`'s `predict` function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)
- 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding an `mlflow.bedrock.autolog()` or `mlflow.groq.autolog()` call to your code. (#14018, @B-Step62, #14006, @anumita0203)
- 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)
- ⚡️ Faster Model Validation with `uv` Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the `mlflow.models.predict` API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)
- 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)
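The type-hint-based signature flow above can be sketched in plain Python. This is an illustrative stand-in, not MLflow's implementation: the class name, the validator, and the inspection step are all assumptions for demonstration; in real use you would subclass `mlflow.pyfunc.PythonModel` and let MLflow infer and enforce the signature from the hints.

```python
from typing import get_type_hints

class EchoModel:
    """Stand-in for an mlflow.pyfunc.PythonModel subclass."""

    def predict(self, model_input: list[str]) -> list[str]:
        # The list[str] hints are what a signature-inference step inspects.
        return [s.upper() for s in model_input]

# Read the hints the way signature inference could:
hints = get_type_hints(EchoModel.predict)

def validate_payload(payload: object) -> None:
    """Minimal payload check mimicking validation against the inferred signature."""
    if hints["model_input"] == list[str]:
        if not (isinstance(payload, list) and all(isinstance(x, str) for x in payload)):
            raise TypeError("payload does not match the list[str] signature")

validate_payload(["hello"])
print(EchoModel().predict(["hello"]))  # ['HELLO']
```

The same idea generalizes to richer hints (e.g. pydantic models), where validation can also coerce raw dict payloads into typed objects before `predict` runs.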
Other Features:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.20.0rc0 is a release candidate for 2.20.0. To install, run the following command:
pip install mlflow==2.20.0rc0
Major New Features
- 💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your `PythonModel`'s `predict` function, and validating input data payloads against it. (#14182, #14168, #14130, #14100, #14099, @serena-ruan)
- 🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding an `mlflow.bedrock.autolog()` or `mlflow.groq.autolog()` call to your code. (#14018, @B-Step62, #14006, @anumita0203)
- 🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. (#13955, @daniellok-db)
- ⚡️ Faster Model Validation with `uv` Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the `mlflow.models.predict` API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)
- 🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)
Other Features:
Please try it out and report any issues on the issue tracker!
2.19.0 (2024-12-11)
We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.
Major New Features
- ChatModel enhancements - ChatModel now adopts `ChatCompletionRequest` and `ChatCompletionResponse` as its new schema. The `predict_stream` interface uses `ChatCompletionChunk` to deliver true streaming responses. Additionally, the `custom_inputs` and `custom_outputs` fields in ChatModel now utilize `AnyType`, enabling support for a wider variety of data types. Note: In a future version of MLflow, `ChatParams` (and by extension, `ChatCompletionRequest`) will have the default values for `n`, `temperature`, and `stream` removed. (#13782, #13857, @stevenchen-db)
- Tracing improvements - MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex, and LangChain flavors. Tracing is also auto-enabled for MLflow evaluation across all supported flavors. (#13790, #13793, #13795, #13897, @B-Step62)
- New Tracing Integrations - MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience. (#13903, @TomeHirata, #13851, @gabrielfu)
- Any Type in model signature - MLflow now supports `AnyType` in model signatures. It can be used to host any data types that were not supported before. (#13766, @serena-ruan)
Other Features:
- [Tracking] Add `update_current_trace` API for adding tags to an active trace. (#13828, @B-Step62)
- [Deployments] Update databricks deployments to support AI gateway & additional update endpoints (#13513, @djliden)
- [Models] Support uv in mlflow.models.predict (#13824, @serena-ruan)
- [Models] Add type hints support including pydantic models (#13924, @serena-ruan)
- [Tracking] Add the `trace.search_spans()` method for searching spans within traces (#13984, @B-Step62)
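The span-search addition above can be illustrated with a pure-Python stand-in. The `Span` dataclass and `search_spans` function here are simplified assumptions for demonstration; the real method lives on MLflow's trace object.

```python
from dataclasses import dataclass

@dataclass
class Span:
    """Simplified stand-in for an MLflow trace span."""
    name: str
    span_type: str

def search_spans(spans, name=None, span_type=None):
    """Filter spans by name and/or type, mirroring the idea behind trace.search_spans()."""
    return [
        s for s in spans
        if (name is None or s.name == name)
        and (span_type is None or s.span_type == span_type)
    ]

trace_spans = [Span("retrieve_docs", "RETRIEVER"), Span("generate", "LLM")]
print(search_spans(trace_spans, span_type="LLM"))  # [Span(name='generate', span_type='LLM')]
```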
Bug fixes:
- [Tracking] Allow passing in spark connect dataframes in mlflow evaluate API (#13889, @WeichenXu123)
- [Tracking] Fix `mlflow.end_run` inside an MLflow run context manager (#13888, @WeichenXu123)
- [Scoring] Fix spark_udf conditional check on remote spark-connect client or Databricks Serverless (#13827, @WeichenXu123)
- [Models] Allow changing max_workers for built-in LLM-as-a-Judge metrics (#13858, @B-Step62)
- [Models] Support saving all langchain runnables using code-based logging (#13821, @serena-ruan)
- [Model Registry] Return an empty array when `DatabricksSDKModelsArtifactRepository.list_artifacts` is called on a file (#14027, @shichengzhou-db)
- [Tracking] Stringify param values in client.log_batch() (#14015, @B-Step62)
- [Tracking] Remove deprecated squared parameter (#14028, @B-Step62)
- [Tracking] Fix request/response field in the search_traces output (#13985, @B-Step62)
Documentation updates:
- [Docs] Add Ollama and Instructor examples in tracing doc (#13937, @B-Step62)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
We are excited to announce the release of MLflow 2.18.0! This release includes a number of significant features, enhancements, and bug fixes.
Python Version Update
Python 3.8 is now at an end-of-life point. With official support being dropped for this legacy version, MLflow now requires Python 3.9 as a minimum supported version.
Note: If you are currently using MLflow's ChatModel interface for authoring custom GenAI applications, please ensure that you have read the future breaking changes section below.
Major New Features
- 🦺 Fluent API Thread/Process Safety - MLflow's fluent APIs for tracking and the model registry have been overhauled to add support for both thread and multi-process safety. You are now no longer forced to use the Client APIs for managing experiments, runs, and logging from within multiprocessing and threaded applications. (#13456, #13419, @WeichenXu123)
- 🧩 DSPy flavor - MLflow now supports logging, loading, and tracing of DSPy models, broadening the support for advanced GenAI authoring within MLflow. Check out the MLflow DSPy Flavor documentation to get started! (#13131, #13279, #13369, #13345, @chenmoneygithub, #13543, #13800, #13807, @B-Step62, #13289, @michael-berk)
- 🖥️ Enhanced Trace UI - MLflow Tracing's UI has undergone a significant overhaul to bring usability and quality-of-life updates to the experience of auditing and investigating the contents of GenAI traces, from enhanced span content rendering using markdown to a standardized span component structure. (#13685, #13357, #13242, @daniellok-db)
- 🚄 New Tracing Integrations - MLflow Tracing now supports DSPy, LiteLLM, and Google Gemini, enabling a one-line, fully automated tracing experience. These integrations unlock enhanced observability across a broader range of industry tools. Stay tuned for upcoming integrations and updates! (#13801, @TomeHirata, #13585, @B-Step62)
- 📊 Expanded LLM-as-a-Judge Support - MLflow now enhances its evaluation capabilities with support for additional providers, including `Anthropic`, `Bedrock`, `Mistral`, and `TogetherAI`, alongside existing providers like `OpenAI`. Users can now also configure proxy endpoints or self-hosted LLMs that follow the provider API specs by using the new `proxy_url` and `extra_headers` options. Visit the LLM-as-a-Judge documentation for more details! (#13715, #13717, @B-Step62)
- ⏰ Environment Variable Detection - As a helpful reminder for when you are deploying models, MLflow now detects and reminds users of environment variables set during model logging, ensuring they are configured for deployment. In addition, the `mlflow.models.predict` utility has been updated to include these variables in serving simulations, improving pre-deployment validation. (#13584, @serena-ruan)
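To illustrate why the thread-safety overhaul above matters, here is a minimal sketch of per-thread active-run state built on thread-local storage. This is one plausible mechanism for demonstration, not MLflow's actual internals; `start_run` and `log_metric` here are toy stand-ins for the fluent APIs.

```python
import threading

_state = threading.local()  # each thread sees its own "active run"

def start_run(run_name: str) -> None:
    _state.run_name = run_name

def log_metric(key: str, value: float, sink: dict) -> None:
    # Metrics land in the run owned by the calling thread, not a shared global.
    sink[_state.run_name] = (key, value)

results: dict = {}

def worker(i: int) -> None:
    start_run(f"run-{i}")
    log_metric("score", float(i), results)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['run-0', 'run-1', 'run-2', 'run-3']
```

Without per-thread state, all four workers would race on a single global "active run" and metrics would interleave across the wrong runs.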
Breaking Changes to ChatModel Interface
- ChatModel Interface Updates - As part of a broader unification effort within MLflow and services that rely on or deeply integrate with MLflow's GenAI features, we are working on a phased approach to creating a consistent and standard interface for custom GenAI application development and usage. In the first phase (planned for the next few releases of MLflow), we are marking several interfaces as deprecated, as they will be changing. These changes will be:
  - Renaming of Interfaces:
    - `ChatRequest` → `ChatCompletionRequest`, to provide disambiguation for future planned request interfaces.
    - `ChatResponse` → `ChatCompletionResponse`, for the same reason as the input interface.
    - `metadata` fields within `ChatRequest` and `ChatResponse` → `custom_inputs` and `custom_outputs`, respectively.
  - Streaming Updates: `predict_stream` will be updated to enable true streaming for custom GenAI applications. Currently, it returns a generator with synchronous outputs from `predict`. In a future release, it will return a generator of `ChatCompletionChunk` objects, enabling asynchronous streaming. While the API call structure will remain the same, the returned data payload will change significantly, aligning with LangChain's implementation.
  - Legacy Dataclass Deprecation: Dataclasses in `mlflow.models.rag_signatures` will be deprecated, merging into the unified `ChatCompletionRequest`, `ChatCompletionResponse`, and `ChatCompletionChunk` dataclasses.
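The streaming contract described above can be sketched with a stand-in chunk type. The names here are assumptions inferred from the deprecation note, not the final API: a true-streaming `predict_stream` yields partial content as it is produced, and consumers accumulate the deltas.

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class ChatCompletionChunk:
    """Stand-in for the chunk dataclass predict_stream is slated to yield."""
    delta: str

def predict_stream(prompt: str) -> Iterator[ChatCompletionChunk]:
    # Yield partial content incrementally instead of one synchronous payload.
    for token in prompt.split():
        yield ChatCompletionChunk(delta=token)

# Consumers accumulate deltas as they arrive:
text = " ".join(chunk.delta for chunk in predict_stream("streaming chat output"))
print(text)  # streaming chat output
```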
Other Features:
- [Evaluate] Add Huggingface BLEU metrics to MLflow Evaluate (#12799, @nebrass)
- [Models / Databricks] Add support for `spark_udf` when running on Databricks Serverless runtime, Databricks Connect, and prebuilt Python environments (#13276, #13496, @WeichenXu123)
- [Scoring] Add a `model_config` parameter for `pyfunc.spark_udf` for customization of batch inference payload submission (#13517, @WeichenXu123)
- [Tracing] Standardize retriever span outputs to a list of MLflow `Document`s (#13242, @daniellok-db)
- [UI] Add support for visualizing and comparing nested parameters within the MLflow UI (#13012, @jescalada)
- [UI] Add support for comparing logged artifacts within the Compare Run page in the MLflow UI (#13145, @jescalada)
- [Databricks] Add support for `resources` definitions for `LangChain` model logging (#13315, @sunishsheth2009)
- [Databricks] Add support for defining multiple retrievers within `dependencies` for Agent definitions (#13246, @sunishsheth2009)
Bug fixes:
- [Database] Cascade deletes to datasets when deleting experiments to fix a bug in MLflow's `gc` command when deleting experiments with logged datasets (#13741, @daniellok-db)
- [Models] Fix a bug with `LangChain`'s `pyfunc` predict input conversion (#13652, @serena-ruan)
- [Models] Fix signature inference for subclasses and `Optional` dataclasses that define a model's signature (#13440, @bbqiu)
- [Tracking] Fix an issue with async logging batch splitting validation rules (#13722, @WeichenXu123)
- [Tracking] Fix an issue with `LangChain`'s autologging thread-safety behavior (#13672, @B-Step62)
- [Tracking] Disable support for running Spark autologging in a threadpool due to limitations in Spark (#13599, @WeichenXu123)
- [Tracking] Mark `role` and `index` as required for chat schema (#13279, @chenmoneygithub)
- [Tracing] Handle raw response in OpenAI autolog (#13802, @harupy)
- [Tracing] Fix a bug with tracing source run behavior when running inference with multithreading on `LangChain` models (#13610, @WeichenXu123)
Documentation updates:
- [Docs] Add docstring warnings for upcoming changes to ChatModel (#13730, @stevenchen-db)
- [Docs] Add a contributor's guide for implementing tracing integrations (#13333, @B-Step62)
- [Docs] Add guidance on the use of `model_config` when logging models as code (#13631, @sunishsheth2009)
- [Docs] Add documentation for the use of custom library artifacts with the `code_paths` model logging feature (#13702, @TomeHirata)
- [Docs] Improve `SparkML` `log_model` documentation with guidance on how to return probabilities from classification models (#13684, @WeichenXu123)
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
MLflow 2.17.2 includes several major features and improvements
Features:
Bug fixes:
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
2.17.1 (2024-10-25)
MLflow 2.17.1 includes several major features and improvements
Features:
Bug fixes:
- [Tracking] Fix tool span inputs/outputs format in LangChain autolog (#13527, @B-Step62)
- [Models] Fix code_path handling for LlamaIndex flavor (#13486, @B-Step62)
- [Models] Fix signature inference for subclass and optional dataclasses (#13440, @bbqiu)
- [Tracking] Fix the error thrown when `set_retriever_schema` is called twice (#13422, @sunishsheth2009)
- [Tracking] Fix dependency extraction from RunnableCallables (#13423, @aravind-segu)
Documentation updates:
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.
2.17.0 (2024-10-11)
We are excited to announce the release of MLflow 2.17.0! This release includes several enhancements to MLflow's ChatModel interface, extending its versatility for handling custom GenAI application use cases. Additionally, we've improved the tracing UI to provide a structured output for retrieved documents, enhancing the ability to read the contents of those documents within the UI.
We're also starting work on improving both the utility and the versatility of MLflow's evaluate functionality for GenAI, initially with support for callable GenAI evaluation metrics.
Major Features and Notifications
- ChatModel enhancements - As the GenAI-focused 'cousin' of `PythonModel`, `ChatModel` is getting some sizable functionality extensions: native support for tool calling (a requirement for creating a custom agent); simpler conversions to the internal dataclass constructs needed to interface with `ChatModel`, via the introduction of `from_dict` methods on all data structures; the addition of a `metadata` field to allow for full input payload customization; handling of the new `refusal` response type; and inclusion of the interface type in the response structure to allow for greater integration compatibility. (#13191, #13180, #13143, @daniellok-db, #13102, #13071, @BenWilson2)
- Callable GenAI Evaluation Metrics - As the initial step in a much broader expansion of the functionality of `mlflow.evaluate` for GenAI use cases, we've converted the GenAI evaluation metrics to be callable. This allows you to use them directly in packages that support callable GenAI evaluation metrics, as well as making it simpler to debug individual responses when prototyping solutions. (#13144, @serena-ruan)
- Audio file support in the MLflow UI - You can now directly 'view' audio files that have been logged and listen to them from within the MLflow UI's artifact viewer pane.
- MLflow AI Gateway is no longer deprecated - We've decided to revert our deprecation of the AI Gateway feature. We had renamed it to the MLflow Deployments Server, but have reconsidered and reverted the naming and namespace back to the original configuration.
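The callable-metric idea above can be illustrated with a toy metric. This is a hypothetical stand-in, not one of MLflow's built-in GenAI metrics; the point is that a metric exposed as a plain callable can be invoked directly on a handful of examples while prototyping.

```python
def exact_match(predictions, targets):
    """A GenAI-style evaluation metric as a plain callable: returns the fraction
    of predictions that exactly match their targets."""
    scores = [1.0 if p == t else 0.0 for p, t in zip(predictions, targets)]
    return sum(scores) / len(scores)

# Calling the metric directly makes single-response debugging straightforward,
# with no evaluation harness required:
print(exact_match(["yes", "no"], ["yes", "maybe"]))  # 0.5
```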
Features:
- [Tracing] Standardize retriever span outputs within MLflow tracing (#13242, @daniellok-db)
- [Models] Add support for LlamaIndex `Workflows` objects to be serialized when calling `log_model()` (#13277, #13305, #13336, @B-Step62)
- [Models] Add tool calling support for ChatModel (#13191, @daniellok-db)
- [Models] Add `from_dict()` function to ChatModel dataclasses (#13180, @daniellok-db)
- [Models] Add metadata field for ChatModel (#13143, @daniellok-db)
- [Models] Update ChatCompletionResponse to populate object type (#13102, @BenWilson2)
- [Models] Add support for LLM response refusal (#13071, @BenWilson2)
- [Models] Add support for resources to be passed in via `langchain.log_model()` (#13315, @sunishsheth2009)
- [Tracking] Add support for setting multiple retrievers' schema via `set_retriever_schema` (#13246, @sunishsheth2009)
- [Eval] Make Evaluation metrics callable (#13144, @serena-ruan)
- [UI] Add audio support to artifact viewer UI (#13017, @sydneyw-spotify)
- [Databricks] Add support for route_optimized parameter in databricks deployment client (#13222, @prabhatkgupta)
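The `from_dict()` additions above can be sketched with a simplified chat dataclass. This is a stand-in for illustration; the real dataclasses ship with MLflow's ChatModel interface and carry more fields.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    """Simplified stand-in for a ChatModel message dataclass."""
    role: str
    content: str

    @classmethod
    def from_dict(cls, data: dict) -> "ChatMessage":
        # Convert a raw payload dict into the typed object the interface expects.
        return cls(role=data["role"], content=data["content"])

msg = ChatMessage.from_dict({"role": "user", "content": "What is MLflow?"})
print(msg.role)  # user
```

Helpers like this remove the boilerplate of manually unpacking request payloads into the internal dataclass constructs before calling `predict`.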
Bug fixes:
- [Tracking] Fix tracing for LangGraph (#13215, @B-Step62)
- [Tracking] Fix an issue with `presigned_url_artifact` requests being in the wrong format (#13366, @WeichenXu123)
- [Models] Update Databricks dependency extraction functionality to work with the `langchain-databricks` partner package. (#13266, @B-Step62)
- [Model Registry] Fix retry and credential refresh issues with artifact downloads from the model registry (#12935, @rohitarun-db)
- [Tracking] Fix LangChain autologging so that langchain-community is not required for partner packages (#13172, @B-Step62)
- [Artifacts] Fix issues with file removal for the local artifact repository (#13005, @rzalawad)
Documentation updates:
- [Docs] Add guide for building custom GenAI apps with ChatModel (#13207, @BenWilson2)
- [Docs] Add updates to the MLflow AI Gateway documentation (#13217, @daniellok-db)
- [Docs] Remove MLflow AI Gateway deprecation status (#13153, @BenWilson2)
- [Docs] Add contribution guide for MLflow tracing integrations (#13333, @B-Step62)
- [Docs] Add documentation regarding the `run_id` parameter within the `search_trace` API (#13251, @B-Step62)
Please try it out and report any issues on the issue tracker.