New Features
Looking to learn about significant new releases in MLflow?
Find out about the details of major features, changes, and deprecations below.
MLflow Transformers Feature Enhancements
The transformers flavor in MLflow has received a significant feature overhaul.
- All supported pipeline types can now be logged without restriction
- Pipelines using foundation models can now be logged without copying the large model weights
PEFT (Parameter-Efficient Fine-Tuning) support
MLflow now natively supports PEFT (Parameter-Efficient Fine-Tuning)
models in the Transformers flavor. PEFT unlocks significantly more efficient model fine-tuning processes such as LoRA, QLoRA, and Prompt Tuning. Check out
the new QLoRA fine-tuning tutorial to learn how to
build your own cutting-edge models with MLflow and PEFT!
ChatModel Pyfunc Subclass Added
OpenAI-compatible chat models are now easier than ever to build in MLflow! ChatModel is a new Pyfunc subclass that makes it easy to deploy and serve chat models with MLflow.
Check out the new tutorial on building an OpenAI-compatible chat model using TinyLlama-1.1B-Chat!
Overhaul of MLflow Tracking UI for Deep Learning workflows
We've listened to your feedback and have added a wealth of new UI features designed to empower and
simplify the process of evaluating DL model training runs. Be sure to upgrade your tracking server to
benefit from all of the new UI enhancements today!
Automated model checkpointing for Deep Learning model training
When training Deep Learning models with PyTorch Lightning
or TensorFlow with Keras, automatic model checkpointing can be enabled,
preserving model state during long-running training and allowing training to resume if
an issue is encountered.
Mistral AI added as an MLflow Deployments Provider
The MLflow Deployments Server can now
accept Mistral AI endpoints. Give their models a try today!
Keras 3 is now supported in MLflow
You can now log and deploy models in the new Keras 3 format, allowing you
to work with the TensorFlow, PyTorch, or JAX backends through a new high-level, easy-to-use suite of APIs.
MLflow now has support for OpenAI SDK 1.x
We've updated flavors that interact with the OpenAI SDK, bringing full support for the API changes with the 1.x release.
MLflow Site Overhaul

MLflow has a new homepage that has been completely modernized. Check it out today!
LangChain Autologging Support

Autologging support for LangChain is now available. Try it out the next time
you're building a Generative AI application with LangChain!
Object and Array Support for complex Model Signatures
Model signatures now natively support complex input types through the new
Array and Object types.
Direct Access to OpenAI through the MLflow Deployments API

MLflow Deployments now supports direct access to OpenAI services.
MLflow Gateway renamed to MLflow Deployments Server

The feature previously known as the MLflow Gateway has been refactored into the MLflow Deployments Server.
MLflow Docs Overhaul
The MLflow docs are getting a facelift with added content, tutorials, and guides. Stay tuned for further improvements to the site!
Updated Model Registry UI
A new opt-in Model Registry UI has been built that uses Aliases and Tags for managing model development. See
more about the new UI workflow in the docs.
Spark Connect support
You can now log, save, and load models trained using Spark Connect. Try out Spark 3.5 and the MLflow integration today!
AI21 Labs added as an MLflow Gateway provider
You can now use the MLflow AI Gateway to connect to LLMs hosted by AI21 Labs.
Amazon Bedrock added as an MLflow Gateway provider
You can now use the MLflow AI Gateway to connect to LLMs hosted by AWS's Bedrock service.
PaLM 2 added as an MLflow Gateway provider

You can now use the MLflow AI Gateway to connect to LLMs hosted by Google's PaLM 2 service.
Hugging Face TGI added as an MLflow Gateway provider
You can self-host your own transformers-based models from the Hugging Face Hub with TGI and connect
to them directly through the AI Gateway.
LLM evaluation viewer added to MLflow UI
You can view your LLM evaluation results directly from the MLflow UI.
Introducing the Prompt Engineering UI

Link your MLflow Tracking Server with your MLflow AI Gateway Server to experiment, evaluate, and construct
prompts that can be compared amongst different providers without writing a single line of code.
Cloudflare R2 now supported as an artifact store
Cloudflare's R2 storage backend is now supported for use as an artifact store. To learn more about
R2 and explore what is possible, read the Cloudflare docs.
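Because R2 exposes an S3-compatible API, configuration can follow MLflow's S3-compatible-storage pattern. A rough sketch (the `<account_id>`, key placeholders, and bucket name are hypothetical; consult the MLflow and Cloudflare docs for your exact endpoint and credentials):

```bash
# Point MLflow's S3 client at the R2 endpoint (S3-compatible API).
export MLFLOW_S3_ENDPOINT_URL="https://<account_id>.r2.cloudflarestorage.com"
export AWS_ACCESS_KEY_ID="<r2_access_key_id>"
export AWS_SECRET_ACCESS_KEY="<r2_secret_access_key>"

# Use an R2 bucket as the artifact destination via the s3:// scheme.
mlflow server --artifacts-destination s3://my-mlflow-artifacts
```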
Params support for PyFunc Models
PyFunc models now support passing parameters at inference time. With this new feature,
you can define the allowable keys, with default values, for any parameters that you would like
consumers of your model to be able to override. This is particularly useful for LLMs, where you
might want to let users adjust commonly modified parameters for a model, such as token counts and temperature.
MLflow Serving support added to MLflow AI Gateway
The MLflow AI Gateway now supports defining an MLflow serving endpoint as a provider. With this
new feature, you can serve any OSS transformers model that conforms to the
completions or embeddings route type
definitions.
Try it out today with our end-to-end example.
Introducing the MLflow AI Gateway
We're excited to announce the newest top-level component in the MLflow ecosystem: The AI Gateway.
With this new feature, you can create a single access point to many of the most popular LLM SaaS services available now,
simplifying interfaces, managing credentials, and providing a unified standard set of APIs to reduce the complexity of
building products and services around LLMs.
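As a rough sketch of what a gateway configuration might look like (the route name, model choice, and key reference are illustrative; see the AI Gateway docs for the authoritative configuration schema):

```yaml
routes:
  - name: completions
    route_type: llm/v1/completions
    model:
      provider: openai
      name: gpt-3.5-turbo
      config:
        openai_api_key: $OPENAI_API_KEY
```

A config file along these lines is then supplied when starting the gateway, e.g. `mlflow gateway start --config-path config.yaml`.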
MLflow Evaluate now supports LLMs
You can now use MLflow evaluate to compare results from your favorite LLMs on a fixed prompt.
With support for many standard LLM evaluation metrics built directly into the API, the featured
LLM modeling tasks of text summarization, text classification, question answering, and text generation allow you
to view the results of submitted text across multiple models in a single UI element.
Chart View added to the MLflow UI
You can now visualize parameters and metrics across multiple runs as a chart on the runs table.