MLflow Models
An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools—for example, real-time serving through a REST API or batch inference on Apache Spark. The format defines a convention that lets you save a model in different “flavors” that can be understood by different downstream tools.
Storage Format
Each MLflow Model is a directory containing arbitrary files, together with an MLmodel file in the root of the directory that can define multiple flavors that the model can be viewed in.
Flavors are the key concept that makes MLflow Models powerful: they are a convention that deployment tools can use to understand the model, which makes it possible to write tools that work with models from any ML library without having to integrate each tool with each library. MLflow defines several “standard” flavors that all of its built-in deployment tools support, such as a “Python function” flavor that describes how to run the model as a Python function. However, libraries can also define and use other flavors. For example, MLflow’s mlflow.sklearn library allows loading models back as a scikit-learn Pipeline object for use in code that is aware of scikit-learn, or as a generic Python function for use in tools that just need to apply the model (for example, the mlflow sagemaker tool for deploying models to Amazon SageMaker).
All of the flavors that a particular model supports are defined in its MLmodel file in YAML format. For example, mlflow.sklearn outputs models as follows:
# Directory written by mlflow.sklearn.save_model(model, "my_model")
my_model/
├── MLmodel
└── model.pkl
And its MLmodel file describes two flavors:
time_created: 2018-05-25T17:28:53.35
flavors:
  sklearn:
    sklearn_version: 0.19.1
    pickled_model: model.pkl
  python_function:
    loader_module: mlflow.sklearn
This model can then be used with any tool that supports either the sklearn or python_function model flavor. For example, the mlflow sklearn command can serve a model with the sklearn flavor:
mlflow sklearn serve my_model
In addition, the mlflow sagemaker command-line tool can package and deploy models to AWS SageMaker as long as they support the python_function flavor:
mlflow sagemaker deploy -m my_model [other options]
Fields in the MLmodel Format
Apart from a flavors field listing the model flavors, the MLmodel YAML format can contain the following fields:
- time_created: Date and time when the model was created, in UTC ISO 8601 format.
- run_id: ID of the run that created the model, if the model was saved using MLflow Tracking.
Model API
You can save and load MLflow Models in multiple ways. First, MLflow includes integrations with several common libraries. For example, mlflow.sklearn contains save_model, log_model, and load_model functions for scikit-learn models (a short sketch of this route appears after the list below). Second, you can use the mlflow.models.Model class to create and write models. This class has four key functions:
- add_flavor to add a flavor to the model. Each flavor has a string name and a dictionary of key-value attributes, where the values can be any object that can be serialized to YAML.
- save to save the model to a local directory.
- log to log the model as an artifact in the current run using MLflow Tracking.
- load to load a model from a local directory or from an artifact in a previous run.
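For example, a minimal sketch of the library-integration route using mlflow.sklearn (the model and paths are illustrative):

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small illustrative model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression().fit(X, y)

# Save the model to a local directory in MLflow format.
mlflow.sklearn.save_model(model, "my_model")

# Or log it as an artifact of the current MLflow Tracking run.
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model")

# Load the saved model back as a scikit-learn object.
reloaded = mlflow.sklearn.load_model("my_model")
print(reloaded.predict(X[:5]))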
Built-In Model Flavors
MLflow provides several standard flavors that might be useful in your applications. Specifically, many of its deployment tools support these flavors, so you can export your own model in one of these flavors to benefit from all these tools.
Python Function (python_function)
The python_function model flavor defines a generic filesystem format for Python models and provides utilities for saving and loading models to and from this format. The format is self-contained in the sense that it includes all the information necessary to load and use a model. Dependencies are stored either directly with the model or referenced via a Conda environment.
The convention for python_function models is to have a predict method or function with the following signature:
predict(data: pandas.DataFrame) -> [pandas.DataFrame | numpy.array]
Other MLflow components expect python_function models to follow this convention.
The python_function model format is defined as a directory structure containing all required data, code, and configuration:
./dst-path/
./MLmodel: configuration
<code>: code packaged with the model (specified in the MLmodel file)
<data>: data packaged with the model (specified in the MLmodel file)
<env>: Conda environment definition (specified in the MLmodel file)
A python_function model directory must contain an MLmodel file in its root with “python_function” format and the following parameters:
- loader_module [required]: Python module that can load the model. Expected to be a module identifier (for example, mlflow.sklearn) importable via importlib.import_module. The imported module must contain a function with the following signature: _load_pyfunc(path: string) -> <pyfunc model>. The path argument is specified by the data parameter and may refer to a file or directory (a hypothetical loader sketch follows this list).
- code [optional]: A relative path to a directory containing the code packaged with this model. All files and directories inside this directory are added to the Python path prior to importing the model loader.
- data [optional]: A relative path to a file or directory containing model data. The path is passed to the model loader.
- env [optional]: A relative path to an exported Conda environment. If present, this environment is activated prior to running the model.
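To illustrate the loader_module contract, here is a hypothetical loader module; the module name, wrapper class, and pickle-based storage are invented for this sketch:

# my_loader.py - a hypothetical module satisfying the loader_module contract
import pickle

class _PickleWrapper:
    """Exposes the pyfunc predict(data: pandas.DataFrame) convention."""
    def __init__(self, model):
        self._model = model

    def predict(self, data):
        return self._model.predict(data)

def _load_pyfunc(path):
    # `path` is the file or directory given by the MLmodel `data` parameter.
    with open(path, "rb") as f:
        return _PickleWrapper(pickle.load(f))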
Example
tree example/sklearn_iris/mlruns/run1/outputs/linear-lr
├── MLmodel
├── code
│   └── sklearn_iris.py
├── data
│   └── model.pkl
└── mlflow_env.yml
cat example/sklearn_iris/mlruns/run1/outputs/linear-lr/MLmodel
python_function:
  code: code
  data: data/model.pkl
  loader_module: mlflow.sklearn
  env: mlflow_env.yml
  main: sklearn_iris
For more information, see mlflow.pyfunc.
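As a usage sketch, the example model above could be loaded and applied as a generic Python function (the input columns are illustrative):

import pandas as pd
import mlflow.pyfunc

# Load the model via its python_function flavor.
model = mlflow.pyfunc.load_pyfunc("example/sklearn_iris/mlruns/run1/outputs/linear-lr")

# Per the pyfunc convention: DataFrame in, predictions out.
input_df = pd.DataFrame([[5.1, 3.5, 1.4, 0.2]],
                        columns=["sepal_length", "sepal_width",
                                 "petal_length", "petal_width"])
print(model.predict(input_df))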
H2O (h2o)
The H2O model flavor enables logging and loading H2O models. Models are saved using mlflow.h2o.save_model. Using mlflow.h2o.log_model also produces a valid Python Function flavor.
When loading an H2O model as a PyFunc model, h2o.init(...) is called. Therefore, the correct version of h2o(-py) must be available in the environment. The arguments given to h2o.init(...) can be customized in model.h2o/h2o.yaml under the key init. For more information, see mlflow.h2o.
Keras (keras)
The keras model flavor enables logging and loading Keras models. Models are saved in HDF5 format via the model save functionality provided by Keras. Additionally, Keras models can be loaded back as a Python Function. For more information, see mlflow.keras.
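As a brief sketch, logging a Keras model might look like the following; the model definition here is illustrative:

import mlflow
import mlflow.keras
from keras.models import Sequential
from keras.layers import Dense

# A trivial illustrative model.
model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer="adam", loss="mse")

# Log the model; it is stored in HDF5 format and also gains a pyfunc flavor.
with mlflow.start_run():
    mlflow.keras.log_model(model, "model")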
MLeap (mleap)
The mleap model flavor supports saving models using the MLeap persistence mechanism. A companion module for loading MLflow models with the MLeap flavor format is available in the mlflow/java package. For more information, see mlflow.mleap.
PyTorch (pytorch)
The pytorch model flavor enables logging and loading PyTorch models. The model is stored entirely in .pth format using the torch.save(model) method. Given a directory containing a saved model, you can log the model to MLflow via log_saved_model. The saved model can then be loaded for inference via mlflow.pyfunc.load_pyfunc(). For more information, see mlflow.pytorch.
Scikit-learn (sklearn)
The sklearn model flavor provides an easy-to-use interface for handling scikit-learn models with no external dependencies. It saves and loads models using Python’s pickle module and also generates a valid python_function flavor model. For more information, see mlflow.sklearn.
Spark MLlib (spark)
The spark model flavor enables exporting Spark MLlib models as MLflow models. Exported models are saved using Spark MLlib’s native serialization, and can then be loaded back as MLlib models or deployed as python_function models. When deployed as a python_function, the model creates its own SparkContext and converts pandas DataFrame input to a Spark DataFrame before scoring. While this is not the most efficient solution, especially for real-time scoring, it enables you to easily deploy any MLlib PipelineModel (as long as the PipelineModel has no external JAR dependencies) to any endpoint supported by MLflow. For more information, see mlflow.spark.
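For example, a hedged sketch of logging a fitted PipelineModel; the pipeline and training data here are illustrative:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
import mlflow
import mlflow.spark

spark = SparkSession.builder.getOrCreate()
train = spark.createDataFrame([(1.0, 2.0, 3.0), (2.0, 4.0, 6.0)],
                              ["a", "b", "label"])

# Assemble features and fit a small pipeline.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["a", "b"], outputCol="features"),
    LinearRegression(featuresCol="features", labelCol="label"),
])
pipeline_model = pipeline.fit(train)

# Log the fitted PipelineModel; it can be reloaded as MLlib or pyfunc.
with mlflow.start_run():
    mlflow.spark.log_model(pipeline_model, "model")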
TensorFlow (tensorflow)
The tensorflow model flavor enables logging TensorFlow Saved Models and loading them back as Python Function models for inference on pandas DataFrames. Given a directory containing a saved model, you can log the model to MLflow via log_saved_model and then load the saved model for inference using mlflow.pyfunc.load_pyfunc. For more information, see mlflow.tensorflow.
Custom Flavors
You can add a flavor in MLmodel files, either by writing it directly or building it with the mlflow.models.Model class. Choose an arbitrary string name for your flavor. MLflow tools ignore flavors in the MLmodel file that they do not understand.
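A minimal sketch of building an MLmodel file with a custom flavor via the Python API; the flavor name and attributes here are invented:

from mlflow.models import Model

# Build an MLmodel definition with a custom flavor entry.
model = Model()
model.add_flavor("my_flavor", data="model.bin", framework_version="1.0")

# Write the MLmodel YAML file into a model directory
# (the my_model directory is assumed to exist).
model.save("my_model/MLmodel")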
Built-In Deployment Tools
MLflow provides tools for deploying models on a local machine and to several production environments. Not all deployment methods are available for all model flavors. Deployment is supported for the Python Function format and all compatible formats.
Deploy a python_function model as a local REST API endpoint
MLflow can deploy models locally as REST API endpoints or use them to score CSV files directly. This functionality is a convenient way of testing models before deploying them to a remote model server. You deploy the Python Function flavor locally using the CLI interface to the mlflow.pyfunc module.
The local REST API server accepts the following data formats as inputs:
- JSON-serialized Pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type request header value of application/json; format=pandas-split. Starting in MLflow 0.9.0, this will be the default format if Content-Type is application/json (i.e., with no format specification).
- JSON-serialized Pandas DataFrames in the records orientation. We do not recommend using this format because it is not guaranteed to preserve column ordering. Currently, this format is specified using a Content-Type request header value of application/json; format=pandas-records or application/json. Starting in MLflow 0.9.0, application/json will refer to the split format instead. For forwards compatibility, we recommend using the split format or specifying the application/json; format=pandas-records content type.
- CSV-serialized Pandas DataFrames. For example, data = pandas_df.to_csv(). This format is specified using a Content-Type request header value of text/csv.
For more information about serializing Pandas DataFrames, see https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html
- serve deploys the model as a local REST API server.
- predict uses the model to generate a prediction for a local CSV file.
For more info, see:
mlflow pyfunc --help
mlflow pyfunc serve --help
mlflow pyfunc predict --help
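For example, after starting a local server with mlflow pyfunc serve -m my_model -p 5001 (the path and port are illustrative), you might query it as follows:

import json
import pandas as pd
import requests

# Serialize an input frame in the `split` orientation.
input_df = pd.DataFrame({"x": [1.0, 2.0], "y": [3.0, 4.0]})
payload = input_df.to_json(orient="split")

# Post to the local scoring endpoint.
response = requests.post(
    "http://localhost:5001/invocations",
    data=payload,
    headers={"Content-Type": "application/json; format=pandas-split"})
print(json.loads(response.text))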
Microsoft Azure ML
The mlflow.azureml module can package python_function models into Azure ML container images. These images can be deployed to Azure Kubernetes Service (AKS) and the Azure Container Instances (ACI) platform for real-time serving. The resulting Azure ML ContainerImage will contain a webserver that accepts the following data formats as input:
- JSON-serialized Pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type request header value of application/json.
build_image registers an MLflow model with an existing Azure ML workspace and builds an Azure ML container image for deployment to AKS and ACI. The Azure ML SDK is required in order to use this function. The Azure ML SDK requires Python 3. It cannot be installed with earlier versions of Python.
Deployment example (Python API):
import mlflow.azureml

from azureml.core import Workspace
from azureml.core.webservice import AciWebservice, Webservice

# Create or load an existing Azure ML workspace. You can also load an existing workspace using
# Workspace.get(name="<workspace_name>")
workspace_name = "<Name of your Azure ML workspace>"
subscription_id = "<Your Azure subscription ID>"
resource_group = "<Name of the Azure resource group in which to create Azure ML resources>"
location = "<Name of the Azure location (region) in which to create Azure ML resources>"
azure_workspace = Workspace.create(name=workspace_name,
                                   subscription_id=subscription_id,
                                   resource_group=resource_group,
                                   location=location,
                                   create_resource_group=True,
                                   exist_ok=True)

# Build an Azure ML container image for deployment
azure_image, azure_model = mlflow.azureml.build_image(model_path="<path-to-model>",
                                                      workspace=azure_workspace,
                                                      description="Wine regression model 1",
                                                      synchronous=True)
# If your image build failed, you can access build logs at the following URI:
print("Access the following URI for build logs: {}".format(azure_image.image_build_log_uri))

# Deploy the container image to ACI
webservice_deployment_config = AciWebservice.deploy_configuration()
webservice = Webservice.deploy_from_image(
    image=azure_image, workspace=azure_workspace, name="<deployment-name>")
webservice.wait_for_deployment()

# After the image deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI. The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine
print("Scoring URI is: %s" % webservice.scoring_uri)

import requests
import json

# `sample_input` is a JSON-serialized Pandas DataFrame with the `split` orientation
sample_input = {
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity"
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ]
}
response = requests.post(
    url=webservice.scoring_uri, data=json.dumps(sample_input),
    headers={"Content-type": "application/json"})
response_json = json.loads(response.text)
print(response_json)
Deployment example (CLI):
mlflow azureml build-image -w <workspace-name> -m <model-path> -d "Wine regression model 1"
az ml service create aci -n <deployment-name> --image-id <image-name>:<image-version>
# After the image deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI. The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine
scoring_uri=$(az ml service show --name <deployment-name> -v | jq -r ".scoringUri")

# `sample_input` is a JSON-serialized Pandas DataFrame with the `split` orientation
sample_input='
{
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity"
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ]
}'
echo $sample_input | curl -s -X POST $scoring_uri \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d @-
For more info, see:
mlflow azureml --help
mlflow azureml build-image --help
Deploy a python_function model on Amazon SageMaker
The mlflow.sagemaker module can deploy python_function models locally in a Docker container with a SageMaker-compatible environment and remotely on SageMaker. To deploy remotely to SageMaker, you need to set up your environment and user accounts. To export a custom model to SageMaker, you need an MLflow-compatible Docker image to be available on Amazon ECR. MLflow provides a default Docker image definition; however, it is up to you to build the image and upload it to ECR. MLflow includes the utility function build_and_push_container to perform this step. Once built and uploaded, you can use the MLflow container for all MLflow models. Model webservers deployed using the mlflow.sagemaker module accept the following data formats as input, depending on the deployment flavor:
- python_function: For this deployment flavor, the endpoint accepts the same formats as the pyfunc server. These formats are described in the pyfunc deployment documentation.
- mleap: For this deployment flavor, the endpoint accepts only JSON-serialized Pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type request header value of application/json.
- run-local deploys the model locally in a Docker container. The image and the environment should be identical to how the model would be run remotely, so it is useful for testing the model prior to deployment.
- The build-and-push-container CLI command builds an MLflow Docker image and uploads it to ECR. The caller must have the correct permissions set up. The image is built locally and requires Docker to be present on the machine that performs this step.
- deploy deploys the model on Amazon SageMaker. MLflow uploads the Python Function model to S3 and starts an Amazon SageMaker endpoint serving the model.
Example workflow using the MLflow CLI
mlflow sagemaker build-and-push-container - build the container (only needs to be called once)
mlflow sagemaker run-local -m <path-to-model> - test the model locally
mlflow sagemaker deploy <parameters> - deploy the model remotely
For more info, see:
mlflow sagemaker --help
mlflow sagemaker build-and-push-container --help
mlflow sagemaker run-local --help
mlflow sagemaker deploy --help
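The same workflow is also available from the Python API. A hedged sketch follows, where the app name, region, and model path are illustrative and argument names may vary across MLflow versions:

import mlflow.sagemaker as mfs

# Test the model locally in a SageMaker-compatible Docker container.
mfs.run_local(model_path="my_model", port=5001)

# Deploy the model to an Amazon SageMaker endpoint.
mfs.deploy(app_name="my-app", model_path="my_model", region_name="us-west-2")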
Export a python_function model as an Apache Spark UDF
You can output a python_function model as an Apache Spark UDF, which can be uploaded to a Spark cluster and used to score the model.
Example
pyfunc_udf = mlflow.pyfunc.spark_udf(<path-to-model>)
df = spark_df.withColumn("prediction", pyfunc_udf(<features>))
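A fuller sketch of the same flow, with illustrative paths and column names (note that some MLflow versions take the SparkSession as the first argument to spark_udf):

from pyspark.sql import SparkSession
import mlflow.pyfunc

spark = SparkSession.builder.getOrCreate()
spark_df = spark.read.csv("input.csv", header=True, inferSchema=True)

# Wrap the pyfunc model as a Spark UDF and score each row.
pyfunc_udf = mlflow.pyfunc.spark_udf("my_model")
df = spark_df.withColumn("prediction", pyfunc_udf("feature1", "feature2"))
df.show()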