mlflow.models

The mlflow.models module provides an API for saving machine learning models in "flavors" that can be understood by different downstream tools. For details and the list of built-in flavors, see MLflow Models.
class mlflow.models.EvaluationArtifact(uri, content=None)
Bases: object

A model evaluation artifact containing an artifact uri and content.
class mlflow.models.EvaluationMetric(eval_fn, name, greater_is_better, long_name=None)
Bases: object

A model evaluation metric.

    Parameters:
        eval_fn – A function that computes the metric with the following signature:

            def eval_fn(
                eval_df: Union[pandas.DataFrame, pyspark.sql.DataFrame],
                builtin_metrics: Dict[str, float],
            ) -> float:
                """
                :param eval_df: A Pandas or Spark DataFrame containing a ``prediction``
                    and a ``target`` column. The ``prediction`` column contains the
                    predictions made by the model. The ``target`` column contains the
                    corresponding labels for those predictions.
                :param builtin_metrics: A dictionary containing the metrics calculated
                    by the default evaluator. The keys are the metric names and the
                    values are the scalar metric values. Refer to the Default Evaluator
                    behavior section for the metrics returned for each model type
                    (i.e. classifier or regressor).
                :return: The metric value.
                """
                ...

        name – The name of the metric.
        greater_is_better – Whether a higher value of the metric is better.
        long_name – (Optional) The long name of the metric. For example, "root_mean_squared_error" for "rmse".
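    Instances of this class are usually created with mlflow.models.make_metric (documented below) rather than constructed directly. A minimal sketch, where the metric function body and name are illustrative assumptions:

        import numpy as np

        import mlflow


        # Illustrative custom metric: mean absolute error computed from the
        # ``prediction`` and ``target`` columns passed by the default evaluator.
        def mean_abs_error(eval_df, _builtin_metrics):
            return np.abs(eval_df["prediction"] - eval_df["target"]).mean()


        mae_metric = mlflow.models.make_metric(
            eval_fn=mean_abs_error,
            greater_is_better=False,
            long_name="mean_absolute_error",
        )
        # mae_metric can then be passed to mlflow.evaluate(..., custom_metrics=[mae_metric])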
class mlflow.models.EvaluationResult(metrics, artifacts, baseline_model_metrics=None)
Bases: object

Represents the model evaluation outputs of an mlflow.evaluate() API call, containing both scalar metrics and output artifacts such as performance plots.
    property artifacts
        A dictionary mapping standardized artifact names (e.g. "roc_data") to artifact content and location information.

    property baseline_model_metrics
        A dictionary mapping scalar metric names to scalar metric values for the baseline model.

    classmethod load(path)
        Load the evaluation results from the specified local filesystem path.

    save(path)
        Write the evaluation results to the specified local filesystem path.

    property metrics
        A dictionary mapping scalar metric names to scalar metric values.
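    A typical way to obtain and inspect an EvaluationResult; this is a sketch in which the candidate model URI, the evaluation DataFrame, and the label column name are placeholders:

        import mlflow
        from mlflow.models import EvaluationResult

        # `candidate_model_uri`, `eval_data`, and `label_column` are assumed to exist.
        result = mlflow.evaluate(
            candidate_model_uri,
            eval_data,
            targets=label_column,
            model_type="classifier",
        )
        print(result.metrics)           # scalar metrics, e.g. accuracy_score, log_loss, ...
        print(result.artifacts.keys())  # named artifacts such as performance plots

        # Persist the results locally and reload them later.
        result.save("/tmp/eval_results")
        reloaded = EvaluationResult.load("/tmp/eval_results")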
class
mlflow.models.
FlavorBackend
(config, **kwargs)[source] Bases:
object
Abstract class for Flavor Backend. This class defines the API interface for local model deployment of MLflow model flavors.
    abstract build_image(model_uri, image_name, install_mlflow, mlflow_home, enable_mlserver)

    can_build_image()
        Returns:
            True if this flavor has a build_image method defined for building a docker container capable of serving the model, False otherwise.
    abstract can_score_model()
        Check whether this flavor backend can be deployed in the current environment.

        Returns:
            True if this flavor backend can be applied in the current environment.

    abstract generate_dockerfile(model_uri, output_path, install_mlflow, mlflow_home, enable_mlserver)
    abstract predict(model_uri, input_path, output_path, content_type)
        Generate predictions using a saved MLflow model referenced by the given URI. Input and output are read from and written to a file or stdin/stdout.

        Parameters:
            model_uri – URI pointing to the MLflow model to be used for scoring.
            input_path – Path to the file with input data. If not specified, data is read from stdin.
            output_path – Path to the file with output predictions. If not specified, data is written to stdout.
            content_type – Specifies the input format. Can be one of {json, csv}.
    prepare_env(model_uri, capture_output=False)
        Performs any preparation necessary to predict or serve the model, for example downloading dependencies or initializing a conda environment. After preparation, calling predict or serve should be fast.
    abstract serve(model_uri, port, host, timeout, enable_mlserver, synchronous=True, stdout=None, stderr=None)
        Serve the specified MLflow model locally.

        Parameters:
            model_uri – URI pointing to the MLflow model to be used for scoring.
            port – Port to use for the model deployment.
            host – Host to use for the model deployment. Defaults to localhost.
            timeout – Timeout in seconds to serve a request. Defaults to 60.
            enable_mlserver – Whether to use MLServer or the local scoring server.
            synchronous – If True, wait until the server process exits and return 0; if the process exits with a non-zero return code, raise an exception. If False, return the server process Popen instance immediately.
            stdout – Redirect server stdout.
            stderr – Redirect server stderr.
class mlflow.models.MetricThreshold(threshold=None, min_absolute_change=None, min_relative_change=None, greater_is_better=None, higher_is_better=None)
Bases: object

This class allows you to define metric thresholds for model validation. Allowed thresholds are: threshold, min_absolute_change, min_relative_change.

    Parameters:
        threshold – (Optional) A number representing the value threshold for the metric.
            If higher is better for the metric, the metric value has to be >= threshold to pass validation.
            Otherwise, the metric value has to be <= threshold to pass validation.
        min_absolute_change – (Optional) A positive number representing the minimum absolute change required for the candidate model to pass validation against the baseline model.
            If higher is better for the metric, the metric value has to be >= baseline model metric value + min_absolute_change to pass validation.
            Otherwise, the metric value has to be <= baseline model metric value - min_absolute_change to pass validation.
        min_relative_change – (Optional) A floating point number between 0 and 1 representing the minimum relative change (as a fraction of the baseline model metric value) required for the candidate model to pass the comparison with the baseline model.
            If higher is better for the metric, the metric value has to be >= baseline model metric value * (1 + min_relative_change).
            Otherwise, the metric value has to be <= baseline model metric value * (1 - min_relative_change).
            Note that if the baseline model metric value is 0, the threshold falls back to a simple verification that the candidate metric value is better than the baseline metric value: metric value >= baseline model metric value + 1e-10 if higher is better; metric value <= baseline model metric value - 1e-10 if lower is better.
        greater_is_better – A required boolean representing whether a higher value is better for the metric.
        higher_is_better – Deprecated since version 2.3.0: use greater_is_better instead. A required boolean representing whether a higher value is better for the metric.
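    For example, the following sketch requires candidate accuracy to be at least 0.8 and at least 0.05 higher than the baseline model's accuracy (the metric name assumes a classifier evaluated by the default evaluator):

        from mlflow.models import MetricThreshold

        accuracy_threshold = MetricThreshold(
            threshold=0.8,             # absolute value the candidate must reach
            min_absolute_change=0.05,  # required absolute improvement over the baseline
            greater_is_better=True,
        )
        # Pass as:
        #   mlflow.evaluate(..., validation_thresholds={"accuracy_score": accuracy_threshold})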
    property greater_is_better
        Boolean value representing whether a higher value is better for the metric.

    property higher_is_better
        Warning: mlflow.models.evaluation.validation.MetricThreshold.higher_is_better is deprecated and will be removed in a future release. Use greater_is_better instead.
        Boolean value representing whether a higher value is better for the metric.

    property min_absolute_change
        Value of the minimum absolute change required to pass model comparison with the baseline model.
class mlflow.models.Model(artifact_path=None, run_id=None, utc_time_created=None, flavors=None, signature=None, saved_input_example_info: Optional[Dict[str, Any]] = None, model_uuid: Optional[Union[str, Callable]] = <function Model.<lambda>>, mlflow_version: Optional[str] = '2.3.2', metadata: Optional[Dict[str, Any]] = None, **kwargs)
Bases: object

An MLflow Model that can support multiple model flavors. Provides APIs for implementing new Model flavors.
    add_flavor(name, **params)
        Add an entry for how to serve the model in a given format.

    classmethod from_dict(model_dict)
        Load a model from its YAML representation.

    get_input_schema()
        Retrieves the input schema of the Model iff the model was saved with a schema definition.

    get_output_schema()
        Retrieves the output schema of the Model iff the model was saved with a schema definition.

    classmethod load(path)
        Load a model from its YAML representation.

    load_input_example(path: str)
        Load the input example saved along a model. Returns None if there is no example metadata (i.e. the model was saved without an example). Raises FileNotFoundError if there is model metadata but the example file is missing.

        Parameters:
            path – Path to the model directory.

        Returns:
            Input example (NumPy ndarray, SciPy csc_matrix, SciPy csr_matrix, pandas DataFrame, dict) or None if the model has no example.
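    A brief sketch of loading an MLmodel file and inspecting it; the local directory path is a placeholder, and the sketch assumes the model was saved with a signature and an input example:

        from mlflow.models import Model

        # Path to a local MLflow model directory (placeholder).
        model_dir = "/path/to/local/model"

        mlflow_model = Model.load(model_dir)
        print(mlflow_model.get_input_schema())   # None if no signature was saved
        print(mlflow_model.get_output_schema())

        example = mlflow_model.load_input_example(model_dir)  # None if no example was saved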
    classmethod log(artifact_path, flavor, registered_model_name=None, await_registration_for=300, metadata=None, **kwargs)
        Log model using the supplied flavor module. If no run is active, this method will create a new active run.

        Parameters:
            artifact_path – Run-relative path identifying the model.
            flavor – Flavor module to save the model with. The module must have the save_model function that will persist the model as a valid MLflow model.
            registered_model_name – If given, create a model version under registered_model_name, also creating a registered model if one with the given name does not exist.
            signature – ModelSignature describes model input and output Schema. The model signature can be inferred from datasets representing valid model input (e.g. the training dataset) and valid model output (e.g. model predictions generated on the training dataset), for example:

                from mlflow.models.signature import infer_signature

                train = df.drop(columns=["target_label"])
                signature = infer_signature(train, model.predict(train))

            input_example – Input example provides one or several examples of valid model input. The example can be used as a hint of what data to feed the model. The given example will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format. Bytes are base64-encoded.
            await_registration_for – Number of seconds to wait for the model version to finish being created and reach the READY status. By default, the function waits for five minutes. Specify 0 or None to skip waiting.
            metadata – Custom metadata dictionary passed to the model and stored in the MLmodel file.

                Note: Experimental: This parameter may change or be removed in a future release without warning.

            kwargs – Extra args passed to the model flavor.

        Returns:
            A ModelInfo instance that contains the metadata of the logged model.
    property metadata
        Custom metadata dictionary passed to the model and stored in the MLmodel file.

        Getter: Retrieves custom metadata that has been applied to a model instance.
        Setter: Sets a dictionary of custom keys and values to be included with the model instance.
        Type: Optional[Dict[str, Any]]
        Returns: A dictionary of user-defined metadata, iff defined.

            # Create and log a model with metadata to the Model Registry
            from sklearn import datasets
            from sklearn.ensemble import RandomForestClassifier

            import mlflow
            from mlflow.models.signature import infer_signature

            with mlflow.start_run():
                iris = datasets.load_iris()
                clf = RandomForestClassifier()
                clf.fit(iris.data, iris.target)
                signature = infer_signature(iris.data, iris.target)
                mlflow.sklearn.log_model(
                    clf,
                    "iris_rf",
                    signature=signature,
                    registered_model_name="model-with-metadata",
                    metadata={"metadata_key": "metadata_value"},
                )

            # model uri for the above model
            model_uri = "models:/model-with-metadata/1"

            # Load the model and access the custom metadata
            model = mlflow.pyfunc.load_model(model_uri=model_uri)
            assert model.metadata.metadata["metadata_key"] == "metadata_value"

        Note: Experimental: This property may change or be removed in a future release without warning.
    save(path)
        Write the model as a local YAML file.

    property saved_input_example_info
        A dictionary that contains the metadata of the saved input example, e.g., {"artifact_path": "input_example.json", "type": "dataframe", "pandas_orient": "split"}.

    property signature
        An optional definition of the expected inputs to and outputs from a model object, defined with both field names and data types. Signatures support both column-based and tensor-based inputs and outputs.

        Getter: Retrieves the signature of a model instance iff the model was saved with a signature definition.
        Setter: Sets a signature to a model instance.
        Type: Optional[ModelSignature]

    to_dict()
        Serialize the model to a dictionary.

    to_json()
        Write the model as JSON.

    to_yaml(stream=None)
        Write the model as a YAML string.
class mlflow.models.ModelSignature(inputs: mlflow.types.schema.Schema, outputs: Optional[mlflow.types.schema.Schema] = None)
Bases: object

ModelSignature specifies the schema of a model's inputs and outputs.

A ModelSignature can be inferred from a training dataset and model predictions using infer_signature(), or constructed by hand by passing an input and output Schema.
    classmethod from_dict(signature_dict: Dict[str, Any])
        Deserialize from dictionary representation.

        Parameters:
            signature_dict – Dictionary representation of model signature. Expected dictionary format: {'inputs': <json string>, 'outputs': <json string>}

        Returns:
            ModelSignature populated with the data from the dictionary.

    to_dict() -> Dict[str, Any]
        Serialize into a 'jsonable' dictionary.

        Input and output schema are represented as JSON strings. This is so that the representation is compact when embedded in an MLmodel YAML file.

        Returns:
            Dictionary representation with input and output schema represented as JSON strings.
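    A hand-constructed signature and its dictionary round trip might look like the following sketch; the column names and types are illustrative:

        from mlflow.models import ModelSignature
        from mlflow.types.schema import ColSpec, Schema

        input_schema = Schema(
            [ColSpec("double", "sepal length"), ColSpec("double", "sepal width")]
        )
        output_schema = Schema([ColSpec("long")])
        signature = ModelSignature(inputs=input_schema, outputs=output_schema)

        as_dict = signature.to_dict()   # {'inputs': '<json string>', 'outputs': '<json string>'}
        assert ModelSignature.from_dict(as_dict) == signature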
mlflow.models.add_libraries_to_model(model_uri, run_id=None, registered_model_name=None)
    Note: Experimental: This function may change or be removed in a future release without warning.

    Given a registered model_uri (e.g. models:/<model_name>/<model_version>), this utility re-logs the model along with all the required model libraries back to the Model Registry. The required model libraries are stored along with the model as model artifacts. In addition, supporting files to the model (e.g. conda.yaml, requirements.txt) are modified to use the added libraries.

    By default, this utility creates a new model version under the same registered model specified by model_uri. This behavior can be overridden by specifying the registered_model_name argument.

    Parameters:
        model_uri – A registered model uri in the Model Registry of the form models:/<model_name>/<model_version/stage/latest>
        run_id – The ID of the run to which the model with libraries is logged. If None, the model with libraries is logged to the source run corresponding to the model version specified by model_uri; if the model version does not have a source run, a new run is created.
        registered_model_name – The new model version (model with its libraries) is registered under the inputted registered_model_name. If None, a new version is logged to the existing model in the Model Registry.

    Note: This utility only operates on a model that has been registered to the Model Registry.

    Note: The libraries are only compatible with the platform on which they are added. Cross-platform libraries are not supported.

        # Create and log a model to the Model Registry
        import pandas as pd
        from sklearn import datasets
        from sklearn.ensemble import RandomForestClassifier

        import mlflow
        import mlflow.sklearn
        from mlflow.models.signature import infer_signature

        with mlflow.start_run():
            iris = datasets.load_iris()
            iris_train = pd.DataFrame(iris.data, columns=iris.feature_names)
            clf = RandomForestClassifier(max_depth=7, random_state=0)
            clf.fit(iris_train, iris.target)
            signature = infer_signature(iris_train, clf.predict(iris_train))
            mlflow.sklearn.log_model(
                clf, "iris_rf", signature=signature, registered_model_name="model-with-libs"
            )

        # model uri for the above model
        model_uri = "models:/model-with-libs/1"

        # Import utility
        from mlflow.models.utils import add_libraries_to_model

        # Log libraries to the original run of the model
        add_libraries_to_model(model_uri)

        # Log libraries to some run_id
        existing_run_id = "21df94e6bdef4631a9d9cb56f211767f"
        add_libraries_to_model(model_uri, run_id=existing_run_id)

        # Log libraries to a new run
        with mlflow.start_run():
            add_libraries_to_model(model_uri)

        # Log libraries to a new registered model named 'new-model'
        with mlflow.start_run():
            add_libraries_to_model(model_uri, registered_model_name="new-model")
mlflow.models.build_docker(model_uri=None, name='mlflow-pyfunc', env_manager='virtualenv', mlflow_home=None, install_mlflow=False, enable_mlserver=False)
    Builds a Docker image whose default entrypoint serves an MLflow model at port 8080, using the python_function flavor. The container serves the model referenced by model_uri, if specified. If model_uri is not specified, an MLflow Model directory must be mounted as a volume into the /opt/ml/model directory in the container.

    Warning: If model_uri is unspecified, the resulting image doesn't support serving models with the RFunc or Java MLeap model servers.

    NB: by default, the container will start nginx and gunicorn processes. If you don't need the nginx process to be started (for instance if you deploy your container to Google Cloud Run), you can disable it via the DISABLE_NGINX environment variable:

        docker run -p 5001:8080 -e DISABLE_NGINX=true "my-image-name"

    See https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html for more information on the 'python_function' flavor.
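    A minimal sketch of building and then running such an image; the model URI and image name below are placeholders:

        import mlflow.models

        # Build an image that serves the given registered model (placeholder URI).
        mlflow.models.build_docker(
            model_uri="models:/my-model/1",
            name="my-image-name",
            env_manager="virtualenv",
        )
        # Then run it, mapping the container's port 8080 to a local port:
        #   docker run -p 5001:8080 "my-image-name"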
mlflow.models.evaluate(model: str, data, *, targets, model_type: str, dataset_path=None, feature_names: Optional[list] = None, evaluators=None, evaluator_config=None, custom_metrics=None, custom_artifacts=None, validation_thresholds=None, baseline_model=None, env_manager='local')
    Evaluate a PyFunc model on the specified dataset using one or more specified evaluators, and log the resulting metrics and artifacts to MLflow Tracking. Set thresholds on the generated metrics to validate model quality. For additional overview information, see the Model Evaluation documentation.

    Default Evaluator behavior:

        The default evaluator, which can be invoked with evaluators="default" or evaluators=None, supports the "regressor" and "classifier" model types. It generates a variety of model performance metrics, model performance plots, and model explanations.

        For both the "regressor" and "classifier" model types, the default evaluator generates model summary plots and feature importance plots using SHAP.

        For regressor models, the default evaluator additionally logs:
            metrics: example_count, mean_absolute_error, mean_squared_error, root_mean_squared_error, sum_on_target, mean_on_target, r2_score, max_error, mean_absolute_percentage_error.

        For binary classifiers, the default evaluator additionally logs:
            metrics: true_negatives, false_positives, false_negatives, true_positives, recall, precision, f1_score, accuracy_score, example_count, log_loss, roc_auc, precision_recall_auc.
            artifacts: lift curve plot, precision-recall plot, ROC plot.

        For multiclass classifiers, the default evaluator additionally logs:
            metrics: accuracy_score, example_count, f1_score_micro, f1_score_macro, log_loss.
            artifacts: a CSV file for "per_class_metrics" (per-class metrics include true_negatives/false_positives/false_negatives/true_positives/recall/precision/roc_auc, precision_recall_auc), a precision-recall merged curves plot, and a ROC merged curves plot.

        For sklearn models, the default evaluator additionally logs the model's evaluation criterion (e.g. mean accuracy for a classifier) computed by the model.score method.

        The metrics/artifacts listed above are logged to the active MLflow run. If no active run exists, a new MLflow run is created for logging these metrics and artifacts. Note that no metrics/artifacts are logged for the baseline_model.

        Additionally, information about the specified dataset - hash, name (if specified), path (if specified), and the UUID of the model that evaluated it - is logged to the mlflow.datasets tag.
        The available evaluator_config options for the default evaluator include (see the sketch after this list):
            log_model_explainability: A boolean value specifying whether or not to log model explainability insights; default value is True.
            explainability_algorithm: A string specifying the SHAP Explainer algorithm for model explainability. Supported algorithms are 'exact', 'permutation', 'partition', and 'kernel'. If not set, shap.Explainer is used with the "auto" algorithm, which chooses the best Explainer based on the model.
            explainability_nsamples: The number of sample rows to use for computing model explainability insights. Default value is 2000.
            explainability_kernel_link: The kernel link function used by the SHAP kernel explainer. Available values are "identity" and "logit". Default value is "identity".
            max_classes_for_multiclass_roc_pr: For multiclass classification tasks, the maximum number of classes for which to log the per-class ROC curve and Precision-Recall curve. If the number of classes is larger than the configured maximum, these curves are not logged.
            metric_prefix: An optional prefix to prepend to the name of each metric and artifact produced during evaluation.
            log_metrics_with_dataset_info: A boolean value specifying whether or not to include information about the evaluation dataset in the name of each metric logged to MLflow Tracking during evaluation; default value is True.
            pos_label: If specified, the positive label to use when computing classification metrics such as precision, recall, f1, etc. for binary classification models. For multiclass classification and regression models, this parameter is ignored.
            average: The averaging method to use when computing classification metrics such as precision, recall, f1, etc. for multiclass classification models (default: 'weighted'). For binary classification and regression models, this parameter is ignored.
            sample_weights: Weights for each sample to apply when computing model performance metrics.
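        As referenced above, a brief sketch of passing a subset of these options to the default evaluator; the model URI, evaluation DataFrame, and option values are placeholders:

            import mlflow

            # Evaluator configuration for the default evaluator; values are illustrative.
            evaluator_config = {
                "log_model_explainability": False,
                "metric_prefix": "val_",
                "pos_label": 1,
            }

            result = mlflow.evaluate(
                model_uri,   # placeholder pyfunc model URI
                eval_data,   # placeholder evaluation DataFrame
                targets="label",
                model_type="classifier",
                evaluators="default",
                evaluator_config=evaluator_config,
            )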
        Limitations of the evaluation dataset:
            For classification tasks, dataset labels are used to infer the total number of classes.
            For binary classification tasks, the negative label value must be 0, -1, or False, and the positive label value must be 1 or True.

        Limitations of metrics/artifacts computation:
            For classification tasks, some metric and artifact computations require the model to output class probabilities. Currently, for scikit-learn models, the default evaluator calls the predict_proba method on the underlying model to obtain probabilities. For other model types, the default evaluator does not compute metrics/artifacts that require probability outputs.

        Limitations of default evaluator logging model explainability insights:
            The shap.Explainer "auto" algorithm uses the Linear explainer for linear models and the Tree explainer for tree models. Because SHAP's Linear and Tree explainers do not support multi-class classification, the default evaluator falls back to using the Exact or Permutation explainers for multi-class classification tasks.
            Logging model explainability insights is not currently supported for PySpark models.
            The evaluation dataset label values must be numeric or boolean, all feature values must be numeric, and each feature column must only contain scalar values.

        Limitations when environment restoration is enabled:
            When environment restoration is enabled for the evaluated model (i.e. a non-local env_manager is specified), the model is loaded as a client that invokes an MLflow Model Scoring Server process in an independent Python environment with the model's training-time dependencies installed. As such, methods like predict_proba (for probability outputs) or score (which computes the evaluation criterion for sklearn models) become inaccessible, and the default evaluator does not compute metrics or artifacts that require those methods.
            Because the model is an MLflow Model Server process, SHAP explanations are slower to compute. As such, model explainability is disabled when a non-local env_manager is specified, unless the evaluator_config option log_model_explainability is explicitly set to True.
    Parameters:
        model – A pyfunc model instance, or a URI referring to such a model.
        data – One of the following:
            A numpy array or list of evaluation features, excluding labels.
            A Pandas DataFrame or Spark DataFrame containing evaluation features and labels. If the feature_names argument is not specified, all columns are regarded as feature columns. Otherwise, only the column names present in feature_names are regarded as feature columns. If it is a Spark DataFrame, only the first 10000 rows of the Spark DataFrame will be used as evaluation data.
        targets – If data is a numpy array or list, a numpy array or list of evaluation labels. If data is a DataFrame, the string name of a column from data that contains evaluation labels.
        model_type – A string describing the model type. The default evaluator supports "regressor" and "classifier" as model types.
        dataset_path – (Optional) The path where the data is stored. Must not contain double quotes ("). If specified, the path is logged to the mlflow.datasets tag for lineage tracking purposes.
        feature_names – (Optional) If the data argument is a feature data numpy array or list, feature_names is a list of the feature names for each feature. If None, then the feature_names are generated using the format feature_{feature_index}. If the data argument is a Pandas DataFrame or a Spark DataFrame, feature_names is a list of the names of the feature columns in the DataFrame. If None, then all columns except the label column are regarded as feature columns.
        evaluators – The name of the evaluator to use for model evaluation, or a list of evaluator names. If unspecified, all evaluators capable of evaluating the specified model on the specified dataset are used. The default evaluator can be referred to by the name "default". To see all available evaluators, call mlflow.models.list_evaluators().
        evaluator_config – A dictionary of additional configurations to supply to the evaluator. If multiple evaluators are specified, each configuration should be supplied as a nested dictionary whose key is the evaluator name.
        custom_metrics – (Optional) A list of EvaluationMetric objects. For example:

            import numpy as np

            import mlflow


            def root_mean_squared_error(eval_df, _builtin_metrics):
                return np.sqrt((np.abs(eval_df["prediction"] - eval_df["target"]) ** 2).mean())


            rmse_metric = mlflow.models.make_metric(
                eval_fn=root_mean_squared_error,
                greater_is_better=False,
            )
            mlflow.evaluate(..., custom_metrics=[rmse_metric])
        custom_artifacts – (Optional) A list of custom artifact functions with the following signature:

            def custom_artifact(
                eval_df: Union[pandas.DataFrame, pyspark.sql.DataFrame],
                builtin_metrics: Dict[str, float],
                artifacts_dir: str,
            ) -> Dict[str, Any]:
                """
                :param eval_df: A Pandas or Spark DataFrame containing a ``prediction``
                    and a ``target`` column. The ``prediction`` column contains the
                    predictions made by the model. The ``target`` column contains the
                    corresponding labels for those predictions.
                :param builtin_metrics: A dictionary containing the metrics calculated
                    by the default evaluator. The keys are the metric names and the
                    values are the scalar metric values. Refer to the Default Evaluator
                    behavior section for the metrics returned for each model type
                    (i.e. classifier or regressor).
                :param artifacts_dir: A temporary directory path that can be used by
                    the custom artifact function to temporarily store produced
                    artifacts. The directory will be deleted after the artifacts are
                    logged.
                :return: A dictionary that maps artifact names to artifact objects
                    (e.g. a Matplotlib Figure) or to artifact paths within
                    ``artifacts_dir``.
                """
                ...

            Object types that artifacts can be represented as:
                A string URI representing the file path to the artifact. MLflow will infer the type of the artifact based on the file extension.
                A string representation of a JSON object. This will be saved as a .json artifact.
                A Pandas DataFrame. This will be resolved as a CSV artifact.
                A Numpy array. This will be saved as a .npy artifact.
                A Matplotlib Figure. This will be saved as an image artifact. Note that matplotlib.pyplot.savefig is called behind the scenes with default configurations. To customize, either save the figure with the desired configurations and return its file path, or define customizations through environment variables in matplotlib.rcParams.
                Other objects will be attempted to be pickled with the default protocol.

            import os

            import matplotlib.pyplot as plt

            import mlflow


            def scatter_plot(eval_df, builtin_metrics, artifacts_dir):
                plt.scatter(eval_df["target"], eval_df["prediction"])
                plt.xlabel("Targets")
                plt.ylabel("Predictions")
                plt.title("Targets vs. Predictions")
                plt.savefig(os.path.join(artifacts_dir, "example.png"))
                plt.close()
                return {"pred_target_scatter": os.path.join(artifacts_dir, "example.png")}


            def pred_sample(eval_df, _builtin_metrics, _artifacts_dir):
                return {"pred_sample": eval_df.head(10)}


            mlflow.evaluate(..., custom_artifacts=[scatter_plot, pred_sample])
        validation_thresholds – (Optional) A dictionary mapping metric names to mlflow.models.MetricThreshold objects used for model validation. Each metric name must either be the name of a builtin metric or the name of a custom metric defined in the custom_metrics parameter.

            from mlflow.models import MetricThreshold

            thresholds = {
                "accuracy_score": MetricThreshold(
                    # accuracy should be >= 0.8
                    threshold=0.8,
                    # accuracy should be at least 0.05 greater than baseline model accuracy
                    min_absolute_change=0.05,
                    # accuracy should be at least 5 percent greater than baseline model accuracy
                    min_relative_change=0.05,
                    greater_is_better=True,
                ),
            }

            with mlflow.start_run():
                mlflow.evaluate(
                    model=your_candidate_model,
                    data=data,
                    targets=targets,
                    model_type=model_type,
                    evaluators=evaluators,
                    validation_thresholds=thresholds,
                    baseline_model=your_baseline_model,
                )

            See the Model Validation documentation for more details.
        baseline_model – (Optional) A string URI referring to an MLflow model with the pyfunc flavor. If specified, the candidate model is compared to this baseline for model validation purposes.
        env_manager – Specify an environment manager to load the candidate model and baseline_model in isolated Python environments and restore their dependencies. The default value is local, and the following values are supported:
            virtualenv: (Recommended) Use virtualenv to restore the python environment that was used to train the model.
            conda: Use Conda to restore the software environment that was used to train the model.
            local: Use the current Python environment for model inference, which may differ from the environment used to train the model and may lead to errors or invalid predictions.
    Returns:
        An mlflow.models.EvaluationResult instance containing metrics for the candidate and baseline models, and artifacts for the candidate model.
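    Putting the pieces together, a minimal end-to-end evaluation of a logged scikit-learn classifier might look like the following sketch; the dataset, model, and column names are illustrative, and the model is evaluated on its own training data purely for brevity:

        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier

        import mlflow

        # Load iris as a DataFrame and rename the label column for clarity.
        iris = load_iris(as_frame=True)
        eval_data = iris.frame.rename(columns={"target": "label"})

        with mlflow.start_run():
            model = RandomForestClassifier().fit(iris.data, iris.target)
            model_info = mlflow.sklearn.log_model(model, "model")

            result = mlflow.evaluate(
                model_info.model_uri,
                eval_data,
                targets="label",
                model_type="classifier",
                evaluators="default",
            )
            print(result.metrics)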
mlflow.models.get_model_info(model_uri: str) -> mlflow.models.model.ModelInfo
    Get metadata for the specified model, such as its input/output signature.

    Parameters:
        model_uri – The location, in URI format, of the MLflow model. For example:
            /Users/me/path/to/local/model
            relative/path/to/local/model
            s3://my_bucket/path/to/model
            runs:/<mlflow_run_id>/run-relative/path/to/model
            models:/<model_name>/<model_version>
            models:/<model_name>/<stage>
            mlflow-artifacts:/path/to/model
            For more information about supported URI schemes, see Referencing Artifacts.

    Returns:
        A ModelInfo instance that contains the metadata of the logged model.

        import mlflow.models
        import mlflow.sklearn
        from sklearn.ensemble import RandomForestRegressor

        with mlflow.start_run() as run:
            params = {"n_estimators": 3, "random_state": 42}
            X, y = [[0, 1]], [1]
            signature = mlflow.models.infer_signature(X, y)
            rfr = RandomForestRegressor(**params).fit(X, y)
            mlflow.log_params(params)
            mlflow.sklearn.log_model(rfr, artifact_path="sklearn-model", signature=signature)

        model_uri = "runs:/{}/sklearn-model".format(run.info.run_id)

        # Get model info with model_uri
        model_info = mlflow.models.get_model_info(model_uri)

        # Get model signature directly
        model_signature = model_info.signature
        assert model_signature == signature
mlflow.models.infer_pip_requirements(model_uri, flavor, fallback=None)
    Infers the pip requirements of the specified model by creating a subprocess and loading the model in it to determine which packages are imported.

    Parameters:
        model_uri – The URI of the model.
        flavor – The flavor name of the model.
        fallback – If provided, an unexpected error during the inference procedure is swallowed and the value of fallback is returned. Otherwise, the error is raised.

    Returns:
        A list of inferred pip requirements (e.g. ["scikit-learn==0.24.2", ...]).
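    For example, under the assumptions that the run-relative URI below is replaced with a real one and that the logged model uses the sklearn flavor:

        import mlflow.models

        reqs = mlflow.models.infer_pip_requirements(
            model_uri="runs:/<mlflow_run_id>/model",  # placeholder URI
            flavor="sklearn",
            fallback=["scikit-learn"],                # returned if inference fails
        )
        print(reqs)  # e.g. ["mlflow==2.3.2", "scikit-learn==1.2.2", ...]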
mlflow.models.infer_signature(model_input: Any, model_output: MlflowInferableDataset = None) -> mlflow.models.signature.ModelSignature
    Infer an MLflow model signature from the training data (input) and model predictions (output).

    The signature represents model input and output as data frames with (optionally) named columns and data types specified as one of the types defined in mlflow.types.DataType. This method will raise an exception if the user data contains incompatible types or is not passed in one of the supported formats listed below.

    The input should be one of these:
        pandas.DataFrame
        pandas.Series
        dictionary of { name -> numpy.ndarray }
        numpy.ndarray
        pyspark.sql.DataFrame
        scipy.sparse.csr_matrix
        scipy.sparse.csc_matrix

    The element types should be mappable to one of mlflow.types.DataType.

    For pyspark.sql.DataFrame inputs, columns of type DateType and TimestampType are both inferred as type datetime, which is coerced to TimestampType at inference.

    Parameters:
        model_input – Valid input to the model. E.g. (a subset of) the training dataset.
        model_output – Valid model output. E.g. model predictions for the (subset of the) training dataset.

    Returns:
        ModelSignature
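    A short sketch with a pandas input and a numpy output; the column names and data values are illustrative:

        import numpy as np
        import pandas as pd

        from mlflow.models import infer_signature

        train = pd.DataFrame({"x1": [0.1, 0.2], "x2": [1.0, 2.0]})
        predictions = np.array([0, 1])

        # The inferred signature has two double input columns and an integer tensor output.
        signature = infer_signature(train, predictions)
        print(signature)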
mlflow.models.list_evaluators()
    Return a list of the names of all available evaluators.
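    For example:

        import mlflow.models

        print(mlflow.models.list_evaluators())  # e.g. ['default']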
mlflow.models.make_metric(*, eval_fn, greater_is_better, name=None, long_name=None)
    A factory function to create an EvaluationMetric object.

    Parameters:
        eval_fn – A function that computes the metric with the following signature:

            def eval_fn(
                eval_df: Union[pandas.DataFrame, pyspark.sql.DataFrame],
                builtin_metrics: Dict[str, float],
            ) -> float:
                """
                :param eval_df: A Pandas or Spark DataFrame containing a ``prediction``
                    and a ``target`` column. The ``prediction`` column contains the
                    predictions made by the model. The ``target`` column contains the
                    corresponding labels for those predictions.
                :param builtin_metrics: A dictionary containing the metrics calculated
                    by the default evaluator. The keys are the metric names and the
                    values are the scalar metric values. Refer to the Default Evaluator
                    behavior section for the metrics returned for each model type
                    (i.e. classifier or regressor).
                :return: The metric value.
                """
                ...

        greater_is_better – Whether a higher value of the metric is better.
        name – The name of the metric. This argument must be specified if eval_fn is a lambda function or if the eval_fn.__name__ attribute is not available.
        long_name – (Optional) The long name of the metric. For example, "mean_squared_error" for "mse".
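    Because a lambda has no usable __name__, the name argument must be given explicitly in that case. A brief sketch, with an illustrative metric definition:

        import mlflow

        # `name` is required here because eval_fn is a lambda.
        fraction_over_half = mlflow.models.make_metric(
            eval_fn=lambda eval_df, _builtin_metrics: (eval_df["prediction"] > 0.5).mean(),
            greater_is_better=True,
            name="fraction_over_half",
        )
        # fraction_over_half can then be passed to mlflow.evaluate(..., custom_metrics=[...])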
mlflow.models.validate_schema(data: Union[pandas.core.frame.DataFrame, pandas.core.series.Series, numpy.ndarray, scipy.sparse._csc.csc_matrix, scipy.sparse._csr.csr_matrix, List[Any], Dict[str, Any], str], expected_schema: mlflow.types.schema.Schema) -> None
    Validate that the input data has the expected schema.

    Parameters:
        data – Input data to be validated. Supported types are:
            pandas.DataFrame
            pandas.Series
            numpy.ndarray
            scipy.sparse.csc_matrix
            scipy.sparse.csr_matrix
            List[Any]
            Dict[str, Any]
            str
        expected_schema – Expected Schema of the input data.

    Raises:
        mlflow.exceptions.MlflowException when the input data does not match the schema.

        import mlflow.models

        # Suppose you've already got a model_uri
        model_info = mlflow.models.get_model_info(model_uri)

        # Get model signature directly
        model_signature = model_info.signature

        # validate schema
        mlflow.models.validate_schema(input_data, model_signature.inputs)
class mlflow.models.model.ModelInfo(artifact_path: str, flavors: Dict[str, Any], model_uri: str, model_uuid: str, run_id: str, saved_input_example_info: Optional[Dict[str, Any]], signature, utc_time_created: str, mlflow_version: str, signature_dict: Optional[Dict[str, Any]] = None, metadata: Optional[Dict[str, Any]] = None)

The metadata of a logged MLflow Model.
    property artifact_path
        Run-relative path identifying the logged model.

        Getter: Retrieves the relative path of the logged model.
        Type: str

    property flavors
        A dictionary mapping the flavor name to how to serve the model as that flavor.

        Getter: Gets the mapping for the logged model's flavors that defines parameters used in serving the model.
        Type: Dict[str, str]

            {
                "python_function": {
                    "model_path": "model.pkl",
                    "loader_module": "mlflow.sklearn",
                    "python_version": "3.8.10",
                    "env": "conda.yaml",
                },
                "sklearn": {
                    "pickled_model": "model.pkl",
                    "sklearn_version": "0.24.1",
                    "serialization_format": "cloudpickle",
                },
            }
    property metadata
        User-defined metadata added to the model.

        Getter: Gets the user-defined metadata about a model.
        Type: Optional[Dict[str, Any]]

            # Create and log a model with metadata to the Model Registry
            from sklearn import datasets
            from sklearn.ensemble import RandomForestClassifier

            import mlflow
            from mlflow.models.signature import infer_signature

            with mlflow.start_run():
                iris = datasets.load_iris()
                clf = RandomForestClassifier()
                clf.fit(iris.data, iris.target)
                signature = infer_signature(iris.data, iris.target)
                mlflow.sklearn.log_model(
                    clf,
                    "iris_rf",
                    signature=signature,
                    registered_model_name="model-with-metadata",
                    metadata={"metadata_key": "metadata_value"},
                )

            # model uri for the above model
            model_uri = "models:/model-with-metadata/1"

            # Load the model and access the custom metadata from its ModelInfo object
            model = mlflow.pyfunc.load_model(model_uri=model_uri)
            assert model.metadata.get_model_info().metadata["metadata_key"] == "metadata_value"

            # Load the ModelInfo and access the custom metadata
            model_info = mlflow.models.get_model_info(model_uri=model_uri)
            assert model_info.metadata["metadata_key"] == "metadata_value"

        Note: Experimental: This property may change or be removed in a future release without warning.
    property mlflow_version
        Version of MLflow used to log the model.

        Getter: Gets the version of MLflow that was installed when the model was logged.
        Type: str

    property model_uri
        The model_uri of the logged model in the format 'runs:/<run_id>/<artifact_path>'.

        Getter: Gets the URI path of the logged model from the runs:/<run_id> path encapsulation.
        Type: str

    property model_uuid
        The model_uuid of the logged model, e.g., '39ca11813cfc46b09ab83972740b80ca'.

        Getter: [Legacy] Gets the model_uuid (run_id) of a logged model.
        Type: str

    property run_id
        The run_id associated with the logged model, e.g., '8ede7df408dd42ed9fc39019ef7df309'.

        Getter: Gets the run_id identifier for the logged model.
        Type: str
    property saved_input_example_info
        A dictionary that contains the metadata of the saved input example, e.g., {"artifact_path": "input_example.json", "type": "dataframe", "pandas_orient": "split"}.

        Getter: Gets the input example metadata if an example was specified during model logging.
        Type: Optional[Dict[str, str]]

    property signature
        A ModelSignature that describes the model input and output.

        Getter: Gets the model signature if it is defined.
        Type: Optional[ModelSignature]

    property signature_dict
        A dictionary that describes the model input and output, generated by ModelSignature.to_dict().

        Getter: Gets the model signature as a dictionary.
        Type: Optional[Dict[str, Any]]

    property utc_time_created
        The UTC time at which the logged model was created.

        Type: str