mlflow

The mlflow module provides an API for starting and managing MLflow runs. For example:

import mlflow
mlflow.start_run()
mlflow.log_param("my", "param")
mlflow.log_metric("score", 100)
mlflow.end_run()

You can also use syntax like this:

with mlflow.start_run() as run:
    ...

which automatically terminates the run at the end of the block.

The tracking API is not currently threadsafe. Any concurrent callers to the tracking API must implement mutual exclusion manually.

For a lower level API, see the mlflow.tracking module.

class mlflow.ActiveRun(run)

Bases: mlflow.entities.run.Run

Wrapper around mlflow.entities.Run to enable using Python's with syntax.

mlflow.log_param(key, value)

Log a parameter under the current run, creating a run if necessary.

Parameters:
  • key – Parameter name (string)
  • value – Parameter value (string, but will be string-ified if not)
mlflow.log_metric(key, value, step=None)

Log a metric under the current run, creating a run if necessary.

Parameters:
  • key – Metric name (string).
  • value – Metric value (float).
  • step – Metric step (int). Defaults to zero if unspecified.
mlflow.set_tag(key, value)

Set a tag under the current run, creating a run if necessary.

Parameters:
  • key – Tag name (string)
  • value – Tag value (string, but will be string-ified if not)
mlflow.log_artifacts(local_dir, artifact_path=None)

Log all the contents of a local directory as artifacts of the run.

Parameters:
  • local_dir – Path to the directory of files to write.
  • artifact_path – If provided, the directory in artifact_uri to write to.
mlflow.log_artifact(local_path, artifact_path=None)

Log a local file or directory as an artifact of the currently active run.

Parameters:
  • local_path – Path to the file to write.
  • artifact_path – If provided, the directory in artifact_uri to write to.
mlflow.active_run()

Get the currently active Run, or None if no such run exists.

mlflow.start_run(run_id=None, experiment_id=None, run_name=None, nested=False)

Start a new MLflow run, setting it as the active run under which metrics and parameters will be logged. The return value can be used as a context manager within a with block; otherwise, you must call end_run() to terminate the current run.

If you pass a run_id or the MLFLOW_RUN_ID environment variable is set, start_run attempts to resume a run with the specified run ID and other parameters are ignored. run_id takes precedence over MLFLOW_RUN_ID.

MLflow sets a variety of default tags on the run, as defined in MLflow system tags.

Parameters:
  • run_id – If specified, get the run with the specified UUID and log parameters and metrics under that run. The run’s end time is unset and its status is set to running, but the run’s other attributes (source_version, source_type, etc.) are not changed.
  • experiment_id – ID of the experiment under which to create the current run (applicable only when run_id is not specified). If experiment_id argument is unspecified, will look for valid experiment in the following order: activated using set_experiment, MLFLOW_EXPERIMENT_NAME environment variable, MLFLOW_EXPERIMENT_ID environment variable, or the default experiment as defined by the tracking server.
  • run_name – Name of new run (stored as a mlflow.runName tag). Used only when run_id is unspecified.
  • nested – Controls whether the run is nested within a parent run. True creates a nested run.
Returns:

mlflow.ActiveRun object that acts as a context manager wrapping the run’s state.

mlflow.end_run(status='FINISHED')

End an active MLflow run (if there is one).

Parameters:status – Status to set on the run when ending it. Defaults to 'FINISHED'.

mlflow.get_artifact_uri(artifact_path=None)

Get the absolute URI of the specified artifact in the currently active run. If artifact_path is not specified, the artifact root URI of the currently active run is returned; calls to log_artifact and log_artifacts write artifact(s) to subdirectories of the artifact root URI.

Parameters:artifact_path – The run-relative artifact path for which to obtain an absolute URI. For example, “path/to/artifact”. If unspecified, the artifact root URI for the currently active run will be returned.
Returns:An absolute URI referring to the specified artifact or the currently active run’s artifact root. For example, if an artifact path is provided and the currently active run uses an S3-backed store, this may be a URI of the form s3://<bucket_name>/path/to/artifact/root/path/to/artifact. If an artifact path is not provided and the currently active run uses an S3-backed store, this may be a URI of the form s3://<bucket_name>/path/to/artifact/root.
mlflow.set_tracking_uri(uri)

Set the tracking server URI. This does not affect the currently active run (if one exists), but takes effect for successive runs.

Parameters:uri – One of the following:
  • An empty string, or a local file path, prefixed with file:/. Data is stored locally at the provided file (or ./mlruns if empty).
  • An HTTP URI like https://my-tracking-server:5000.
  • A Databricks workspace, provided as the string “databricks” or, to use a Databricks CLI profile, “databricks://<profileName>”.
mlflow.create_experiment(name, artifact_location=None)

Create an experiment.

Parameters:
  • name – The experiment name. Must be unique.
  • artifact_location – The location to store run artifacts. If not provided, the server picks an appropriate default.
Returns:

Integer ID of the created experiment.

mlflow.set_experiment(experiment_name)

Set the given experiment as the active experiment. If the experiment does not exist, a new experiment with the provided name is created.

Parameters:experiment_name – Name of experiment to be activated.
mlflow.run(uri, entry_point='main', version=None, parameters=None, experiment_name=None, experiment_id=None, backend=None, backend_config=None, use_conda=True, storage_dir=None, synchronous=True, run_id=None)

Run an MLflow project. The project can be local or stored at a Git URI.

You can run the project locally or remotely on Databricks.

For information on using this method in chained workflows, see Building Multistep Workflows.

Raises:

ExecutionException – If a run launched in blocking mode is unsuccessful.

Parameters:
  • uri – URI of project to run. A local filesystem path or a Git repository URI (e.g. https://github.com/mlflow/mlflow-example) pointing to a project directory containing an MLproject file.
  • entry_point – Entry point to run within the project. If no entry point with the specified name is found, runs the project file entry_point as a script, using “python” to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files.
  • version – For Git-based projects, either a commit hash or a branch name.
  • experiment_name – Name of experiment under which to launch the run.
  • experiment_id – ID of experiment under which to launch the run.
  • backend – Execution backend for the run: “local” or “databricks”. If running against Databricks, will run against a Databricks workspace determined as follows: if a Databricks tracking URI of the form databricks://profile has been set (e.g. by setting the MLFLOW_TRACKING_URI environment variable), will run against the workspace specified by <profile>. Otherwise, runs against the workspace specified by the default Databricks CLI profile.
  • backend_config – A dictionary, or a path to a JSON file (must end in ‘.json’), which will be passed as config to the backend. For the Databricks backend, this should be a cluster spec: see Databricks Cluster Specs for Jobs for more information.
  • use_conda – If True (the default), create a new Conda environment for the run and install project dependencies within that environment. Otherwise, run the project in the current environment without installing any project dependencies.
  • storage_dir – Used only if backend is “local”. MLflow downloads artifacts from distributed URIs passed to parameters of type path to subdirectories of storage_dir.
  • synchronous – Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is “local”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated.
  • run_id – Note: this argument is used internally by the MLflow project APIs and should not be specified. If specified, the run ID will be used instead of creating a new run.
Returns:

mlflow.projects.SubmittedRun exposing information (e.g. run ID) about the launched run.