Provides the MLflow fluent API, allowing management of an active MLflow run. For example:

import mlflow
mlflow.log_param("my", "param")
mlflow.log_metric("score", 100)

You can also use the context manager syntax:

with mlflow.start_run() as run:
    mlflow.log_metric("score", 100)

which automatically terminates the run at the end of the with block.

class mlflow.ActiveRun(run)


Wrapper around mlflow.entities.Run to allow using the Python with syntax.

mlflow.log_param(key, value)

Log the passed-in parameter under the current run, creating a run if necessary.

  • key – Parameter name (string)
  • value – Parameter value (string, but will be string-ified if not)
mlflow.log_metric(key, value)

Log the passed-in metric under the current run, creating a run if necessary.

  • key – Metric name (string).
  • value – Metric value (float).
mlflow.log_artifacts(local_dir, artifact_path=None)

Log all the contents of a local directory as artifacts of the run.

mlflow.log_artifact(local_path, artifact_path=None)

Log a local file or directory as an artifact of the currently active run.


mlflow.active_run()

Return the currently active Run, or None if no such run exists.

mlflow.start_run(run_uuid=None, experiment_id=None, source_name=None, source_version=None, entry_point_name=None, source_type=None, run_name=None)

Start a new MLflow run, setting it as the active run under which metrics and params will be logged. The return value can be used as a context manager within a with block; otherwise, end_run() must be called to terminate the current run. If run_uuid is passed or the MLFLOW_RUN_ID environment variable is set, start_run attempts to resume a run with the specified run ID (with run_uuid taking precedence over MLFLOW_RUN_ID), and other parameters are ignored.

  • run_uuid – If specified, get the run with the specified UUID and log metrics and params under that run. The run’s end time is unset and its status is set to running, but the run’s other attributes remain unchanged (the run’s source_version, source_type, etc. are not changed).
  • experiment_id – Used only when run_uuid is unspecified. ID of the experiment under which to create the current run. If unspecified, the run is created under a new experiment with a randomly generated name.
  • source_name – Name of the source file or URI of the project to be associated with the run. Defaults to the current file if none provided.
  • source_version – Optional Git commit hash to associate with the run.
  • entry_point_name – Optional name of the entry point for the current run.
  • source_type – Integer enum value describing the type of the run (“local”, “project”, etc.). Defaults to mlflow.entities.SourceType.LOCAL.

Returns: An mlflow.ActiveRun object that acts as a context manager wrapping the run’s state.


mlflow.get_artifact_uri()

Return the artifact URI of the currently active run. Calls to log_artifact and log_artifacts write artifact(s) to subdirectories of the returned URI.


mlflow.set_tracking_uri(uri)

Set the tracking server URI to the passed-in value. This does not affect the currently active run (if one exists), but takes effect for any subsequent runs.

The provided URI can be one of three types:

  • An empty string, or a local file path, prefixed with file:/. Data is stored locally at the provided file (or ./mlruns if empty).
  • An HTTP URI like https://my-tracking-server:5000.
  • A Databricks workspace, provided as just the string ‘databricks’ or, to use a specific Databricks profile (per the Databricks CLI), ‘databricks://profileName’.
mlflow.create_experiment(name, artifact_location=None)

Create an experiment with the given name, optionally storing its artifacts at artifact_location.

mlflow.run(uri, entry_point='main', version=None, parameters=None, experiment_id=None, mode=None, cluster_spec=None, git_username=None, git_password=None, use_conda=True, storage_dir=None, block=True, run_id=None)

Run an MLflow project from the given URI.

Supports downloading projects from Git URIs with a specified version, or copying them from the file system. For Git-based projects, a commit can be specified as the version.


Raises: ExecutionException – If a run launched in blocking mode is unsuccessful.

  • uri – URI of project to run. Expected to be either a relative/absolute local filesystem path or a git repository URI (e.g. pointing to a project directory containing an MLproject file).
  • entry_point – Entry point to run within the project. If no entry point with the specified name is found, attempts to run the project file entry_point as a script, using “python” to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files.
  • experiment_id – ID of experiment under which to launch the run.
  • mode – Execution mode for the run. Can be set to “local” or “databricks”.
  • cluster_spec – Path to JSON file describing the cluster to use when launching a run on Databricks.
  • git_username – Username for HTTP(S) authentication with Git.
  • git_password – Password for HTTP(S) authentication with Git.
  • use_conda – If True (the default), creates a new Conda environment for the run and installs project dependencies within that environment. Otherwise, runs the project in the current environment without installing any project dependencies.
  • storage_dir – Only used if mode is local. MLflow will download artifacts from distributed URIs passed to parameters of type ‘path’ to subdirectories of storage_dir.
  • block – Whether or not to block while waiting for a run to complete. Defaults to True. Note that if block is False and mode is “local”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated.
  • run_id – Note: this argument is used internally by the MLflow project APIs and should not be specified. If specified, the given run ID will be used instead of creating a new run.

Returns: A SubmittedRun exposing information (e.g. run ID) about the launched run. The returned SubmittedRun is not thread-safe.