Provides the MLflow fluent API, allowing management of an active MLflow run. For example:
    import mlflow

    mlflow.start_run()
    mlflow.log_param("my", "param")
    mlflow.log_metric("score", 100)
    mlflow.end_run()
You can also use syntax like this:
with mlflow.start_run() as run: ...
which will automatically terminate the run at the end of the block.
mlflow.ActiveRun(run)
Wrapper around mlflow.entities.Run to allow using Python “with” syntax.
log_param(key, value)
Log the passed-in parameter under the current run, creating a run if necessary.
- key – Parameter name (string).
- value – Parameter value (string, but will be string-ified if not).
log_metric(key, value)
Log the passed-in metric under the current run, creating a run if necessary.
- key – Metric name (string).
- value – Metric value (float).
log_artifacts(local_dir, artifact_path=None)
Log all the contents of a local directory as artifacts of the run.
log_artifact(local_path, artifact_path=None)
Log a local file or directory as an artifact of the currently active run.
start_run(run_uuid=None, experiment_id=None, source_name=None, source_version=None, entry_point_name=None, source_type=None, run_name=None)
Start a new MLflow run, setting it as the active run under which metrics and params will be logged. The return value can be used as a context manager within a with block; otherwise, end_run() must be called to terminate the current run. If run_uuid is passed or the MLFLOW_RUN_ID environment variable is set, start_run attempts to resume a run with the specified run ID (with run_uuid taking precedence over MLFLOW_RUN_ID), and other parameters are ignored.
- run_uuid – If specified, get the run with the specified UUID and log metrics and params under that run. The run’s end time is unset and its status is set to running, but the run’s other attributes (source_version, source_type, etc.) are not changed.
- experiment_id – Used only when run_uuid is unspecified. ID of the experiment under which to create the current run. If unspecified, the run is created under a new experiment with a randomly generated name.
- source_name – Name of the source file or URI of the project to be associated with the run. Defaults to the current file if none provided.
- source_version – Optional Git commit hash to associate with the run.
- entry_point_name – Optional name of the entry point for the current run.
- source_type – Integer enum value describing the type of the run (“local”, “project”, etc.). Defaults to mlflow.entities.SourceType.LOCAL.
Returns: mlflow.ActiveRun object that acts as a context manager wrapping the run’s state.
get_artifact_uri()
Return the artifact URI of the currently active run. Calls to log_artifact and log_artifacts write artifact(s) to subdirectories of the returned URI.
set_tracking_uri(uri)
Set the tracking server URI to the passed-in value. This does not affect the currently active run (if one exists), but takes effect for any successive runs.
The provided URI can be one of three types:
- An empty string, or a local file path, prefixed with file:/. Data is stored locally at the provided file (or ./mlruns if empty).
- An HTTP URI like https://my-tracking-server:5000.
- A Databricks workspace, provided as just the string ‘databricks’ or, to use a specific Databricks profile (per the Databricks CLI), ‘databricks://profileName’.
run(uri, entry_point='main', version=None, parameters=None, experiment_id=None, mode=None, cluster_spec=None, git_username=None, git_password=None, use_conda=True, storage_dir=None, block=True, run_id=None)
Run an MLflow project from the given URI.
Supports downloading projects from Git URIs with a specified version, or copying them from the file system. For Git-based projects, a commit can be specified as the version.
Raises: ExecutionException – If a run launched in blocking mode is unsuccessful.
- uri – URI of project to run. Expected to be either a relative/absolute local filesystem path or a git repository URI (e.g. https://github.com/mlflow/mlflow-example) pointing to a project directory containing an MLproject file.
- entry_point – Entry point to run within the project. If no entry point with the specified name is found, attempts to run the project file entry_point as a script, using “python” to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files.
- experiment_id – ID of experiment under which to launch the run.
- mode – Execution mode for the run. Can be set to “local” or “databricks”.
- cluster_spec – Path to JSON file describing the cluster to use when launching a run on Databricks.
- git_username – Username for HTTP(S) authentication with Git.
- git_password – Password for HTTP(S) authentication with Git.
- use_conda – If True (the default), creates a new Conda environment for the run and installs project dependencies within that environment. Otherwise, runs the project in the current environment without installing any project dependencies.
- storage_dir – Only used if mode is “local”. MLflow downloads artifacts from distributed URIs passed to parameters of type “path” to subdirectories of storage_dir.
- block – Whether or not to block while waiting for a run to complete. Defaults to True.
Note that if block is False and mode is “local”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated.
- run_id – Note: this argument is used internally by the MLflow project APIs and should not be specified. If specified, the given run ID will be used instead of creating a new run.
Returns: A SubmittedRun exposing information (e.g. run ID) about the launched run. The returned SubmittedRun is not thread-safe.