The MLflow Tracking component lets you log and query experiments using either REST or Python.
MLflow Tracking is organized around the concept of runs, which are executions of some piece of data science code. Each run records the following information:
- Code Version: Git commit used to execute the run, if it was executed from an MLflow Project.
- Start & End Time: Start and end time of the run.
- Source: Name of the file executed to launch the run, or the project name and entry point if the run was executed from an MLflow Project.
- Parameters: Key-value input parameters of your choice. Both keys and values are strings.
- Metrics: Key-value metrics where the value is numeric. Each metric can be updated throughout the course of the run (for example, to track how your model's loss function is converging), and MLflow will record and let you visualize the metric's full history.
- Artifacts: Output files in any format. For example, you can record images (such as PNGs), models (such as a pickled scikit-learn model), or even data files (such as a Parquet file) as artifacts.
Runs can be recorded from anywhere you run your code through MLflow’s Python or REST APIs: for example, you can record them in a standalone program, on a remote cloud machine, or in an interactive notebook. If you record runs in an MLflow Project, however, MLflow remembers the project URI and source version.
Finally, runs can optionally be organized into experiments, which group together runs for a specific task. You can create an experiment via the mlflow experiments CLI, with mlflow.create_experiment(), or via the corresponding REST parameters. The MLflow UI and API let you create and search for experiments.
Once your runs have been recorded, you can query them using the Tracking UI or the MLflow API.
Where Runs Get Recorded¶
MLflow runs can be recorded either locally in files or remotely to a Tracking Server.
By default, the MLflow Python API logs runs to files in an mlruns directory wherever you ran your program. You can then run mlflow ui to see the logged runs. To log runs remotely, set the MLFLOW_TRACKING_URI environment variable to a server's URI or call mlflow.set_tracking_uri(). You can also run your own tracking server to record runs.
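As a minimal sketch, switching from the default local mlruns directory to a remote server (the server address below is a placeholder, not a real endpoint):

import mlflow

# Point MLflow at a remote tracking server instead of the local ./mlruns directory.
# "http://my-tracking-server:5000" is a hypothetical address; substitute your own.
mlflow.set_tracking_uri("http://my-tracking-server:5000")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)  # this run is now recorded on the server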
Logging Data to Runs¶
You can log data to runs using either the MLflow REST API or the Python API. In this section, we show the Python API, but there are corresponding REST APIs as well.
Basic Logging Functions¶
mlflow.set_tracking_uri() connects to a tracking URI. You can also set the MLFLOW_TRACKING_URI environment variable to have MLflow find a URI from there. In both cases, the URI can either be an HTTP/HTTPS URI for a remote server, or a local path to log data to a directory. The URI defaults to mlruns.

mlflow.get_tracking_uri() returns the current tracking URI.
mlflow.create_experiment() creates a new experiment and returns its ID. Runs can be launched under the experiment by passing the experiment ID to mlflow.start_run().
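As a quick sketch (the experiment name is made up for illustration):

import mlflow

# Create an experiment and launch a run under it by passing its ID.
experiment_id = mlflow.create_experiment("fraud-detection")
with mlflow.start_run(experiment_id=experiment_id):
    mlflow.log_param("threshold", 0.8)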
mlflow.start_run() returns the currently active run (if one exists), or starts a new run and returns an mlflow.tracking.ActiveRun object usable as a context manager for the current run. You do not need to call start_run explicitly: calling one of the logging functions with no active run will automatically start a new one.
mlflow.end_run() ends the currently active run, if any, taking an optional run status.
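For instance, a minimal sketch using explicit start and end calls instead of a context manager:

import mlflow

run = mlflow.start_run()   # start (and return) the active run
mlflow.log_param("x", 1)   # logs to the active run
mlflow.end_run()           # marks the run as finished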
mlflow.active_run() returns an mlflow.tracking.Run object corresponding to the currently active run, if any.
mlflow.log_param() logs a key-value parameter in the currently active run. The keys and
values are both strings.
mlflow.log_metric() logs a key-value metric. The value must always be a number. MLflow will
remember the history of values for each metric.
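For example, a sketch that updates a metric over several iterations (the loss values are made up):

import mlflow

with mlflow.start_run():
    for epoch in range(3):
        # Each call appends to the metric's history rather than overwriting it.
        mlflow.log_metric("loss", 1.0 / (epoch + 1))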
mlflow.log_artifact() logs a local file as an artifact, optionally taking an artifact_path to place it within the run's artifact URI. Run artifacts can be organized into directories, so you can place the artifact in a directory this way.
mlflow.log_artifacts() logs all the files in a given directory as artifacts, again taking an optional artifact_path.

mlflow.get_artifact_uri() returns the URI that artifacts from the current run should be logged to.
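A sketch combining the three artifact functions (the file and directory names are hypothetical):

import os
import mlflow

# Write some output files locally first.
os.makedirs("outputs", exist_ok=True)
with open("outputs/predictions.txt", "w") as f:
    f.write("1,0,1\n")

with mlflow.start_run():
    # Log a single file under a "results" directory in the run's artifact URI.
    mlflow.log_artifact("outputs/predictions.txt", artifact_path="results")
    # Or log every file in the directory at once.
    mlflow.log_artifacts("outputs", artifact_path="results")
    # Print where this run's artifacts are stored.
    print(mlflow.get_artifact_uri())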
Launching Multiple Runs in One Program¶
Sometimes you want to execute multiple MLflow runs in the same program: for example, maybe you are performing a hyperparameter search locally or your experiments are just very fast to run. This is easy to do because the ActiveRun object returned by mlflow.start_run() is a Python context manager. You can "scope" each run to just one block of code as follows:
with mlflow.start_run():
    mlflow.log_param("x", 1)
    mlflow.log_metric("y", 2)
    ...
The run will remain open throughout the with statement, and will automatically be closed when the statement exits, even if it exits due to an exception.
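For example, a sketch of a small hyperparameter sweep (the parameter values and the metric formula are stand-ins for a real training loop):

import mlflow

for x in (0.1, 0.5, 0.9):
    # Each iteration gets its own run, opened and closed by the with block.
    with mlflow.start_run():
        mlflow.log_param("x", x)
        mlflow.log_metric("y", 2 * x)  # stand-in for a real evaluation metric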
Organizing Runs in Experiments¶
MLflow allows for grouping runs under experiments, which can be useful for comparing runs intended to tackle a particular task. You can create experiments via the CLI (mlflow experiments) or via the mlflow.create_experiment() Python API. The experiment ID for an individual run can be passed via the CLI (e.g. mlflow run ... --experiment-id [ID]) or via the MLFLOW_EXPERIMENT_ID environment variable:
# Prints "created an experiment with ID <id> mlflow experiments create fraud-detection # Set the ID via environment variables export MLFLOW_EXPERIMENT_ID=<id>
# Launch a run. The experiment ID is inferred from the MLFLOW_EXPERIMENT_ID environment
# variable, or from the --experiment-id parameter passed to the MLflow CLI (the latter
# taking precedence)
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)
Tracking UI¶
The Tracking UI lets you visualize, search and compare runs, as well as download run artifacts or metadata for analysis in other tools. If you have been logging runs to a local mlruns directory, run mlflow ui in the directory above it, and it will load the corresponding runs.
Alternatively, the MLflow server serves the same UI.
The UI contains the following key features:
- Experiment-based run listing and comparison
- Searching for runs by parameter or metric value
- Visualizing run metrics
- Downloading run results
Querying Runs Programmatically¶
All of the functions in the Tracking UI can be accessed programmatically through the mlflow.tracking module and the REST API. This makes it easy to do several common tasks:
- Query and compare runs using any data analysis tool of your choice, for example, pandas.
- Determine the artifact URI for a run to feed some of its artifacts into a new run when executing a workflow.
- Load artifacts from past runs as MLflow Models.
- Run automated parameter search algorithms, where you query the metrics from various runs to submit new ones.
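To make this concrete, here is a hedged sketch that fetches a run's logged data, assuming the mlflow.tracking.MlflowClient client class (check the API reference for your MLflow version; the run ID is a placeholder):

from mlflow.tracking import MlflowClient

client = MlflowClient()  # talks to the current tracking URI

# Fetch a run by its ID and inspect what was logged.
run = client.get_run("some-run-id")
print(run.data.params)   # dict of logged parameters
print(run.data.metrics)  # dict of each metric's latest value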
Running a Tracking Server¶
The MLflow tracking server launched via mlflow ui also hosts REST APIs for tracking runs, writing data to the local filesystem. You can specify a tracking server URI with the MLFLOW_TRACKING_URI environment variable, and MLflow's tracking APIs will automatically communicate with the tracking server at that URI to create/get run information, log metrics, and so on.

For example, to launch a run against a local tracking server, launch mlflow ui, set MLFLOW_TRACKING_URI to http://localhost:5000, and run:
import mlflow

with mlflow.start_run():
    mlflow.log_metric("a", 1)
The mlflow.log_metric call will then make an API request to your local tracking server.