
Set Up MLflow Server

MLflow Assistant

Need help with this setup? Try MLflow Assistant - a powerful AI assistant that understands your codebase and can set up MLflow for you.

MLflow is open source, and you can set up the MLflow server using either pip or Docker.

Before you can leverage MLflow for your LLM application and AI agent development, you must first start the MLflow server.

Install the Python package manager uv (this also installs the uvx command, which lets you invoke Python tools without installing them).
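If uv is not already installed, one common way to get it is via its install script or pip. This is a minimal sketch assuming a Unix-like shell; check the uv documentation for platform-specific instructions.

shell
# Install uv (also provides uvx); see https://docs.astral.sh/uv/ for other platforms
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or, if you prefer pip:
pip install uv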

Start an MLflow server locally.

shell
uvx mlflow server

This starts the server on port 5000 of your local machine, and you can access the MLflow web UI at http://localhost:5000.
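If port 5000 is already taken, or you want run and trace metadata persisted to a database rather than the default local files, you can pass options to the server command. A sketch using standard mlflow CLI flags (--host, --port, --backend-store-uri); adjust the values for your setup:

shell
# Example: run on a different port and store metadata in a local SQLite database
uvx mlflow server --host 127.0.0.1 --port 5001 --backend-store-uri sqlite:///mlflow.db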

MLflow UI Home

For more guidance on self-hosting the MLflow server, see the Self-Hosting Guide.

info

If you are using MLflow on Databricks, see the environment setup instructions specific to Databricks.

Next Step

Now that the MLflow server is running, you can start tracing your LLM application or AI agent.

Follow this quickstart to send traces from your application or agent to the MLflow server.
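Before running your application, point the MLflow client at the server you just started. A minimal sketch, assuming the default local address from above (adjust the URL if you changed the host or port):

shell
# Tell MLflow clients where to send traces and runs
export MLFLOW_TRACKING_URI=http://localhost:5000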