
OpenAI Codex + MLflow AI Gateway

Route OpenAI Codex through the MLflow AI Gateway to get centralized tracing and observability, while each developer authenticates with their own OpenAI subscription.

Prerequisites

  • MLflow server running with a SQL backend (e.g. mlflow server --backend-store-uri sqlite:///mlflow.db --port 5000)
  • Codex installed (npm install -g @openai/codex)

Step 1: Create an OpenAI Endpoint

Navigate to the AI Gateway tab at http://localhost:5000/#/gateway and click Create Endpoint.

  • Provider: OpenAI
  • Model: gpt-5 (or your preferred model)
  • Endpoint name: choose a name, e.g. my-codex-endpoint
  • LLM Connection: select an existing connection or create a new one (see Create an LLM Connection)
Tip: The server-side API key in the LLM Connection can be set to a dummy value (e.g. dummy). The gateway detects Codex's User-Agent and forwards the client's own credentials (their OpenAI subscription or their own API key) to the upstream provider instead.

Step 2: Configure Environment Variables

Set the following environment variable so Codex routes through the gateway:

```bash
export OPENAI_BASE_URL="http://localhost:5000/gateway/openai/v1"
```
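Before launching Codex, it can be useful to confirm that something is actually listening at the gateway URL. The following is a small stdlib sketch, not part of MLflow or Codex; the helper name `gateway_reachable` and the `/models` path (the model-listing route that OpenAI-compatible servers usually expose) are assumptions for illustration.

```python
import urllib.error
import urllib.request

# Assumed local gateway base URL from Step 2.
GATEWAY_BASE = "http://localhost:5000/gateway/openai/v1"

def gateway_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if anything answers at the gateway URL.

    Any HTTP response (even 401/404) counts as reachable; only a
    connection failure means no server is listening.
    """
    try:
        urllib.request.urlopen(base_url + "/models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # server responded with an error status: still reachable
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out: not reachable

print("gateway reachable:", gateway_reachable(GATEWAY_BASE))
```

If this prints False, check that the MLflow server from the prerequisites is running on port 5000.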

Step 3: Run Codex

Pass your endpoint name as the model:

```bash
codex --model my-codex-endpoint
```

Codex authenticates with your existing OpenAI credentials, and all requests are proxied through the gateway.
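Instead of exporting OPENAI_BASE_URL and passing --model in every shell, the same routing can be persisted in Codex's config file at ~/.codex/config.toml. This is a sketch based on the Codex CLI's model_providers config format; the provider id mlflow is an arbitrary label chosen here, and the exact field names should be verified against your installed Codex version.

```toml
# ~/.codex/config.toml (sketch; verify fields against your Codex version)
model = "my-codex-endpoint"
model_provider = "mlflow"

[model_providers.mlflow]
name = "MLflow AI Gateway"
base_url = "http://localhost:5000/gateway/openai/v1"
```

With this in place, running codex with no flags should use the gateway endpoint by default.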

What You Get

Every session is captured as an MLflow trace. Open the Logs tab in the MLflow UI to inspect inputs, outputs, token usage, and latency for every request.

Codex trace in MLflow