This directory provides a Docker Compose setup for running MLflow locally with a PostgreSQL backend store and MinIO (S3-compatible) artifact storage. It's intended for quick evaluation and local development.
## What's included

- MLflow Tracking Server — exposed on your host (default http://localhost:5000)
- PostgreSQL — persists MLflow's metadata (experiments, runs, params, metrics)
- MinIO — stores run artifacts via an S3-compatible API
Compose automatically reads configuration from a local .env file in this directory.
## Prerequisites

- Git
- Docker and Docker Compose
  - Windows/macOS: Docker Desktop
  - Linux: Docker Engine + the `docker compose` plugin
Verify your setup:

```sh
docker --version
docker compose version
```

## Get the code

```sh
git clone https://github.com/mlflow/mlflow.git
cd docker-compose
```

## Configure the environment

Copy the example environment file and modify as needed:

```sh
cp .env.dev.example .env
```

The `.env` file defines container image tags, ports, credentials, and storage configuration. Open it and review the values before starting the stack.
Common variables:

- MLflow
  - `MLFLOW_PORT=5000` — host port for the MLflow UI/API
  - `MLFLOW_DEFAULT_ARTIFACT_ROOT=s3://mlflow/` — artifact store URI
  - `MLFLOW_S3_ENDPOINT_URL=http://minio:9000` — S3 endpoint (inside the Compose network)
- PostgreSQL
  - `POSTGRES_USER=mlflow`
  - `POSTGRES_PASSWORD=mlflow`
  - `POSTGRES_DB=mlflow`
- MinIO (S3-compatible)
  - `MINIO_ROOT_USER=minio`
  - `MINIO_ROOT_PASSWORD=minio123`
  - `MINIO_HOST=minio`
  - `MINIO_PORT=9000`
  - `MINIO_BUCKET=mlflow`
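If a helper script outside Compose needs the same settings, it can read them from `.env` directly. As a rough illustration of how these `KEY=VALUE` pairs are consumed, here is a minimal, hypothetical parser in Python (Compose's own parser handles more syntax, such as quoting and variable interpolation):

```python
from pathlib import Path


def load_env(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env-style file.

    Blank lines and '#' comments are skipped; the value is everything
    after the first '='. This is a sketch, not Compose's full parser.
    """
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```

For example, `load_env(".env")["MLFLOW_PORT"]` would return the host port as a string.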
## Start the stack

```sh
docker compose up -d
```

This:

- Builds/pulls images as needed
- Creates a user-defined network
- Starts the `postgres`, `minio`, and `mlflow` containers

Check status:

```sh
docker compose ps
```

View logs (useful on first run):

```sh
docker compose logs -f
```

Open the MLflow UI:

- URL: http://localhost:5000 (or the port set in `.env`)

You can now create experiments, run training scripts, and log metrics, parameters, and artifacts to this local MLflow instance.
## Stop and clean up

To stop and remove the containers and network:

```sh
docker compose down
```

Data is preserved in Docker volumes. To remove the volumes as well (irreversible), run:

```sh
docker compose down -v
```
## Troubleshooting

- **Verify connectivity.** If MLflow can't write artifacts, confirm your S3 settings:
  - `MLFLOW_DEFAULT_ARTIFACT_ROOT` points to your MinIO bucket (e.g., `s3://mlflow/`)
  - `MLFLOW_S3_ENDPOINT_URL` is reachable from the MLflow container (often `http://minio:9000`)
- **Resetting the environment.** If you want a clean slate, stop the stack and remove volumes:

  ```sh
  docker compose down -v
  docker compose up -d
  ```

- **Logs.**
  - MLflow server: `docker compose logs -f mlflow`
  - PostgreSQL: `docker compose logs -f postgres`
  - MinIO: `docker compose logs -f minio`
- **Port conflicts.** If `5000` (or any other port) is in use, change it in `.env` and restart:

  ```sh
  docker compose down
  docker compose up -d
  ```
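When debugging connectivity, a quick TCP probe from the host tells you whether a service port is even open before you dig into MLflow or MinIO configuration. A minimal sketch in Python; the ports are the stack's defaults, so adjust them if you changed `.env`:

```python
import socket


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Default host-side ports for this stack; adjust to match your .env.
    for name, port in [("MLflow", 5000), ("MinIO", 9000)]:
        status = "up" if is_reachable("localhost", port) else "unreachable"
        print(f"{name} on localhost:{port}: {status}")
```

Note that this probes from the host. From inside the MLflow container, MinIO is reached as `minio:9000`, so a host-side success does not rule out an in-network misconfiguration (and vice versa).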
## How it works

- MLflow uses PostgreSQL as the backend store for experiment/run metadata.
- MLflow uses MinIO as the artifact store via S3 APIs.
- Docker Compose wires services on a shared network; MLflow talks to PostgreSQL and MinIO by container name (e.g., `postgres`, `minio`).
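The `docker-compose.yml` in this directory is the source of truth for this wiring. Purely as an illustration of the pattern, a trimmed-down sketch (image tags and commands here are hypothetical) might look like:

```yaml
# Illustrative sketch only; see the real docker-compose.yml in this directory.
services:
  postgres:
    image: postgres            # tag omitted: check the real file
    env_file: .env
  minio:
    image: minio/minio
    env_file: .env
  mlflow:
    env_file: .env
    ports:
      - "${MLFLOW_PORT}:5000"  # publish the UI/API on the host
    depends_on:
      - postgres
      - minio
# All services join the same Compose network, so "postgres" and
# "minio" resolve as hostnames from inside the mlflow container.
```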
## Next steps

- Point your training scripts to this server:

  ```sh
  export MLFLOW_TRACKING_URI=http://localhost:5000
  ```

- Start logging runs with `mlflow.start_run()` (Python) or the MLflow CLI.
- Customize `.env` and `docker-compose.yml` to fit your local workflow (e.g., change image tags, add volumes).
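A training script running on the host (outside the Compose network) needs the tracking URI plus S3 credentials matching the MinIO values in `.env`. A sketch of that client-side setup, assuming the default values shown above; note that from the host the S3 endpoint is `localhost:9000`, not the in-network `minio:9000`:

```python
import os


def configure_mlflow_client(port: int = 5000) -> dict[str, str]:
    """Set the environment variables an MLflow client needs for this stack.

    Values mirror the .env defaults shown earlier; change them if you
    customized credentials or ports.
    """
    settings = {
        "MLFLOW_TRACKING_URI": f"http://localhost:{port}",
        # From the host, MinIO is reached via its published port, not the
        # in-network "minio" hostname.
        "MLFLOW_S3_ENDPOINT_URL": "http://localhost:9000",
        # MLflow's S3 client picks up standard AWS credential variables.
        "AWS_ACCESS_KEY_ID": "minio",
        "AWS_SECRET_ACCESS_KEY": "minio123",
    }
    os.environ.update(settings)
    return settings
```

After calling `configure_mlflow_client()`, `import mlflow` followed by `mlflow.start_run()` in the same process will log runs and artifacts to the local stack.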
You now have a fully local MLflow stack with persistent metadata and artifact storage—ideal for development and experimentation.