MLflow Tracking Server#
MLflow is an open-source platform for managing the machine learning lifecycle, including experimentation, reproducibility, deployment, and a central model registry. The MLflow workspace provides a dedicated tracking server for logging and managing your ML experiments.
Using MLflow after deployment#
Once you have deployed the MLflow workspace following the instructions in the workspaces overview, you can connect to the tracking server using the provided URLs.
Accessing the MLflow UI#
After deployment, the workload will expose two connection endpoints:
external_host: Accessible from outside the cluster
internal_host: Accessible from within the cluster
You can access the MLflow UI directly by clicking the “Launch” button in the Workspaces interface, which will open the external host URL.
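Before pointing experiments at the server, you may want to confirm it is reachable from your environment. A minimal sketch using only the Python standard library, assuming the tracking server exposes MLflow's /health endpoint (the helper name tracking_server_ready is illustrative, not an MLflow API):

```python
import urllib.request
import urllib.error

def tracking_server_ready(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the tracking server answers its /health endpoint."""
    try:
        url = base_url.rstrip("/") + "/health"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Hypothetical URL -- substitute your actual external_host value:
# tracking_server_ready("http://your-external-host-url")
```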
Connecting to the tracking server#
To connect your ML experiments to the MLflow tracking server, use the following code snippet:
import mlflow

# Set the tracking URI to connect to your MLflow server
# Replace with your actual external_host URL from the workload output
mlflow.set_tracking_uri("http://your-external-host-url")

# Example: Log a simple experiment
with mlflow.start_run():
    # Log parameters
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("batch_size", 32)

    # Log metrics
    mlflow.log_metric("accuracy", 0.95)
    mlflow.log_metric("loss", 0.05)

    # Log artifacts (files)
    with open("model_info.txt", "w") as f:
        f.write("Model trained successfully")
    mlflow.log_artifact("model_info.txt")
Environment variable setup#
Alternatively, you can set the tracking URI as an environment variable:
# Set the MLflow tracking URI
export MLFLOW_TRACKING_URI=http://your-external-host-url
# Now your Python scripts will automatically use this URI
python your_ml_script.py
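An explicit mlflow.set_tracking_uri() call takes precedence over the environment variable; with neither set, MLflow falls back to a local ./mlruns store. A sketch of that resolution order (resolve_tracking_uri is an illustrative helper, not an MLflow API):

```python
import os

def resolve_tracking_uri(explicit_uri=None):
    """Illustrative helper mimicking MLflow's precedence when choosing a
    tracking URI: an explicit set_tracking_uri() call wins over the
    MLFLOW_TRACKING_URI environment variable, which wins over the
    default local ./mlruns store."""
    if explicit_uri:
        return explicit_uri
    return os.environ.get("MLFLOW_TRACKING_URI", "file:./mlruns")
```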
Using internal host#
If you’re running experiments from within the same cluster (e.g., from a JupyterLab workspace), you can use the internal host, which keeps traffic inside the cluster and avoids the external network path:
import mlflow
# Use internal_host for cluster-internal connections
mlflow.set_tracking_uri("http://your-internal-host-url")
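If the same script runs both inside and outside the cluster, you can pick the host at runtime. One common in-cluster signal is the KUBERNETES_SERVICE_HOST variable, which Kubernetes sets in every pod; the helper below is an illustrative sketch built on that assumption, not part of MLflow:

```python
import os

def pick_tracking_uri(internal_host: str, external_host: str) -> str:
    """Illustrative helper: prefer the internal host when running inside
    the cluster. Kubernetes sets KUBERNETES_SERVICE_HOST in every pod,
    so its presence is a reasonable in-cluster signal."""
    if "KUBERNETES_SERVICE_HOST" in os.environ:
        return internal_host
    return external_host

# Usage (hypothetical URLs -- substitute your workload's output values):
# mlflow.set_tracking_uri(
#     pick_tracking_uri("http://your-internal-host-url",
#                       "http://your-external-host-url"))
```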
Next steps#
Once connected, you can:
View and compare experiments in the MLflow UI
Register models in the MLflow Model Registry
Deploy models using MLflow’s deployment capabilities
Organize experiments with tags and nested runs