Today at PyTorch Developer Day, Facebook’s AI and PyTorch engineering team, in collaboration with Databricks’ MLflow engineering team, announced an extended PyTorch integration with MLflow. This joint engineering effort gives PyTorch developers an “end-to-end exploration to production platform for PyTorch” built on MLflow. As part of MLflow 1.12.0, the extended functionality includes:

  • Autologging for PyTorch Lightning models: Call mlflow.pytorch.autolog() to enable automatic logging of metrics, parameters, and models from PyTorch Lightning model training (see the first example after this list).
  • Logging, loading, and serving TorchScript models: mlflow.pytorch.log_model and mlflow.pytorch.load_model now support logging and loading TorchScript models, which can then be used for inference via MLflow’s built-in model deployment tools (see the second example below).
  • Deploying PyTorch models to TorchServe: MLflow now supports deploying logged PyTorch models to TorchServe for performant, real-time inference via the MLflow TorchServe plugin (see the final example below).
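
To give a feel for the autologging workflow, here is a minimal, self-contained sketch: the TinyRegressor module and the synthetic data are illustrative stand-ins for your own LightningModule and DataLoader, not part of the MLflow API.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
import mlflow.pytorch


class TinyRegressor(pl.LightningModule):
    """A toy LightningModule standing in for a real model."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Synthetic data so the example runs end to end
features = torch.randn(256, 10)
targets = torch.randn(256, 1)
train_loader = DataLoader(TensorDataset(features, targets), batch_size=32)

# One call enables automatic logging of metrics, parameters, and the trained model
mlflow.pytorch.autolog()

trainer = pl.Trainer(max_epochs=3)
with mlflow.start_run():
    trainer.fit(TinyRegressor(), train_loader)
```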

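The TorchScript support looks like the sketch below: a small model is compiled with torch.jit.script, logged, and loaded back by run URI. The model itself and the artifact path "scripted_model" are just example names.

```python
import torch
import mlflow.pytorch

# Compile a small model to TorchScript
net = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU())
scripted_model = torch.jit.script(net)

# Log the TorchScript model to the tracking server
with mlflow.start_run() as run:
    mlflow.pytorch.log_model(scripted_model, "scripted_model")

# Load it back by run URI and run inference
loaded_model = mlflow.pytorch.load_model(f"runs:/{run.info.run_id}/scripted_model")
print(loaded_model(torch.randn(1, 4)))
```
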
Install MLflow 1.12 and the TorchServe plugin to try these new features!
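
The plugin is published on PyPI as mlflow-torchserve, so `pip install mlflow mlflow-torchserve` is typically all that is needed. As a rough sketch of the deployment workflow through MLflow’s deployments API, the snippet below creates a TorchServe deployment from a previously logged model; the deployment name, run ID, model file, and handler are placeholders, and a working TorchServe installation is assumed on the host.

```python
from mlflow.deployments import get_deploy_client

# The "torchserve" target is registered by the mlflow-torchserve plugin
client = get_deploy_client("torchserve")

# Create a deployment from a previously logged model
client.create_deployment(
    name="tiny_regressor",
    model_uri="runs:/<run_id>/scripted_model",
    config={
        "MODEL_FILE": "tiny_regressor.py",  # module defining the model class
        "HANDLER": "handler.py",            # TorchServe inference handler
    },
)
```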