mlflow.sagemaker
The mlflow.sagemaker module provides an API for deploying MLflow models to Amazon SageMaker.
- mlflow.sagemaker.delete(app_name, region_name='us-west-2', archive=False, synchronous=True, timeout_seconds=300)
Delete a SageMaker application.
Parameters:
- app_name – Name of the deployed application.
- region_name – Name of the AWS region in which the application is deployed.
- archive – If True, resources associated with the specified application, such as its associated models and endpoint configuration, are preserved. If False, these resources are deleted. In order to use archive=False, delete() must be executed synchronously with synchronous=True.
- synchronous – If True, this function blocks until the deletion process succeeds or encounters an irrecoverable failure. If False, this function returns immediately after starting the deletion process. It will not wait for the deletion process to complete; in this case, the caller is responsible for monitoring the status of the deletion process via native SageMaker APIs or the AWS console.
- timeout_seconds – If synchronous is True, the deletion process returns after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the status of the deletion process via native SageMaker APIs or the AWS console. If synchronous is False, this parameter is ignored.
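A minimal synchronous-deletion sketch; the application name below is illustrative and assumes an application was previously deployed to us-west-2:
>>> import mlflow.sagemaker as mfs
>>> # Tear down the application and delete (rather than archive) its
>>> # associated models and endpoint configuration; archive=False requires
>>> # synchronous=True, as noted above.
>>> mfs.delete(app_name='my-app', region_name='us-west-2',
...            archive=False, synchronous=True, timeout_seconds=300)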
- mlflow.sagemaker.deploy(app_name, model_uri, execution_role_arn=None, bucket=None, image_url=None, region_name='us-west-2', mode='create', archive=False, instance_type='ml.m4.xlarge', instance_count=1, vpc_config=None, flavor=None, synchronous=True, timeout_seconds=1200)
Deploy an MLflow model on AWS SageMaker. The currently active AWS account must have correct permissions set up.
This function creates a SageMaker endpoint. For more information about the input data formats accepted by this endpoint, see the MLflow deployment tools documentation.
Parameters:
- app_name – Name of the deployed application.
- model_uri – The location, in URI format, of the MLflow model to deploy to SageMaker. For example:
/Users/me/path/to/local/model
relative/path/to/local/model
s3://my_bucket/path/to/model
runs:/<mlflow_run_id>/run-relative/path/to/model
For more information about supported URI schemes, see Referencing Artifacts.
- execution_role_arn – The name of an IAM role granting the SageMaker service permissions to access the specified Docker image and S3 bucket containing MLflow model artifacts. If unspecified, the currently-assumed role will be used. This execution role is passed to the SageMaker service when creating a SageMaker model from the specified MLflow model. It is passed as the ExecutionRoleArn parameter of the SageMaker CreateModel API call. This role is not assumed for any other call. For more information about SageMaker execution roles for model creation, see https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html.
- bucket – S3 bucket where model artifacts will be stored. Defaults to a SageMaker-compatible bucket name.
- image_url – URL of the ECR-hosted Docker image the model should be deployed into, produced by mlflow sagemaker build-and-push-container. This parameter can also be specified by the environment variable MLFLOW_SAGEMAKER_DEPLOY_IMG_URL.
- region_name – Name of the AWS region to which to deploy the application.
- mode – The mode in which to deploy the application. Must be one of the following:
  - mlflow.sagemaker.DEPLOYMENT_MODE_CREATE – Create an application with the specified name and model. This fails if an application of the same name already exists.
  - mlflow.sagemaker.DEPLOYMENT_MODE_REPLACE – If an application of the specified name exists, its model(s) is replaced with the specified model. If no such application exists, it is created with the specified name and model.
  - mlflow.sagemaker.DEPLOYMENT_MODE_ADD – Add the specified model to a pre-existing application with the specified name, if one exists. If the application does not exist, a new application is created with the specified name and model. NOTE: If the application already exists, the specified model is added to the application’s corresponding SageMaker endpoint with an initial weight of zero (0). To route traffic to the model, update the application’s associated endpoint configuration using either the AWS console or the UpdateEndpointWeightsAndCapacities function defined in https://docs.aws.amazon.com/sagemaker/latest/dg/API_UpdateEndpointWeightsAndCapacities.html.
- archive – If True, any pre-existing SageMaker application resources that become inactive (i.e. as a result of deploying in mlflow.sagemaker.DEPLOYMENT_MODE_REPLACE mode) are preserved. These resources may include unused SageMaker models and endpoint configurations that were associated with a prior version of the application endpoint. If False, these resources are deleted. In order to use archive=False, deploy() must be executed synchronously with synchronous=True.
- instance_type – The type of SageMaker ML instance on which to deploy the model. For a list of supported instance types, see https://aws.amazon.com/sagemaker/pricing/instance-types/.
- instance_count – The number of SageMaker ML instances on which to deploy the model.
- vpc_config – A dictionary specifying the VPC configuration to use when creating the new SageMaker model associated with this application. The acceptable values for this parameter are identical to those of the VpcConfig parameter in the SageMaker boto3 client (https://boto3.readthedocs.io/en/latest/reference/services/sagemaker.html#SageMaker.Client.create_model). For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html.
Example:
>>> import mlflow.sagemaker as mfs
>>> vpc_config = {
...     'SecurityGroupIds': [
...         'sg-123456abc',
...     ],
...     'Subnets': [
...         'subnet-123456abc',
...     ]
... }
>>> mfs.deploy(..., vpc_config=vpc_config)
- flavor – The name of the flavor of the model to use for deployment. Must be either None or one of mlflow.sagemaker.SUPPORTED_DEPLOYMENT_FLAVORS. If None, a flavor is automatically selected from the model’s available flavors. If the specified flavor is not present or not supported for deployment, an exception will be thrown.
- synchronous – If True, this function will block until the deployment process succeeds or encounters an irrecoverable failure. If False, this function will return immediately after starting the deployment process. It will not wait for the deployment process to complete; in this case, the caller is responsible for monitoring the health and status of the pending deployment via native SageMaker APIs or the AWS console.
- timeout_seconds – If synchronous is True, the deployment process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending deployment using native SageMaker APIs or the AWS console. If synchronous is False, this parameter is ignored.
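A hedged sketch of a typical call, not a prescribed workflow: the application name, run ID, and execution role ARN are placeholders, and the call assumes a deployable model was logged under an MLflow run and that suitable AWS credentials are active:
>>> import mlflow.sagemaker as mfs
>>> # Replace the model behind an existing application (or create the
>>> # application if it does not exist), blocking until SageMaker reports a
>>> # definitive result.
>>> mfs.deploy(app_name='my-app',
...            model_uri='runs:/<mlflow_run_id>/model',
...            execution_role_arn='arn:aws:iam::123456789012:role/MySageMakerRole',
...            region_name='us-west-2',
...            mode=mfs.DEPLOYMENT_MODE_REPLACE,
...            instance_type='ml.m4.xlarge',
...            instance_count=1,
...            synchronous=True)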
- mlflow.sagemaker.push_image_to_ecr(image='mlflow-pyfunc')
Push local Docker image to AWS ECR.
The image is pushed under the currently active AWS account and to the currently active AWS region.
Parameters:
- image – Docker image name.
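A short sketch, assuming a local Docker image named mlflow-pyfunc already exists and the active AWS credentials are permitted to push to ECR:
>>> import mlflow.sagemaker as mfs
>>> # Push the locally built image to ECR under the active account and region.
>>> mfs.push_image_to_ecr(image='mlflow-pyfunc')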
- mlflow.sagemaker.run_local(model_uri, port=5000, image='mlflow-pyfunc', flavor=None)
Serve a model locally in a SageMaker-compatible Docker container.
Parameters:
- model_uri – The location, in URI format, of the MLflow model to serve locally, for example:
/Users/me/path/to/local/model
relative/path/to/local/model
s3://my_bucket/path/to/model
runs:/<mlflow_run_id>/run-relative/path/to/model
For more information about supported URI schemes, see Referencing Artifacts.
- port – Local port.
- image – Name of the Docker image to be used.
- flavor – The name of the flavor of the model to use for local serving. If None, a flavor is automatically selected from the model’s available flavors. If the specified flavor is not present or not supported for deployment, an exception is thrown.
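A minimal local-serving sketch; the run URI is a placeholder, and the call assumes the mlflow-pyfunc image is available locally:
>>> import mlflow.sagemaker as mfs
>>> # Serve the model from a SageMaker-compatible container on local port 5001
>>> # (overriding the default port of 5000).
>>> mfs.run_local(model_uri='runs:/<mlflow_run_id>/model',
...               port=5001,
...               image='mlflow-pyfunc')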