Introduction
In today’s data-intensive world, machine learning models can be employed to create value for businesses and individuals alike. With the advent of cloud computing and services such as Azure Machine Learning Service, building and deploying machine learning models has become easier than ever. In this tutorial, we will walk through the steps for deploying a machine learning model to Azure Machine Learning Service.
Prerequisites
Before we begin, ensure you have access to an Azure subscription and have installed the Azure Machine Learning Python SDK.
Step 1: Set up Azure Machine Learning workspace
An Azure Machine Learning workspace provides a centralized location to manage data, compute resources, and experiments. To create a workspace, follow these steps:
- Sign in to Azure portal and click “Create a resource”.
- Search for “Machine Learning” and select “Machine Learning” under the “AI + Machine Learning” category.
- Click “Create” and fill in the required fields. You can create a new resource group or select an existing one.
- Under “Workspace configuration”, select your subscription and the region in which to create the workspace, and set “Workspace name” to a unique name.
- Click “Review + create” and then “Create” to create the workspace.
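Once the workspace exists, the SDK can connect to it from code via a small config.json file that Workspace.from_config() reads. The sketch below writes that file; the subscription ID, resource group, and workspace name are placeholders to replace with your own values:

```python
import json

# Placeholder values -- replace with your own subscription and workspace details.
config = {
    "subscription_id": "<your-subscription-id>",
    "resource_group": "myresourcegroup",
    "workspace_name": "myworkspace",
}

# Workspace.from_config() looks for this file in the current directory
# (or in a .azureml/ subdirectory).
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# With the file in place, connecting is one line (requires the azureml-sdk):
# from azureml.core import Workspace
# ws = Workspace.from_config()
```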
Step 2: Prepare the machine learning model
Before we can deploy a machine learning model, we must first prepare it for deployment. In this example, we will use a pre-trained model for image classification. Specifically, we will use the ResNet50 model pre-trained on the ImageNet dataset.
- Install the necessary Python libraries: tensorflow, keras, and numpy.
- Load the pre-trained ResNet50 model using the Keras library.
- Convert the Keras model to ONNX format using the keras2onnx library (tf2onnx also works for TensorFlow 2.x Keras models).
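As a sketch, the steps above might look like the following. The conversion function assumes the keras2onnx package is installed; the preprocessing helper reproduces the ImageNet mean-subtraction and BGR channel ordering that Keras’s ResNet50 expects (the same transform as keras.applications.resnet50.preprocess_input):

```python
import numpy as np

# ImageNet per-channel means (BGR order), as used by the original Caffe ResNet50.
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(image_rgb):
    """Convert an RGB float array of shape (224, 224, 3) into the
    BGR, mean-subtracted form ResNet50 expects, with a batch axis added."""
    bgr = image_rgb[..., ::-1].astype(np.float32)  # RGB -> BGR
    bgr -= IMAGENET_MEANS_BGR                      # subtract per-channel means
    return bgr[np.newaxis, ...]                    # add batch dimension

def export_resnet50_to_onnx(path="resnet50.onnx"):
    """Load the pre-trained ResNet50 and save it as ONNX.
    Imports are local so the rest of this module works without TF installed."""
    from tensorflow.keras.applications import ResNet50
    import keras2onnx  # assumption: pip install keras2onnx

    model = ResNet50(weights="imagenet")
    onnx_model = keras2onnx.convert_keras(model, model.name)
    keras2onnx.save_model(onnx_model, path)
```

After exporting, the resulting .onnx file would typically be registered in the workspace (e.g. with Model.register) so the deployment steps below can retrieve it.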
Step 3: Create a scoring script
A scoring script defines how the deployed service loads the model and turns incoming requests into predictions. In this example, it will return the predicted class label and the probability for each prediction.
- Create a file called “score.py”.
- Define a function called “init()” that will load the ONNX model and its associated metadata.
- Define a function called “run(input_data)” that will accept input data, preprocess it, and use the loaded ONNX model to make a prediction.
- Have “run()” return an “output” object containing the predicted class label and probability for each prediction.
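Put together, a minimal score.py along these lines might look as follows. The model filename and the use of the AZUREML_MODEL_DIR environment variable (which Azure ML sets to the directory containing the registered model) are assumptions of this sketch; the numpy helpers keep the label/probability extraction easy to check in isolation:

```python
import json
import os
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis. (Only needed if the
    exported model outputs raw logits; Keras's ResNet50 already includes
    a softmax layer, in which case this step is a no-op in spirit.)"""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)

def top_prediction(probs, class_labels):
    """Return (label, probability) for the highest-scoring class."""
    idx = int(np.argmax(probs))
    return class_labels[idx], float(probs[idx])

def init():
    # Called once when the service starts: load the ONNX model.
    global session
    import onnxruntime  # deferred so the helpers above work without it
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "resnet50.onnx")
    session = onnxruntime.InferenceSession(model_path)

def run(input_data):
    # Called per request: parse JSON, run the model, return label + probability.
    image = np.array(json.loads(input_data)["image"], dtype=np.float32)
    scores = session.run(None, {session.get_inputs()[0].name: image})[0]
    probs = softmax(scores)[0]
    # Placeholder labels; a real service would ship the ImageNet label file.
    labels = [str(i) for i in range(probs.shape[0])]
    label, prob = top_prediction(probs, labels)
    return {"output": {"label": label, "probability": prob}}
```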
Step 4: Create an environment file
An environment file specifies the dependencies required by the scoring script and the compute environment on which the script will run. In this example, we will use a Python 3.6 environment.
- Create a file called “myenv.yml”.
- Specify the Python version and the necessary dependencies. In this example, we will require tensorflow, onnxruntime, and numpy.
name: myenv
dependencies:
- python=3.6
- pip
- pip:
  - tensorflow
  - onnxruntime
  - numpy
Step 5: Define the deployment configuration
The deployment configuration specifies the compute target, entry script, environment file, and other necessary settings for deploying the machine learning model. In this example, we will use the Azure Kubernetes Service (AKS) as the compute target.
- Define the deployment configuration using the Azure Machine Learning SDK.
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AksWebservice
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.exceptions import ComputeTargetException

ws = Workspace.from_config()  # reads config.json for the workspace

aks_name = "myaks"
aks_resource_group = "myresourcegroup"  # informational: AKS is provisioned into the workspace's resource group
aks_location = "westus2"
model_name = "myonnxmodel"
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
- Define the AKS compute target.
try:
    aks_target = AksCompute(workspace=ws, name=aks_name)
    print("Found existing AKS compute target:", aks_name)
except ComputeTargetException:
    print("Creating a new AKS compute target...")
    prov_config = AksCompute.provisioning_configuration(location=aks_location)
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
- Define the inference configuration using the scoring script and environment file. Note that InferenceConfig does not take input and output type declarations; parsing the request is handled inside the scoring script itself.
inference_config = InferenceConfig(
    source_directory='.',
    entry_script='score.py',
    runtime='python',
    conda_file='myenv.yml'
)
- Retrieve the ONNX model from the Azure Machine Learning workspace. The model must already be registered (for example with Model.register) before it can be retrieved.
model = Model(ws, name=model_name, version=1)
Step 6: Deploy the machine learning model
Now that we have prepared the machine learning model, scoring script, environment file, and deployment configuration, we can deploy the model to Azure Machine Learning Service using the following command:
aks_service = Model.deploy(ws, name="myonnx-service", models=[model], inference_config=inference_config, deployment_config=deployment_config, deployment_target=aks_target)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
The above command will deploy the machine learning model to the AKS compute target and output its deployment state.
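Once the service reports a Healthy state, it can be called over HTTP. The payload format below matches the scoring script described in Step 3; the scoring URI and authentication key come from aks_service.scoring_uri and aks_service.get_keys(). Building the JSON body is plain Python, so only the final requests.post call needs a live endpoint:

```python
import json
import numpy as np

def build_payload(image):
    """Serialize a numpy image batch into the JSON body the scoring script expects."""
    return json.dumps({"image": np.asarray(image).tolist()})

# Example payload for a single 224x224 RGB image batch.
payload = build_payload(np.zeros((1, 224, 224, 3), dtype=np.float32))

# Calling the deployed service (requires the requests package and a live endpoint):
# import requests
# key = aks_service.get_keys()[0]
# headers = {"Content-Type": "application/json",
#            "Authorization": "Bearer " + key}
# response = requests.post(aks_service.scoring_uri, data=payload, headers=headers)
# print(response.json())
```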
Conclusion
In this tutorial, we have walked through the steps for deploying a machine learning model to Azure Machine Learning Service. By following these steps, you can easily deploy a machine learning model to Azure and make it available for consumption by other applications and services.