{"id":4176,"date":"2023-11-04T23:14:07","date_gmt":"2023-11-04T23:14:07","guid":{"rendered":"http:\/\/localhost:10003\/deploying-a-machine-learning-model-with-azure-functions-and-azure-ml\/"},"modified":"2023-11-05T05:47:57","modified_gmt":"2023-11-05T05:47:57","slug":"deploying-a-machine-learning-model-with-azure-functions-and-azure-ml","status":"publish","type":"post","link":"http:\/\/localhost:10003\/deploying-a-machine-learning-model-with-azure-functions-and-azure-ml\/","title":{"rendered":"Deploying a machine learning model with Azure Functions and Azure ML"},"content":{"rendered":"
Machine learning is a powerful tool that can be used to build predictive models and automate decision-making processes in a variety of applications. However, deploying these models can often be a challenging task. In this tutorial, we will explore how to deploy a machine learning model using Azure Functions and Azure ML.

Azure Functions is a serverless computing platform provided by Microsoft that allows developers to run small pieces of code in the cloud without worrying about infrastructure or server management. Azure Functions enables developers to create event-driven, scalable, and cost-effective solutions that can handle tasks such as data processing, automation, and integration with other services.

Azure Functions can be integrated with various triggers such as HTTP requests, Service Bus messages, timer schedules, and more. Developers can write their code in different languages such as C#, Java, JavaScript, and Python.
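To make that programming model concrete, here is a minimal sketch of a timer-triggered function written in Python. The binding name `mytimer` is an illustrative choice, and the schedule itself would live in the function's `function.json` binding rather than in the code:

```python
import datetime
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # Runs on the schedule defined in the function's function.json binding
    if mytimer.past_due:
        logging.warning("The timer is running late.")
    logging.info("Timer trigger executed at %s", datetime.datetime.utcnow().isoformat())
```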
Azure Machine Learning (Azure ML) is a cloud-based platform provided by Microsoft that enables developers to build, train, deploy, and manage machine learning models at scale.

Azure ML provides a variety of tools and services to support the machine learning lifecycle, including data preparation, model training, hyperparameter tuning, model selection, deployment, and management. Developers can use familiar programming languages and tools such as Python, Jupyter Notebook, and Visual Studio Code to create and run machine learning experiments.

Azure ML also enables developers to easily deploy machine learning models to production using various deployment targets such as Azure Kubernetes Service (AKS), Azure Functions, and Azure Container Instances (ACI).
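For example, a model that has already been registered in the workspace can be pushed to Azure Container Instances with a handful of SDK calls. The sketch below uses the v1 `azureml-core` SDK; the model name, the `score.py` entry script, and the service name are placeholders rather than assets created in this tutorial:

```python
from azureml.core import Environment, Workspace
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# A model previously registered in the workspace and a scoring script you provide
model = Model(ws, name="purchase-prediction-model")
env = Environment("aci-inference-env")
env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=["scikit-learn", "joblib"])
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# A small ACI container is enough for a demo-sized service
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "purchase-prediction-aci", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```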
Before we proceed with this tutorial, we need to have the following prerequisites:

- An active Azure subscription
- Python installed locally
- Visual Studio Code
- Node.js and npm (used to install the Azure Functions Core Tools)
- The Azure Machine Learning SDK for Python (`azureml-core`)
Now that we have our prerequisites set up, we can proceed with the following steps to deploy a machine learning model with Azure Functions and Azure ML.
### Step 1: Create an Azure ML Workspace

The first step is to create a workspace in Azure ML. A workspace is a container that holds all the assets related to your machine learning experiments, including datasets, models, scripts, and metadata.

To create a workspace, log in to the Azure Portal, search for "Azure Machine Learning", create a new workspace, and choose a subscription, resource group, workspace name, and region. After the workspace is successfully created, you will be redirected to the Azure Machine Learning workspace dashboard.
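If you prefer to drive the workspace from Python rather than the portal, the Azure ML SDK can attach to it once it exists. A small sketch, where the subscription and resource group values are placeholders you would replace with your own:

```python
from azureml.core import Workspace

# Option 1: read the config.json file downloaded from the workspace overview page
ws = Workspace.from_config()

# Option 2: identify the workspace explicitly (placeholder values)
ws = Workspace.get(name="ml-workspace",
                   subscription_id="<your-subscription-id>",
                   resource_group="<your-resource-group>")

print(ws.name, ws.location)
```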
### Step 2: Create a Machine Learning Model with Azure ML

The next step is to create a machine learning model using Azure ML. For this tutorial, we will use the Scikit-learn library to build a simple binary classification model that predicts whether a customer will purchase a product or not based on their age and income.
Create a new Python file named `train.py` and copy the following code:
```python
import json
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from azureml.core import Run

# Get the experiment run context
run = Run.get_context()

# Generate a synthetic dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=2, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train a random forest classifier on the training set
clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
clf.fit(X_train, y_train)

# Evaluate the model performance on the testing set
accuracy = clf.score(X_test, y_test)
print(f"Accuracy: {accuracy:.2f}")

# Serialize the model and save it to disk
model_file = "model.pkl"
joblib.dump(value=clf, filename=model_file)

# Upload the serialized model as an artifact of the experiment run
run.upload_file(name=model_file, path_or_stream=model_file)

# Log the model performance and a JSON representation of the hyperparameters
run.log("Accuracy", float(accuracy))
run.log("Hyperparameters", json.dumps({"n_estimators": 100, "max_depth": 2, "random_state": 0}))
```
This code generates a synthetic dataset using the `make_classification` function, splits it into training and testing sets using the `train_test_split` function, trains a random forest classifier on the training set using the `RandomForestClassifier` class from Scikit-learn, evaluates the model performance on the testing set using the `score` method, saves the trained model to disk using the `joblib` module, and uploads the serialized model as an artifact of the experiment run using the `upload_file` method.
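Optionally, the uploaded artifact can also be registered in the workspace's model registry so that it is versioned and easy to retrieve later. A minimal sketch that could be appended to `train.py`, where the model name `purchase-prediction-model` is just an illustrative choice:

```python
# Register the uploaded artifact as a versioned model in the workspace.
# Assumes the `run` and `model_file` variables defined in train.py above.
run.register_model(model_name="purchase-prediction-model",
                   model_path=model_file,
                   tags={"algorithm": "RandomForestClassifier"})
```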
To run `train.py` as an experiment in Azure ML, we need to create an experiment and a run configuration.

Create a new Python file named `create_experiment.py` and copy the following code:
```python
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
from azureml.core.conda_dependencies import CondaDependencies

# Define the names of the experiment and the workspace
experiment_name = "purchase-prediction-experiment"
workspace_name = "ml-workspace"

# Retrieve the workspace and create an Azure ML experiment
# (assumes you are logged in with `az login` and the workspace name is unique)
workspace = Workspace.get(name=workspace_name)
experiment = Experiment(workspace=workspace, name=experiment_name)

# Create a new environment and add dependencies to it
env = Environment("purchase-prediction-env")
cd = CondaDependencies.create(conda_packages=["scikit-learn", "joblib"])
env.python.conda_dependencies = cd

# Create a ScriptRunConfig that references the training script and the new environment
src = ScriptRunConfig(source_directory=".",
                      script="train.py",
                      environment=env)

# Submit the experiment run to Azure ML and wait for it to finish
run = experiment.submit(src)
run.wait_for_completion(show_output=True)
```
This code retrieves the workspace with the `Workspace.get` method, creates an experiment object using the `Experiment` class from Azure ML, creates a new environment object using the `Environment` class from Azure ML, adds the necessary dependencies to it using the `CondaDependencies` class, creates a script run configuration object using the `ScriptRunConfig` class from Azure ML, and submits the experiment run to Azure ML using the `submit` method.

To run this code, execute the following terminal command:
```bash
python create_experiment.py
```
This will create an experiment with the name `purchase-prediction-experiment` in the Azure ML workspace named `ml-workspace`, and run the `train.py` script inside a new environment named `purchase-prediction-env`.

After the experiment run is completed, you can view the run details and the generated artifacts in the Azure ML workspace portal.
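Once the run has finished, the logged metrics and the uploaded artifact can also be pulled back down from Python instead of the portal. A short sketch, assuming the workspace and experiment names used above and an authenticated Azure CLI session:

```python
from azureml.core import Experiment, Workspace

# Reconnect to the workspace and experiment created earlier
ws = Workspace.get(name="ml-workspace")
experiment = Experiment(workspace=ws, name="purchase-prediction-experiment")

# Take the most recent run, print its logged metrics,
# and download the serialized model for local use
run = next(experiment.get_runs())
print(run.get_metrics())
run.download_file(name="model.pkl", output_file_path="model.pkl")
```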
### Step 3: Deploy the Machine Learning Model with Azure Functions

The final step is to deploy the machine learning model using Azure Functions. For this tutorial, we will use Python and Visual Studio Code to create an Azure Functions app that loads the serialized model and deploys it as a REST API.

Follow these steps to create an Azure Functions app:
1. Create a new folder named `azure-functions-app`.
2. Open a terminal and navigate to the `azure-functions-app` folder.
3. Create and activate a new Python virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate
   ```

4. Install the Azure Functions Core Tools:

   ```bash
   npm install -g azure-functions-core-tools@3 --unsafe-perm true
   ```

5. Initialize a new Azure Functions project with the Python worker runtime:

   ```bash
   func init --worker-runtime python
   ```

6. Create a new HTTP-triggered function and enter `predict` as the name of the function.
7. Navigate to the `predict` folder and open the `__init__.py` file. Replace the contents of the file with the following code:
```python
import json
import logging
import os

import joblib
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Load the serialized model from disk
    # (model.pkl is expected to sit next to this __init__.py file)
    model_file = os.path.join(os.path.dirname(__file__), "model.pkl")
    clf = joblib.load(model_file)

    # Get the request body and convert it to a JSON object
    req_body = req.get_body()
    data = json.loads(req_body)

    # Extract the features from the JSON object and make a prediction
    age = data.get("age")
    income = data.get("income")
    features = [[age, income]]
    prediction = clf.predict(features)[0]

    # Return the prediction as a JSON response
    # (cast to int so the NumPy integer is JSON-serializable)
    response = {"prediction": int(prediction)}
    return func.HttpResponse(json.dumps(response), mimetype="application/json")
```
This code defines a new Azure Functions endpoint named `predict`, loads the serialized model from disk using the `joblib` module, gets the request body and converts it to a JSON object using the `get_body` and `json.loads` methods, extracts the features from the JSON object, makes a prediction using the `predict` method of the trained model, and returns the prediction as a JSON response using the `json.dumps` method.
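Note that the function loads `model.pkl` from the `predict` folder, so the serialized model produced in Step 2 has to be copied there before the app is started. One way to do that from Python, assuming `model.pkl` has already been downloaded to the current directory and the folder layout used in this tutorial:

```python
import shutil

# Place the trained model next to the function's __init__.py
# (paths assume the azure-functions-app/predict layout from this tutorial)
shutil.copy("model.pkl", "azure-functions-app/predict/model.pkl")
```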
Save the `__init__.py` file and navigate back to the `azure-functions-app` folder. Execute the following terminal commands to create a new virtual environment for the Azure Functions app and install the necessary packages:

```bash
python -m venv .venv
source .venv/bin/activate
pip install azure-functions
pip install joblib==1.0.1 scikit-learn==0.24.2
```
Start the Azure Functions app locally:

```bash
func start
```

This will start the Azure Functions app and open a local endpoint at `http://localhost:7071/api/predict`.
To test the endpoint, send an HTTP POST request to `http://localhost:7071/api/predict` with a JSON body like the following:

```json
{
  "age": 30,
  "income": 50000
}
```
The response should be a JSON object with a `prediction` field that indicates whether the customer is likely to purchase the product or not.
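If you prefer to test from Python instead of a REST client, a quick check could look like the following, assuming the function app is still running locally and the `requests` package is installed:

```python
import requests

# Call the locally running Azure Functions endpoint with a sample customer
payload = {"age": 30, "income": 50000}
resp = requests.post("http://localhost:7071/api/predict", json=payload, timeout=10)
resp.raise_for_status()

# Expect a JSON body such as {"prediction": 0} or {"prediction": 1}
print(resp.json())
```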
### Congratulations!

You have successfully deployed a machine learning model with Azure Functions and Azure ML. You can now use this process to deploy your own machine learning models and automate decision-making processes in your applications.