Working with Kubernetes for container orchestration

Introduction

Kubernetes is a container orchestration system for automating deployment, scaling, and management of containerized applications. By using Kubernetes, you can easily deploy your application on a cluster of servers, and manage it automatically using Kubernetes’ rich set of APIs. In this tutorial, we will learn how to work with Kubernetes for container orchestration.

Prerequisites

Before we start, we need to make sure that our environment is set up correctly. Here are the things you need to have installed on your system:

  • Docker
  • kubectl
  • Minikube (installation is covered in the next section)

You can check if these are installed by running the following commands:

$ docker --version
$ kubectl version --client
$ minikube version

Creating a Kubernetes Cluster

Assuming that we have Docker and kubectl installed, the next thing we need to do is create a Kubernetes cluster. A cluster is a group of machines, called nodes, that run containerized applications under Kubernetes’ control. In this tutorial, we will use Minikube to create a single-node Kubernetes cluster on our local machine.

To install Minikube, you can go to the Minikube documentation and follow the steps for your operating system. Once you have it installed, you can start the cluster by running the following command:

$ minikube start

This command will start a Kubernetes cluster on your local machine. You can confirm that the cluster is up and running by running the following command:

$ kubectl get nodes

This command lists the nodes in the cluster. With Minikube you should see a single node named minikube with a Ready status.

Deploying an Application to Kubernetes

Now that we have a running Kubernetes cluster, let’s deploy an application to it. We will use a simple Flask application as an example.
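The tutorial assumes an app.py and a requirements.txt sit in the project directory, but neither is shown, so here is a minimal sketch of what app.py might contain (the route and message are placeholder assumptions; requirements.txt would simply list flask):

```python
# app.py - a minimal Flask application (hypothetical contents)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder response; any handler will do for this tutorial
    return "Hello from Kubernetes!"
```

For the Dockerfile’s CMD ["python", "app.py"] to actually start the server, app.py would end with if __name__ == "__main__": app.run(host="0.0.0.0", port=5000); binding to 0.0.0.0 makes the server reachable from outside the container.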

First, let’s create a Docker image of the application. To do this, we need to create a Dockerfile. Here is an example Dockerfile:

# Dockerfile
FROM python:3.9-slim-buster

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["python", "app.py"]

This Dockerfile starts from a slim Python 3.9 base image, installs the dependencies listed in requirements.txt, and copies the application files into the container. It also documents that the application listens on port 5000 and runs app.py when the container starts.

To build the Docker image, you can run the following command:

$ docker build -t my-flask-app .

This command builds a Docker image with the tag my-flask-app.

One Minikube-specific caveat: Minikube runs its own Docker daemon, so images built on the host are not automatically visible inside the cluster. Either point your shell at Minikube’s daemon before building (eval $(minikube docker-env)), or load the finished image into the cluster:

$ minikube image load my-flask-app

Once the image is available inside the cluster, we can deploy it to Kubernetes. To do this, we need to create a Kubernetes deployment. A deployment creates and manages a set of identical pods running the same container image.

Here is an example deployment configuration file:

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-app
  labels:
    app: my-flask-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
        - name: my-flask-app
          image: my-flask-app
          imagePullPolicy: IfNotPresent # use the locally built image; an untagged image otherwise defaults to always pulling from a registry
          ports:
            - containerPort: 5000

This configuration file tells Kubernetes to create a deployment named my-flask-app with one replica of the my-flask-app container image, and declares that the container listens on port 5000.
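One rule worth knowing: the deployment’s spec.selector.matchLabels must match the labels in spec.template.metadata.labels, otherwise the API server rejects the manifest. As an illustration, treating the manifest as plain Python dicts (a simplification of how real tooling parses the YAML), the check amounts to:

```python
def selector_matches_template(deployment: dict) -> bool:
    """Every matchLabels entry must also appear in the pod template labels."""
    match_labels = deployment["spec"]["selector"]["matchLabels"]
    template_labels = deployment["spec"]["template"]["metadata"]["labels"]
    return all(template_labels.get(k) == v for k, v in match_labels.items())

# The deployment above, reduced to the fields the check cares about
deployment = {
    "spec": {
        "selector": {"matchLabels": {"app": "my-flask-app"}},
        "template": {"metadata": {"labels": {"app": "my-flask-app"}}},
    }
}

print(selector_matches_template(deployment))  # True
```

If the two label sets drift apart while editing, kubectl apply fails with an error saying the selector does not match the template labels.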

To deploy the application, we can run the following command:

$ kubectl apply -f deployment.yml

This command creates a new Kubernetes deployment from the configuration file.

To check the status of the deployment, we can run the following command:

$ kubectl get deployment my-flask-app

This command should output the status of the deployment, including the number of replicas that are running.

Exposing the Application

Now that we have deployed the application to Kubernetes, it’s time to expose it to the outside world. We can do this by creating a Kubernetes service. A service is an abstraction that defines a logical set of pods and a policy by which to access them.

Here is an example service configuration file:

# service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-flask-app
spec:
  selector:
    app: my-flask-app
  ports:
    - name: http
      port: 80
      targetPort: 5000
  type: LoadBalancer

This configuration file tells Kubernetes to create a service called my-flask-app that selects pods with the label app=my-flask-app. It also exposes port 80 to the outside world, and routes traffic to port 5000 on the pods.
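Label selection is also how the service finds its backends: traffic arriving at port 80 is forwarded to port 5000 on every pod whose labels satisfy spec.selector. A toy illustration with plain dicts (the pod names here are made up; real pod names get a generated suffix):

```python
def pods_selected(selector: dict, pods: list) -> list:
    """Return the names of pods whose labels satisfy every selector entry."""
    return [
        pod["name"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "my-flask-app-abc12", "labels": {"app": "my-flask-app"}},
    {"name": "other-app-xyz99", "labels": {"app": "other-app"}},
]

print(pods_selected({"app": "my-flask-app"}, pods))  # ['my-flask-app-abc12']
```

Kubernetes keeps this pod set up to date automatically: as replicas come and go, the service’s endpoints follow.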

To create the service, we can run the following command:

$ kubectl apply -f service.yml

This command creates a new Kubernetes service from the configuration file.

On Minikube there is no cloud load balancer to assign the service an external IP, but Minikube can give us a reachable URL for it:

$ minikube service my-flask-app --url

This command should output the external URL of the service. You can open this URL in a web browser to access the application.

Scaling the Application

Now that we have deployed the application and exposed it to the outside world, let’s try scaling the application. Kubernetes makes it easy to scale our application up or down, depending on the workload.

To scale the application, we can update the deployment configuration file and increase the number of replicas. (For a one-off change you could instead run kubectl scale deployment my-flask-app --replicas=3, but editing the file keeps the desired state in version control.) Here is an updated deployment configuration file:

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-app
  labels:
    app: my-flask-app
spec:
  replicas: 3 # update replicas to 3
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
        - name: my-flask-app
          image: my-flask-app
          imagePullPolicy: IfNotPresent # use the locally built image; an untagged image otherwise defaults to always pulling from a registry
          ports:
            - containerPort: 5000

Once we have updated the deployment configuration file, we can apply the changes by running the following command:

$ kubectl apply -f deployment.yml

This command updates the existing deployment with the new configuration.

To check the status of the deployment, we can run the following command:

$ kubectl get deployment my-flask-app

This command should output the updated status of the deployment, including the number of replicas that are running.
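Conceptually, scaling just changes the desired state; the Deployment controller then reconciles the cluster toward it, creating or deleting pods as needed. A greatly simplified sketch of that control loop (the real controller works through ReplicaSets, and the pod names here are invented):

```python
def reconcile(desired_replicas: int, running_pods: list) -> list:
    """Return the pod list after one reconcile pass toward the desired count."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:
        pods.append(f"my-flask-app-{len(pods)}")  # scale up: create missing pods
    while len(pods) > desired_replicas:
        pods.pop()  # scale down: remove surplus pods
    return pods

# Going from 1 replica to 3, as in the updated deployment.yml
print(reconcile(3, ["my-flask-app-0"]))
# ['my-flask-app-0', 'my-flask-app-1', 'my-flask-app-2']
```

The same loop handles scaling down: lowering replicas back to 1 and re-applying the file would remove the surplus pods.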

Updating the Application

Finally, let’s learn how to update the application. Kubernetes makes it easy to deploy new versions of our application without any downtime.

To update the application, we change the application code, rebuild the Docker image under a new tag, and point the deployment at that tag. The Dockerfile itself does not need to change; it is the same one we wrote earlier.

To build the new Docker image, we can run the following command:

$ docker build -t my-flask-app:v2 .

This command builds a new Docker image with the tag my-flask-app:v2. As before, make the image visible to the cluster, for example with minikube image load my-flask-app:v2.

To roll out the new image, we update the existing deployment configuration file to reference the new tag. Here is the updated deployment configuration file:

# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-app
  labels:
    app: my-flask-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
        - name: my-flask-app
          image: my-flask-app:v2 # update the image tag
          ports:
            - containerPort: 5000

Once we have updated the deployment configuration file, we can apply the changes by running the following command:

$ kubectl apply -f deployment.yml

This command updates the existing deployment with the new image, triggering a rolling update: Kubernetes gradually replaces the old pods with pods running my-flask-app:v2, so the application stays available throughout.

To check the status of the deployment, we can run the following command:

$ kubectl get deployment my-flask-app

This command should output the updated status of the deployment. You can also watch the rollout in real time with kubectl rollout status deployment/my-flask-app, which waits until every replica is running the new version of the application.
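Behind kubectl apply, this is a rolling update: Kubernetes swaps old pods for pods running my-flask-app:v2 a few at a time, so some replicas keep serving traffic throughout. A toy sketch of the one-at-a-time case (the real controller paces itself with maxSurge and maxUnavailable settings, ignored here):

```python
def rolling_update(pods: list, new_version: str) -> list:
    """Replace one pod at a time, recording each intermediate cluster state."""
    states = [list(pods)]
    for i in range(len(pods)):
        pods = pods[:i] + [new_version] + pods[i + 1:]
        states.append(list(pods))
    return states

# Each printed state has at most one pod mid-replacement,
# ending with all pods on v2
for state in rolling_update(["v1", "v1", "v1"], "v2"):
    print(state)
```

If the new image fails to start, the rollout pauses rather than replacing the remaining healthy pods, and kubectl rollout undo can revert to the previous version.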

Conclusion

In this tutorial, we have learned how to work with Kubernetes for container orchestration: we created a local cluster with Minikube, deployed a Flask application to it, exposed it to the outside world, scaled it up, and rolled out a new version. Kubernetes is a powerful tool for managing containerized applications, automating their deployment, scaling, and management. It is definitely worth learning more about if you are working with containers.
