Containerization with Docker and Kubernetes – II

PART-2

What is Kubernetes?

Imagine you have a bunch of containers to deploy and maintain. It’s like keeping dozens of plates spinning at once: if one falls, you have to pick it up and get it spinning again, and every time you add more plates, you have to keep those spinning too. Exhausting, right?

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. No matter how many containers you run or where they run, it handles the heavy lifting and keeps your applications running smoothly.

Key Features of Kubernetes:

  • Automated Scheduling: Kubernetes automatically schedules your containers according to their resource requirements and other constraints, placing each container on the most suitable node in the cluster to maximize resource utilization.
  • Self-Healing Capabilities: Kubernetes can repair itself. If a container crashes or a node fails, it automatically restarts the failed containers and reschedules them on healthy nodes. It also kills containers that fail to respond to health checks.
  • Horizontal Scaling: Kubernetes simplifies scaling. It can automatically change the number of running containers based on CPU or memory usage, so your application can handle varying loads without manual intervention (see the example below).
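As a quick illustration of horizontal scaling, the kubectl autoscale command attaches a Horizontal Pod Autoscaler to an existing deployment. This is only a sketch; the deployment name my-node-app-deployment is a placeholder for whatever you have deployed:

kubectl autoscale deployment my-node-app-deployment --cpu-percent=50 --min=2 --max=10

This tells Kubernetes to keep average CPU usage around 50%, scaling between 2 and 10 replicas as the load changes.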

Kubernetes Concepts:

  • Nodes and Clusters: A Kubernetes cluster consists of multiple nodes. A node is a physical or virtual machine that runs your containers. The cluster is managed by the Kubernetes control plane, which schedules and orchestrates the containers.
  • Pods: The smallest and simplest Kubernetes object. A pod is a group of one or more containers that share storage and network resources, and pods are the basic unit of deployment in Kubernetes (a minimal manifest is sketched after this list).
  • Services: A service in Kubernetes is an abstraction that defines a logical set of pods and a policy to access them. It enables load balancing and service discovery, making it easy for your containers to interact.
  • Deployments: A deployment in Kubernetes allows you to manage a set of identical pods. It provides declarative updates to applications, ensuring that the desired state is maintained and changes are rolled out consistently.
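To make the Pod concept concrete, here is a minimal sketch of a pod manifest. The name my-pod and the nginx image are only illustrative and are not part of this tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
  - name: my-pod
    image: nginx:1.25
    ports:
    - containerPort: 80

In practice you rarely create bare pods like this; Deployments (described above) create and manage pods for you.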

Get started with Kubernetes on Windows:

Step 1: Install Kubernetes on Windows

  • Install kubectl
    Download kubectl from the official Kubernetes website.
    Follow the installation instructions for Windows.
  • Verify the kubectl configuration
    Check the status of your cluster using kubectl:
kubectl cluster-info
  • List the nodes in your cluster:
kubectl get nodes
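Note that kubectl only talks to a cluster that already exists. The deployment steps below use the minikube command, so they assume a local Minikube cluster is running; if that matches your setup, you can start one with:

minikube start

Once the cluster is up, kubectl cluster-info should report a running control plane.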

Deploying Applications on Kubernetes:

  • Create a simple deployment using kubectl:
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
  • Expose your deployment to make it accessible:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
  • Get the URL of your service (when running on Minikube):
minikube service hello-node --url
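As an optional check (not part of the original steps), you can confirm that the deployment, service, and pod were created:

kubectl get deployments
kubectl get services
kubectl get pods

The hello-node pod should reach the Running status once its image has been pulled.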

How do Docker and Kubernetes go together?

Docker packages applications into containers, making them portable and consistent. Kubernetes takes these Docker containers and manages them at scale, providing deployment, scaling, and monitoring.

Deploying Docker Containers in Kubernetes:

  • Create Docker Image:
    We already created a Docker image named my-node-app in the first half of this blog.
    Push the image to a registry (e.g. Docker Hub):
docker tag my-node-app <your-username>/my-node-app
docker push <your-username>/my-node-app
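If you are not yet authenticated against the registry, the push will be rejected. For Docker Hub, log in first (this step is assumed here, as the original post does not show it):

docker login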

Create a Deployment YAML File:

  • In your project’s root directory, create a file named deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
      - name: my-node-app
        image: <your-username>/my-node-app
        ports:
        - containerPort: 3000
  • Apply the deployment and service manifests using kubectl (a sample service.yaml is sketched after this list):
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
  • Expose the deployment to make it accessible from outside the cluster (the service’s port 8080 maps to the container’s port 3000):
kubectl expose deployment my-node-app-deployment --type=LoadBalancer --port=8080 --target-port=3000
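The service.yaml applied above is not shown in this post. A minimal sketch, assuming it should route traffic to the pods labeled app: my-node-app on port 3000, might look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-node-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-node-app
  ports:
  - port: 8080
    targetPort: 3000

With a manifest like this in place, the separate kubectl expose step becomes optional, since the manifest already creates a LoadBalancer service.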

Kubernetes Important Commands:

  • kubectl cluster-info:
    • Displays the addresses of the cluster’s control plane and services.
  • kubectl get nodes:
    • Lists all nodes in the cluster.
  • kubectl create deployment <name> --image=<image>:
    • Creates a new deployment.
  • kubectl expose deployment <name> --type=LoadBalancer --port=<port>:
    • Exposes a deployment, making it accessible outside the cluster.
  • kubectl get services:
    • Lists all services in the cluster.