Kubernetes (K8s) for Beginners: Overview

Container Technology

Container technology is a method of packaging an application so it can run with isolated dependencies. Ever had a program run perfectly on one machine, then turn into a clunky mess when moved to the next? This can happen when migrating software from a developer's PC to a test server, from a physical server in a company data center to a cloud server, and so on. Issues arise when moving software because of differences between machine environments, such as the installed OS, SSL libraries, storage, security, and network topology.

A container packages not only the software but also its dependencies, including libraries, binaries, and configuration files, so everything migrates as a single unit. This avoids the differences between machines, including OS differences and underlying hardware, that lead to incompatibilities and crashes.


Docker is an open source containerization platform. It enables developers to package applications into containers—standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

Architecture of Docker:

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.

Docker Installation on Ubuntu


swapoff -a && sed -i '/swap/d' /etc/fstab
apt-get update
apt-get install -y ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
echo '{"exec-opts":["native.cgroupdriver=systemd"]}' > /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker

Basic commands of Docker:

docker --version

docker container ls

docker container ls -a

docker image ls

docker network ls

docker volume ls

docker pull centos:7

docker run -it --name con1 centos:7

docker attach con1

docker exec -it con1 bash

docker container stop con1

docker container rm con1

docker run -d --name con2 httpd

docker exec -it con2 bash

docker stop con2

docker start con2

docker pause con2

docker unpause con2

docker inspect con2

docker container rm con2

docker run -d --name con3 -p 80:80 httpd

docker run -d --name con4 -v /data:/usr/local/apache2/htdocs httpd

docker system prune

docker volume create nokia_dir

docker run -d --name con5 -v nokia_dir:/usr/share/nginx/html nginx

docker volume ls

Orchestration Tool:

Container orchestration tools provide a framework for managing containers and microservices architecture at scale. There are many container orchestration tools that can be used for container lifecycle management. Some popular options are Kubernetes, Docker Swarm, and Apache Mesos.

Container orchestration automates and simplifies the provisioning, deployment, and management of containerized applications.

Features of Kubernetes

1.    Automated rollouts and rollbacks

Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes rolls the change back for you. Take advantage of a growing ecosystem of deployment solutions.

2.    Storage orchestration

Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

3.    Automatic bin packing

Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.

4.    Service discovery and load balancing

No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.

5.    Secret and configuration management

Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

6.    Batch execution

In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.

7.    IPv4/IPv6 dual-stack

Allocation of IPv4 and IPv6 addresses to Pods and Services

8.    Self-healing

Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve

9.    Horizontal scaling

Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.

10. Designed for extensibility

Add features to your Kubernetes cluster without changing upstream source code.

Architecture of Kubernetes:

The components of a Kubernetes cluster

Control Plane Components

The control plane’s components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment’s replicas field is unsatisfied).

Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters with kubeadm for an example control plane setup that runs across multiple machines.


kube-apiserver

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.

The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.


etcd

Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.

You can find in-depth information about etcd in the official documentation.


kube-scheduler

Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.


kube-controller-manager

Control plane component that runs controller processes.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Some types of these controllers are:

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
  • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.


kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.


kube-proxy

kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.


Container runtime

The container runtime is the software that is responsible for running containers.

Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).


Addons

Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because these provide cluster-level features, namespaced resources for addons belong within the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.


DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.

Web UI (Dashboard)

Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

Kubernetes installation on Ubuntu


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt-get update

apt-get install -y kubelet=1.21*  kubeadm=1.21*  kubectl=1.21*

apt-mark hold kubelet kubeadm kubectl

systemctl status kubelet

On the Master node, initialize the cluster and configure kubectl access:

kubeadm init

mkdir .kube

cp /etc/kubernetes/admin.conf .kube/config

Configure CNI Or Network:

To configure network over the cluster, we need to run the following command on Master node only:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Alternatively, see: https://kubernetes.io/docs/concepts/cluster-administration/networking/

Now you have your cluster in ready state.

Uninstall Kubernetes:

kubeadm reset
apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
apt-get autoremove
rm -rf ~/.kube

and restart your machine/server


Node:

A Node is a machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. It can be a Master node (control plane) or a worker node. Each worker node is managed by the control plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.

Some basic commands of nodes:-

kubectl get node

kubectl get node -o wide

kubectl describe node <node-name>

kubectl label node <node-name> key=value

kubectl api-resources


Namespace:

Namespaces provide a mechanism for isolating groups of resources within a single cluster. Namespaces are a way to divide cluster resources between multiple users. It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.

Some basic commands of namespaces:-

kubectl get ns

kubectl create ns prod --dry-run=client -o yaml > prod-ns.yaml

kubectl create -f prod-ns.yaml

kubectl get ns

kubectl label ns prod project=production

kubectl describe ns prod

kubectl get ns --show-labels

kubectl delete ns prod


Pod:

Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources.

Pod Configure:


apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

kubectl create -f pod-nginx.yaml

kubectl get pods

kubectl run <pod-name> --image=image_name

kubectl run <pod-name> --image=image_name --dry-run=client -o yaml > pod1.yaml

kubectl get pods

kubectl label pod <pod-name> key=value

kubectl describe pod <pod-name>

kubectl delete pod <pod-name>

References:  https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/


Replication Controller:

A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.


apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

kubectl create -f replication.yaml

kubectl get rc

kubectl get pods

kubectl describe rc nginx

Now delete any pod replica from this replication controller:

kubectl delete pod <pod-name>

kubectl get pods

kubectl delete -f replication.yaml

References: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/


ReplicaSet:

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: httpd
        image: httpd

kubectl create -f replicaset.yaml

kubectl get rs

kubectl get pods

kubectl describe rs frontend

kubectl delete -f replicaset.yaml

References : https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/


Deployment:

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

kubectl create -f nginx-deployment.yaml

kubectl get deploy

kubectl create deployment <deploy-name> --image=image_name --replicas=No_of_pods

kubectl create deployment <deploy-name> --image=image_name --replicas=No_of_pods --dry-run=client -o yaml > deploy1.yaml

kubectl get deploy

kubectl describe deploy <deploy_name>

kubectl label deploy <deploy_name> key=value

kubectl delete deploy <deploy_name>

kubectl scale deploy <deploy_name> --replicas=no_of_replicas

  • To upgrade the deployment

kubectl set image deploy <deploy_name> container_name=Image_name

kubectl rollout history deploy <deploy-name>

kubectl rollout history deploy <deploy_name> --revision=value

  • To perform a rollback

kubectl rollout undo deploy <deploy-name> --to-revision=value_of_revision

References: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/


Scheduling:

  1.    Node-name based scheduling

nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec. If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node. Using nodeName overrules using nodeSelector or affinity and anti-affinity rules.

Some of the limitations of using nodeName to select nodes are:

  • If the named node does not exist, the Pod will not run, and in some cases may be automatically deleted.
  • If the named node does not have the resources to accommodate the Pod, the Pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu.
  • Node names in cloud environments are not always predictable or stable.

Here is an example of a Pod spec using the nodeName field:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01

The above Pod will only run on the node kube-01.

References : https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/

  2.    Node labels & selector based scheduling

nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

References : https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/

  3.    Taints & tolerations

Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite — they allow a node to repel a set of pods.

Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don’t guarantee scheduling: the scheduler also evaluates other parameters as part of its function.

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

You add a taint to a node using kubectl taint:

kubectl taint node node1.example.com key1=value1:NoSchedule

To remove the taint added by the command above, you can run:

kubectl taint node node1.example.com key1:NoSchedule-

You specify a toleration for a pod in the PodSpec. Both of the following tolerations “match” the taint created by the kubectl taint line above, and thus a pod with either toleration would be able to schedule onto node1:


tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"

tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"

Here’s an example of a pod that uses tolerations:


apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"

References : https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

  4.    Affinity & anti-affinity

nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. Some of the benefits of affinity and anti-affinity include:

  • The affinity/anti-affinity language is more expressive. nodeSelector only selects nodes with all the specified labels. Affinity/anti-affinity gives you more control over the selection logic.
  • You can indicate that a rule is soft or preferred, so that the scheduler still schedules the Pod even if it can’t find a matching node.
  • You can constrain a Pod using labels on other Pods running on the node (or other topological domain), instead of just node labels, which allows you to define rules for which Pods can be co-located on a node.

The affinity feature consists of two types of affinity:

  • Node affinity functions like the nodeSelector field but is more expressive and allows you to specify soft rules.
  • Inter-pod affinity/anti-affinity allows you to constrain Pods against labels on other Pods.


apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:2.0

References : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

  5.    Cordon, uncordon & drain

kubectl cordon node2.example.com
kubectl get no
kubectl uncordon node2.example.com
kubectl get no
kubectl drain node2.example.com --ignore-daemonsets --force

References : https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/


ConfigMap:

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.

A ConfigMap is not designed to hold large chunks of data. The data stored in a ConfigMap cannot exceed 1 MiB. If you need to store settings that are larger than this limit, you may want to consider mounting a volume or using a separate database or file service.
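The configmap-demo-pod example below consumes a ConfigMap named game-demo. For context, that ConfigMap could look like this (a sketch adapted from the example in the referenced docs; the keys and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys; each key maps to a multi-line file body
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
```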

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                     # from the key name in the ConfigMap.
          valueFrom:
            configMapKeyRef:
              name: game-demo           # The ConfigMap this value comes from.
              key: player_initial_lives # The key to fetch.
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
      - name: config
        mountPath: "/config"
        readOnly: true
  volumes:
  # You set volumes at the Pod level, then mount them into containers inside that Pod
  - name: config
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"

For this example, defining a volume and mounting it inside the demo container as /config creates two files, /config/game.properties and /config/user-interface.properties, even though there are four keys in the ConfigMap. This is because the Pod definition specifies an items array in the volumes section. If you omit the items array entirely, every key in the ConfigMap becomes a file with the same name as the key, and you get 4 files.

kubectl create configmap <cm-name> --from-literal=key1=value1 --from-file=file-path

References : https://kubernetes.io/docs/concepts/configuration/configmap/ 


Secret:

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods. Kubernetes, and applications that run in your cluster, can also take additional precautions with Secrets, such as avoiding writing secret data to nonvolatile storage.


apiVersion: v1
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: { … }
  creationTimestamp: 2020-01-22T18:41:56Z
  name: mysecret
  namespace: default
  resourceVersion: "164619"
  uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque

kubectl create -f secret1.yaml

kubectl create secret generic <secret-name> --from-literal=key1=value1 --from-file=file-path
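Note that the data values in a Secret are base64-encoded, not encrypted. As a sketch, the values in the mysecret example above (YWRtaW4= and MWYyZDFlMmU2N2Rm) can be produced and decoded like this:

```shell
# Encode plaintext credentials to base64 (encoding, not encryption)
printf '%s' 'admin' | base64          # -> YWRtaW4=
printf '%s' '1f2d1e2e67df' | base64   # -> MWYyZDFlMmU2N2Rm

# Decode a value pulled back out of the Secret
printf '%s' 'YWRtaW4=' | base64 -d    # -> admin
```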

References : https://kubernetes.io/docs/concepts/configuration/secret/

Persistent volume + accessModes + ReclaimPolicy

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany).

Access Modes:

A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.

The access modes are:

ReadWriteOnce (RWO)

the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.

ReadOnlyMany (ROX)

the volume can be mounted as read-only by many nodes.

ReadWriteMany (RWX)

the volume can be mounted as read-write by many nodes.

ReadWriteOncePod (RWOP)

the volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.

References : https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Configure Application with PV PVC:

References : https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Create a PersistentVolume


apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

kubectl apply -f pv-volume.yaml

kubectl get pv

Create a PersistentVolumeClaim


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kubectl apply -f pv-claim.yaml

kubectl get pv

kubectl get pvc

Create a Pod


apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

kubectl apply -f pv-pod.yaml

kubectl get pv

kubectl get pvc

kubectl get po

kubectl exec -it task-pv-pod -- /bin/bash

In your shell, verify that nginx is serving the index.html file from the hostPath volume:

# Be sure to run these 3 commands inside the root shell that comes from
# running "kubectl exec" in the previous step
apt update
apt install curl
curl http://localhost/

Clean up

Delete the Pod, the PersistentVolumeClaim and the PersistentVolume:

kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume

Configure Resource Quotas on a Namespace:

References : https://kubernetes.io/docs/concepts/policy/resource-quotas/

Configure Memory and CPU Quotas for a Namespace:

References : https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
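As a sketch based on the referenced task page, a memory/CPU quota manifest could look like this (the name and namespace are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota     # illustrative name
  namespace: prod         # apply the quota to an existing namespace
spec:
  hard:
    requests.cpu: "1"     # sum of CPU requests across all pods in the namespace
    requests.memory: 1Gi  # sum of memory requests
    limits.cpu: "2"       # sum of CPU limits
    limits.memory: 2Gi    # sum of memory limits
```

Apply it with kubectl create -f and inspect usage with kubectl describe quota -n prod.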

RBAC (Role-Based Access Control):

References : https://kubernetes.io/docs/reference/access-authn-authz/rbac/

  1. Create an api_config file for the user
  2. Create a directory for the user's certificates

mkdir <dir-name>

cd <dir-name>

  • Copy the /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key files into this directory

cp /etc/kubernetes/pki/ca.crt    .

cp /etc/kubernetes/pki/ca.key    .

  • Generate a key for the user

openssl genrsa -out john.key 2048

  • Generate a CSR (Certificate Signing Request) for the user

openssl req -new -key john.key -out john.csr

  • Generate a certificate for the user

openssl x509 -req -in john.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out john.crt -days 365

Now you have the user's certificate and key.

  • cp /etc/kubernetes/admin.conf config
  • cat john.crt | base64 -w0  →  copy the output into the config file as "client-certificate-data"
  • cat john.key | base64 -w0  →  copy the output into the config file as "client-key-data"
  • change the context in the config file as well

useradd  -s /bin/bash  -d /home/john  -m  john

cp  config /home/john/

chown john:john /home/john/config

su - john

mkdir .kube

mv config .kube/

run the kubectl commands: kubectl get po/no/ns/pv/pvc

As a root user:

kubectl create role --help

kubectl create role role1 --verb=list,watch,get --resource=pods

kubectl create rolebinding --help

kubectl create rolebinding role1-binding --role=role1 --user=john -n amazon

kubectl create clusterrole --help

kubectl create clusterrole cluster-role1 --verb=list,watch,get --resource=pods

kubectl create clusterrolebinding --help

kubectl create clusterrolebinding clusterrole1-binding --clusterrole=cluster-role1 --user=john

kubectl create clusterrolebinding clusterrole2-binding --clusterrole=admin --user=john
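As a sketch, the imperative role1 command above corresponds roughly to this declarative manifest (the amazon namespace is the one used in the rolebinding example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role1
  namespace: amazon        # namespace the role applies to
rules:
- apiGroups: [""]          # "" means the core API group (where pods live)
  resources: ["pods"]
  verbs: ["list", "watch", "get"]
```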

Network Policy:

References : https://kubernetes.io/docs/concepts/services-networking/network-policies/
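The contents of netpol1.yaml are not shown; as one possibility, here is the default deny-all-ingress policy from the referenced page (a sketch; enforcement requires a CNI plugin that supports NetworkPolicy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all ingress is denied
```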

kubectl create -f netpol1.yaml

kubectl get netpol


Service:

References : https://kubernetes.io/docs/concepts/services-networking/service/

kubectl expose deployment <deploy-name> --name=<svc-name> --port=<service-port>

kubectl get svc

kubectl expose deployment <deploy-name> --name=<svc-name> --port=<service-port> --type=NodePort

kubectl get svc
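A declarative equivalent of the NodePort expose command above might look like this (a sketch; the selector must match your Deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc1             # illustrative service name
spec:
  type: NodePort
  selector:
    app: nginx           # must match the labels on the Deployment's pods
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 80       # container port the traffic is forwarded to
```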


Ingress:

  1. Installing the Ingress Controller

References : https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/

Kubernetes nginx ingress controller configuration

kubectl create ns prod

kubectl config set-context --current --namespace prod

kubectl config get-contexts

kubectl get po


vim deploy.yaml

kubectl create -f deploy.yaml

kubectl get all

kubectl expose deploy grras-deploy1 --name=svc1 --type=ClusterIP --port=80 --target-port=80

kubectl get all

Clone the Ingress Controller repo and change into the deployments folder

git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.2.0

cd kubernetes-ingress/deployments

Configure RBAC

Create a namespace and a service account for the Ingress Controller:

kubectl apply -f common/ns-and-sa.yaml

Create a cluster role and cluster role binding for the service account:

kubectl apply -f rbac/rbac.yaml

Create Common Resources

Create a secret with a TLS certificate and a key for the default server in NGINX:

kubectl apply -f common/default-server-secret.yaml

Create a config map for customizing NGINX configuration:

kubectl apply -f common/nginx-config.yaml

Create an IngressClass resource:

kubectl apply -f common/ingress-class.yaml

Create Custom Resources

Note: By default, it is required to create custom resource definitions for VirtualServer, VirtualServerRoute, TransportServer and Policy. Otherwise, the Ingress Controller pods will not become Ready. If you'd like to disable that requirement, set the -enable-custom-resources command-line argument to false and skip this section.

Create custom resource definitions for the VirtualServer, VirtualServerRoute, TransportServer and Policy resources:

kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_policies.yaml

Deploy the Ingress Controller

kubectl apply -f daemon-set/nginx-ingress.yaml

Check that the Ingress Controller is Running

kubectl get pods --namespace=nginx-ingress
  • Ingress Resource


vim ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "www.amazon1.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: svc1
            port:
              number: 80

kubectl create -f ingress.yaml

ETCD backup & Restore:

References : https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/

ETCD Backup command:

ETCDCTL_API=3 etcdctl --endpoints=<endpoint> \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save <backup-file-location>

ETCD Restore command:

ETCDCTL_API=3 etcdctl --data-dir <data-dir-location> snapshot restore snapshotdb

Kubeadm Version Upgrade:

References : https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Determine which version to upgrade to

apt update
apt-cache madison kubeadm

Upgrading control plane nodes

# replace x in 1.23.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.23.x-00 && \
apt-mark hold kubeadm

Verify that the download works and has the expected version

kubeadm version

Verify the upgrade plan:

kubeadm upgrade plan

Choose a version to upgrade to, and run the appropriate command. For example:

# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.23.x

Drain the node

  • Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

# replace <node-to-drain> with the name of the node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets

Upgrade kubelet and kubectl:

# replace x in 1.23.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.23.x-00 kubectl=1.23.x-00 && \
apt-mark hold kubelet kubectl

Restart the kubelet:

sudo systemctl daemon-reload

sudo systemctl restart kubelet

Uncordon the node:

Bring the node back online by marking it schedulable:

# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>


Troubleshooting:

Currently there are three specific sub-objectives mentioning troubleshooting:

  • Troubleshoot application failure
  • Troubleshoot cluster component failure
  • Troubleshoot networking

Troubleshoot application failure

If our application has an error, I'd first check whether the Pod is running and, in the case of multiple replicas, whether the issue affects all of them or just a few. The important thing is to determine whether the failure comes from inside the application itself, from running the Pod, or from the Service it is part of.

To check if a Pod is running we can start with the kubectl get pods command, which outputs the status of each Pod.

We can continue with a kubectl describe pod <pod-name> to further investigate the Pod. Status messages appear here.

We also have the kubectl logs <pod-name> which outputs the stdout and stderr from the containers in the Pod.

Pod failure

A Pod failure can show itself in a few ways. We'll quickly take a look at some of them; most give clues and details through the kubectl describe pod <pod-name> command.

  • Pod stays in Pending state
    • The pod doesn’t get scheduled on a Node. Often because of insufficient resources
    • Fix with freeing up resources, or add more nodes to the cluster
    • Might also be because of a Pod requesting more resources than is available
  • Pod stays in Waiting state
    • Pod has been scheduled, but cannot start. Most commonly because of an issue with the image not being pulled
    • Check that the image name is correct, and if the image is available in the registry, and that it can be downloaded
  • Pod is in ImagePullBackOff
    • As the previous, there’s something wrong with the image.
    • Check if the image name is correct, and that the image exists in the registry, and that it can be downloaded
  • Pod is in CrashLoopBackOff
    • This status comes when there’s an error in a container that causes the Pod to restart. If the error doesn’t get fixed by a restart it’ll go in a loop (depending on the RestartPolicy of the Pod)
    • Describe the Pod to see if any clues can be found on why the Pod crashes
  • Pod is in Error
    • This can be anything. Describe the Pod and check its logs
    • If nothing can be found in logs, can the Pod be recreated?
    • Export existing spec with kubectl get pod <pod-name> -o yaml > <file-name>
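To see these states in practice, a Pod that references a non-existent image (the name and tag below are deliberately made up) will cycle through ErrImagePull and ImagePullBackOff, which you can observe with kubectl get pods and kubectl describe pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: badimage-demo
spec:
  containers:
  - name: app
    image: nginx:no-such-tag   # deliberately wrong tag -> ErrImagePull / ImagePullBackOff
```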

Troubleshoot network failure

Service failure

  • If a service is not working as expected the first step is to run the kubectl get service command to check your service status. We continue with the kubectl describe service <service-name> command to get more details
  • Based on the type of failure we have a few steps that can be tried
  • If the service should be reachable by DNS name
  • Check if you can do a nslookup from a Pod in the same namespace. If this works, test from a different namespace
  • Check if other services are available through DNS. If not there might be a cluster level error
    • Test a nslookup to kubernetes.default. If this fails there’s an error with the DNS service and not your service or application
  • If DNS lookup is working, check if the service is accessible by IP
    • Try a curl or wget from a Pod to the service IP
    • If this fails there’s something wrong with your service or network
  • Check if the service is configured correctly, review the yaml and double check
  • Check if the service has any endpoints, kubectl get endpoints <service-name>
  • If endpoints are created, check if they point to the correct Pods, kubectl get pods -o wide
  • If no endpoints are created, we might have a typo in the service selector so that the service doesn’t find any matching pods
  • Finally, it might be worth checking kube-proxy
    • Check if the proxy is running, “ps aux | grep kube-proxy”
    • Check the system logs, i.e. journalctl
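Endpoints only appear when the Service selector matches Pod labels exactly. A sketch (all names are illustrative): the Service below selects app: web, so Pods from a Deployment whose template carries that same label become its endpoints; a typo in either place leaves kubectl get endpoints empty.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # must match the Pod template labels below
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web      # a mismatch here leaves the Service with no endpoints
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 8080
```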

CNI plugin

Early in a cluster's lifetime we might also have issues with the installed CNI plugin. Be sure to check the status of the installed plugin(s) and verify that they are working.

Troubleshoot cluster failure

The first thing to check if we suspect a cluster issue is the kubectl get nodes command. All nodes should be in the Ready state.

Node failure

If a Node has an incorrect state run a kubectl describe node <node-name> command to learn more.

Check the output of kubectl get events for errors.

We also have the kubectl cluster-info dump command which gives lots of details about the cluster, as well as the overall health.
