Before we get into what Kubernetes is, let us first discuss why we need it.
When we deploy multiple containers of our application, managing them in production-like environments becomes difficult, even on a single server. Several requirements must be considered, such as making the infrastructure highly available, scalable, secure, and, most importantly, easily manageable.
Now, let’s have a look at the drawbacks of operating containers in standalone mode and the motivation for orchestration.
- There is no support for on-demand autoscaling.
- Installing storage systems, monitoring resources, and discovering services are all complex tasks.
- Load balancing, application health checks, and instance replication are not available.
- Releases incur downtime.
- There is no built-in authentication or authorization.
We achieve the following with orchestration:
- Load balancing and better resource utilisation
- Faster application deployment, rolling updates with zero downtime
- Simplified configuration and portable application packaging across environments
- More than just scheduling: easy scaling and service replication
Now, let’s get back to the point of this blog: what is Kubernetes?
Kubernetes is an open-source platform for automating the deployment, scaling, and administration of containerized applications. It can be installed and managed on all major cloud platforms, including AWS, Google Cloud Platform, and Azure, and can also run on-premises or in a hybrid environment. Kubernetes hosts our apps as containers in an automated manner, allowing us to easily deploy as many instances of our application as needed.
Let’s continue with a high-level overview of the Kubernetes architecture.
etcd, API Server, Scheduler, and Controller Manager are common components of the Master node.
etcd
- A database that holds the cluster's data in key-value format
Kube API Server
- Serves as the control plane’s front end
- Provides a REST API and consumes JSON and YAML (manifest files)
- Kubernetes’ core management component
Kube Scheduler
- Monitors the Kube API Server for new pods and assigns them to worker nodes
Kube Controller Manager
- Watches for changes and helps reconcile the cluster's current state with the desired state
- Also helps in the resolution of problems
A typical K8s worker node has a Kubelet, a Kube Proxy, and a container engine.
Kubelet
- Serves as the primary Kubernetes agent and the sole point of contact for the k8s master
- Registers a node in the cluster, monitors the API server for pod instantiation, and reports to the master
Kube Proxy
- A process that executes on each node of the k8s cluster
- Implements pod networking using iptables rules
- Monitors new services in the cluster and balances load across all pods of a service
Container Engine
- Uses container engines such as Docker and rkt to manage containers: pulling images, starting and stopping containers, and so on
A high-level overview of the K8s objects follows. To get started with objects, you can use a specification file like the one below. It is a definitions-only YAML file.
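As a sketch, a minimal Deployment manifest might look like this (the name, labels, and image are illustrative, not prescribed by this blog):

```yaml
apiVersion: apps/v1            # API version that provides the Deployment object
kind: Deployment               # type of K8s object being created
metadata:
  name: nginx-deployment       # illustrative object name
  labels:
    app: nginx                 # labels are free-form key-value pairs
  annotations:
    description: "sample deployment"
spec:
  replicas: 3                  # desired number of pod instances
  selector:
    matchLabels:
      app: nginx               # pods matching this label are managed by the deployment
  template:                    # pod template used to create each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

The sections of this file (apiVersion, kind, metadata, spec) are discussed one by one below.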
apiVersion
- Refers to the Kubernetes API version
- There are various versions available, each with its own set of objects; v1 and apps/v1 are a few examples
kind
This is the section in which we define the type of K8s object. We are going to build a Deployment in this sample.
metadata
The metadata block contains a brief description of the object. It contains the object's name (in this case, the deployment's name), its labels, and its annotations. We can create as many labels as we like and are not restricted to specific words.
spec
- We define the desired state of our object in the spec section
Several terms in this section may be unfamiliar; let us discuss them in detail.
Pods
Kubernetes does not deploy containers directly on worker nodes. Containers are encapsulated in a k8s object known as a pod. A pod is a single instance of an application and one of the smallest objects we create in k8s.
- A pod is a container or group of containers that are deployed on the same host
- They are always co-located, co-scheduled, and run in a shared context
- A pod can have one or more containers; best practice is to deploy a single container in a pod
- We can generally replace the word “pod” with “container”
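A minimal pod spec can illustrate the points above (the name and image are assumptions for the example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod              # illustrative pod name
  labels:
    app: nginx
spec:
  containers:                  # a list, so a pod may hold more than one container
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Following the best practice noted above, the list holds a single container.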
ReplicaSet
ReplicaSet is a more advanced alternative to the Replication Controller, supported in apps/v1 and later API versions. It uses the matchLabels field specified in the selector option.
- The role of the replica set is to monitor the pods and redeploy them if they fail
- It is utilised when we wish to run many pods of our application at the same time
- A ReplicaSet can also manage pods that were not created as part of it, using labels and selectors
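A sketch of a ReplicaSet manifest, assuming illustrative names and labels:

```yaml
apiVersion: apps/v1            # ReplicaSets require apps/v1 or later
kind: ReplicaSet
metadata:
  name: nginx-replicaset       # illustrative name
spec:
  replicas: 3                  # desired number of pods to keep running
  selector:
    matchLabels:
      app: nginx               # matching pods are adopted, even if created separately
  template:
    metadata:
      labels:
        app: nginx             # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If any of the three pods fails, the ReplicaSet redeploys it to restore the desired count.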
Labels and Selectors
ReplicaSets use labels and selectors to maintain the desired state when handling hundreds or thousands of pods in production-like environments.
- The client/user can identify a group of objects using the label selector.
- In Kubernetes, the label selector is the primary grouping primitive.
- It can be applied to objects at creation time and added or updated later
Deployment
Deployment allows us to upgrade the underlying instances of our application using a variety of strategies, such as rolling updates.
- Pod template, count, and label selector are all part of the deployment
- K8s will attempt to maintain the desired number of pods matching the label selector
- If the current deployment is unstable, we can roll back to a previous deployment revision
- A deployment can be paused and resumed
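As a sketch, the rolling-update behaviour described above is tuned under the Deployment's spec; the values here are illustrative assumptions, not recommendations:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # replace pods gradually instead of all at once
    rollingUpdate:
      maxUnavailable: 1        # at most one pod may be down during an update
      maxSurge: 1              # at most one extra pod above the desired count
```

With settings like these, an update proceeds pod by pod, which is how zero-downtime releases are achieved. Rollbacks, pauses, and resumes are driven through the kubectl rollout subcommands.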
Services
- K8s services facilitate communication between internal and external components of an application
- Functions as an internal load balancer, directing traffic to several pods on various nodes
- In applications with a ui-api-database three-tier architecture, services help pods communicate with one another
NodePort, ClusterIP (the default), and LoadBalancer are the main types of services. Each has its own use cases, with ClusterIP being the most common.
- A service is a collection of running pods that may be accessed via a virtual IP address.
- It functions as a load balancer for numerous pods
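A minimal ClusterIP service might be sketched like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service          # illustrative name
spec:
  type: ClusterIP              # default type; omitting it has the same effect
  selector:
    app: nginx                 # traffic is balanced across pods carrying this label
  ports:
  - port: 80                   # port exposed on the service's virtual IP
    targetPort: 80             # container port on the backing pods
```

The service gets a stable virtual IP, and requests to it are load-balanced across all matching pods, wherever they run.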
Namespaces
Namespaces are logical partitions for isolating groups of applications and providing them with the resources they demand. We can separate cluster resources (CPU, RAM, storage, network in/out, and so on) for different types of applications so that one group does not interfere with the operation of another under heavy traffic. K8s automatically creates three namespaces: kube-system, default, and kube-public.
- A mechanism for dividing a cluster into logically named groups, such as development, quality assurance, testing, and production.
- Resource quotas defined on a namespace prevent you from using more resources than you are allowed
- In situations with many users dispersed across many teams or projects, a namespace can be employed.
- It enables us to grant permission to trusted users
- It provides scope for named resources (to avoid naming clashes)
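Creating a namespace such as the development one mentioned above takes only a short manifest (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

Objects are then placed in it by adding namespace: development to their own metadata block, keeping their names scoped to that group.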
Do keep an eye on this space for more blogs on Kubernetes Architecture.
This blog was written by Kailash Verma, an innovative Technology Lead at Tavisca Solutions who believes in maximizing his productivity to respond quickly to changing business needs. He is passionate about DevOps practices, Amazon Web Services, Docker, container orchestration, monitoring, CI/CD, and databases, and brings his technological ideas to life through his write-ups.