When you deploy Kubernetes, you get a cluster.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
This document outlines the various components you need to have for a complete and working Kubernetes cluster.
A Kubernetes cluster is a group of nodes (physical or virtual machines) that are used to run containerized applications. The architecture of a Kubernetes cluster typically consists of the following components:
Nodes: A node is a physical or virtual machine that runs applications and is managed by the Kubernetes control plane. On managed platforms, nodes with the same configuration are often grouped into "node pools"; within the cluster, the node controller (part of the controller manager) tracks each node's health and availability.
Master nodes: Master nodes (in current Kubernetes terminology, control plane nodes) host the control plane components of the cluster. These components include the API server, etcd, the scheduler, and the controller manager. The master nodes are responsible for managing the nodes in the cluster and scheduling the deployment of applications.
Worker nodes: Worker nodes are nodes that host the applications that are deployed on the Kubernetes cluster. They run the container runtime and the kubelet, which is responsible for managing the containers on the node.
Pods: A pod is the basic unit of deployment in Kubernetes. It consists of one or more containers that are co-located on the same node and share the same network namespace. Pods are used to host the applications that are deployed on the cluster.
Services: A service is a logical grouping of pods that provides a stable endpoint for accessing the applications running in the pods. Services are typically used to load balance traffic to the pods and allow for easy access to the applications.
Deployments: A deployment is a Kubernetes resource that is used to manage the deployment of applications on the cluster. It consists of a desired state and a current state, and the deployment controller is responsible for reconciling the two states and ensuring that the desired state is achieved.
Overall, the architecture of a Kubernetes cluster consists of nodes, master nodes, worker nodes, pods, services, and deployments, which work together to manage the deployment and execution of containerized applications on the cluster.
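To make these objects concrete, here is a minimal Pod manifest that could be submitted to the cluster with `kubectl apply -f pod.yaml` (the names, labels, and image tag are illustrative, not taken from any particular cluster):

```yaml
# A minimal Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative name
  labels:
    app: web           # label that a Service selector could match
spec:
  containers:
  - name: nginx
    image: nginx:1.25  # illustrative image tag
    ports:
    - containerPort: 80
```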
What is inside the master nodes:
Master nodes in a Kubernetes cluster are nodes that host the control plane components of the cluster. These components are responsible for managing the nodes in the cluster and scheduling the deployment of applications. The control plane components of a Kubernetes master node typically include:
API server: The API server is the central component of the Kubernetes control plane. It exposes a RESTful API that is used to manage the resources in the cluster, such as pods, services, and deployments.
etcd: etcd is a distributed key-value store that is used to store the persistent state of the cluster. It stores information about the resources in the cluster, such as the current state of the pods and services, and is used by the API server to manage the cluster.
Scheduler: The scheduler is responsible for deciding where applications run on the cluster. It watches the API server for newly created pods that have no node assigned and selects a suitable node for each one.
kube-controller-manager: The kube-controller-manager is a daemon that runs on the master node and is responsible for managing the controllers in the cluster. Controllers are responsible for reconciling the desired state of the cluster with the current state and ensuring that the desired state is achieved.
Overall, the control plane components of a Kubernetes master node are responsible for managing the nodes in the cluster and scheduling the deployment of applications. They work together to ensure that the desired state of the cluster is achieved and maintained.
What is inside the worker node:
Worker nodes in a Kubernetes cluster are nodes that host the applications that are deployed on the cluster. They run the container runtime and the kubelet, which is responsible for managing the containers on the node.
The components of a Kubernetes worker node typically include:
Container runtime: The container runtime is responsible for running the containers on the node. It is typically containerd or CRI-O and is used to manage the lifecycle of the containers, including pulling images and starting, stopping, and deleting containers.
Kubelet: The kubelet is a daemon that runs on the worker node and is responsible for managing the containers on the node. It communicates with the API server to receive instructions on which containers to run and monitors the health of the containers.
Pod: A pod is the basic unit of deployment in Kubernetes. It consists of one or more containers that are co-located on the same node and share the same network namespace. Pods are used to host the applications that are deployed on the cluster.
Container: A container is a lightweight, standalone, and executable package that contains everything that is needed to run an application, including the code, runtime, system tools, and libraries. Containers are isolated from each other and from the host system, which makes them a convenient and portable way to deploy applications.
Overall, the components of a Kubernetes worker node are responsible for running and managing the containers that host the applications deployed on the cluster. They work together to ensure that the applications are running as intended and are able to respond to requests from clients.
~~~~~~~~~~~~~~~~~~~~~~~~ The Control Plane Node ~~~~~~~~~~~~~~~~~~~~~~~~~~
The control plane is the central control center of a Kubernetes cluster and is responsible for maintaining the desired state of the cluster. It consists of several components, including:
The Kubernetes API server: This is the primary interface for interacting with the cluster and is responsible for receiving and processing requests from clients (such as kubectl or other tools) and updating the cluster's state accordingly.
etcd: This is a distributed key-value store that is used to store the cluster's configuration and state. It is used by the Kubernetes API server to store and retrieve information about the pods, services, and other resources in the cluster.
The scheduler: This is a component that is responsible for assigning pods to worker nodes in the cluster. It selects the most suitable node for a pod based on various factors, such as the available resources on the node and the pod's resource requirements.
The controller manager: This is a component that runs various controllers that are responsible for ensuring that the desired state of the cluster is maintained. The controller manager includes controllers for tasks such as replicating pods, reconciling service endpoints, and enforcing resource quotas.
Overall, the control plane is the central control center of a Kubernetes cluster and is responsible for managing and coordinating the various components and resources in the cluster to ensure that the desired state is maintained.
~~~~~~~~~~~~~~~~~~~~~The Kubernetes API server~~~~~~~~~~~~~~~~~~~~~~~
The Kubernetes API server (kube-apiserver) is the primary interface for interacting with a Kubernetes cluster. It serves the Kubernetes API, a RESTful API that allows you to create, read, update, and delete (CRUD) various resources in the cluster, such as pods, services, and deployments.
The API server is the front end of the control plane and is responsible for validating requests and persisting the cluster's desired state.
It receives requests from clients (such as kubectl or other tools) and updates the cluster's state accordingly.
The API also exposes various endpoints that allow clients to retrieve information about the cluster and its resources.
The API server is implemented in the Go programming language and uses the etcd distributed key-value store as its backing storage.
It is designed to be horizontally scalable and highly available, with multiple instances of the API server running in the cluster for redundancy.
In addition to the core Kubernetes API, there are extension mechanisms: API aggregation, which lets you serve custom APIs behind the main API server, and admission webhooks, which let you validate or mutate requests before they are persisted.
Overall, the Kubernetes API is a critical component of a Kubernetes cluster and is the primary interface for interacting with and managing the cluster.
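As an illustration of the CRUD model, the manifest below describes a ConfigMap (all names here are illustrative): submitting it with `kubectl apply` is a create or update, `kubectl get configmap app-config -o yaml` is a read, and `kubectl delete configmap app-config` is a delete.

```yaml
# ConfigMaps belong to the core API group; the API server serves
# them at REST paths like /api/v1/namespaces/default/configmaps.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config     # illustrative name
  namespace: default
data:
  LOG_LEVEL: info      # illustrative key/value
```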
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ etcd ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
etcd is a distributed key-value store that is used to store the configuration and state of a distributed system, such as a Kubernetes cluster.
It is a highly available and consistent data store that can be used to store data that needs to be shared across multiple nodes in a distributed system.
In Kubernetes, etcd is used to store the cluster's configuration and state, including information about the pods, services, and other resources in the cluster. The Kubernetes API server uses etcd to store and retrieve this information, allowing it to maintain the desired state of the cluster and ensure that the pods and containers are running as expected.
etcd is implemented as a distributed database that uses the Raft consensus algorithm to ensure that the data stored in the database is consistent and highly available. It is designed to be scalable and can handle a large number of reads and writes.
Overall, etcd is a critical component of a Kubernetes cluster and is used to store and manage the configuration and state of the cluster. It plays a key role in ensuring that the desired state of the cluster is maintained and that the pods and containers are running as expected.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Scheduler~~~~~~~~~~~~~~~~~~~~~~~~~
In Kubernetes, the scheduler is a component of the control plane that is responsible for assigning pods to worker nodes in the cluster. The scheduler selects the most suitable node for a pod based on various factors, such as the available resources on the node, the pod's resource requirements, and any specific constraints or preferences defined in the pod's configuration.
The scheduler is responsible for placing pods on nodes that have the resources to run them and for spreading them sensibly across the cluster. Note that rescheduling after a node failure is not done by the scheduler alone: controllers create replacement pods, and the scheduler then assigns them to healthy nodes.
The scheduler is implemented as a standalone process that runs on the master nodes of the cluster. It communicates with the Kubernetes API server to receive updates about the pods and nodes in the cluster and to make scheduling decisions based on the current state of the cluster.
The scheduler can be configured with various policies and constraints to control how pods are placed on nodes. For example, you can specify that certain pods should be co-located on the same node or that certain pods should be placed on nodes with specific hardware or software configurations.
Overall, the scheduler is a critical component of a Kubernetes cluster and plays a key role in ensuring that the pods are placed on the most suitable nodes and that the cluster is used efficiently.
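For example, a pod can state resource requests the scheduler must satisfy and a nodeSelector restricting it to nodes carrying a given label (the label, names, and image below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-io-task     # illustrative name
spec:
  nodeSelector:
    disktype: ssd        # only schedule onto nodes labeled disktype=ssd
  containers:
  - name: app
    image: busybox:1.36  # illustrative image
    command: ["sleep", "3600"]
    resources:
      requests:          # scheduler only considers nodes with this much free
        cpu: "500m"
        memory: 256Mi
```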
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Controller Manager~~~~~~~~~~~~~~~~~
The controller manager is a component of the Kubernetes control plane that runs various controllers that are responsible for ensuring that the desired state of the cluster is maintained. The controller manager includes controllers for tasks such as replicating pods, reconciling service endpoints, and enforcing resource quotas.
Each controller is a loop that runs continuously in the background, checking the current state of the cluster against the desired state and making any necessary changes to bring the cluster back into alignment. For example, the ReplicaSet controller ensures that the desired number of replicas of a pod is running at any given time, while the endpoints controller keeps Service endpoints in sync with the pods each Service selects.
The controller manager is implemented as a standalone process that runs on the master nodes of the cluster. It communicates with the Kubernetes API server to receive updates about the pods, services, and other resources in the cluster and to make any necessary changes to the cluster's state.
Overall, the controller manager is a critical component of the Kubernetes control plane and is responsible for ensuring that the desired state of the cluster is maintained and that the pods and containers are running as expected.
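A Deployment is a typical piece of desired state that these controllers reconcile: if one of the three replicas below dies, a replacement pod is created automatically (names, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state: three pods at all times
  selector:
    matchLabels:
      app: web
  template:              # pod template the replicas are created from
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```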
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~kubelet~~~~~~~~~~~~~~~~~
The kubelet is a core component of a Kubernetes cluster. It is a process that runs on each node in the cluster and is responsible for managing the pods and containers running on that node.
The main purpose of the kubelet is to ensure that the desired state of the pods and containers on the node is maintained. It does this by constantly checking the status of the pods and containers and making any necessary adjustments to ensure that they are running as expected.
The kubelet works closely with the Kubernetes API server to receive instructions from the control plane about the desired state of the pods and containers on the node. It then uses various tools and utilities to manage the pods and containers, such as the container runtime (e.g., containerd) and the network plugin.
Some of the key tasks performed by the kubelet include:
Starting and stopping pods and containers based on the desired state
Monitoring the health of pods and containers and taking action if necessary (e.g., restarting a container that has crashed)
Reporting the status of pods and containers to the API server
Mounting volumes and secrets for pods
Configuring the network namespace for pods
Overall, the kubelet plays a critical role in ensuring that the pods and containers on a node are running smoothly and that the desired state of the node is maintained.
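Health monitoring is configured per container through probes that the kubelet executes; the sketch below adds a liveness probe that restarts the container when HTTP checks fail (name, image, path, and ports are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app       # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25    # illustrative image
    livenessProbe:       # kubelet restarts the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```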
~~~~~~~~~~~~~~~~~~~~~~~~~~~~kube-proxy~~~~~~~~~~~~~~~~~~~~~~~~
The kube-proxy is a component of a Kubernetes cluster that runs on each node and is responsible for implementing the cluster's networking rules. It is responsible for forwarding network traffic to the correct pods and services in the cluster.
The main purpose of the kube-proxy is to ensure that traffic addressed to Services is routed to the right pods, both from inside the cluster and, for NodePort and LoadBalancer Services, from outside it. It does this by implementing the virtual-IP rules defined by the cluster's Service resources; Ingress traffic, by contrast, is handled by a separate ingress controller, not by kube-proxy.
The kube-proxy works closely with the Kubernetes API server to receive updates about the cluster's networking rules and to learn about the pods and services running on the node. It then uses various networking tools and utilities, such as iptables or ipvs, to implement the networking rules and forward traffic to the correct pods and services.
Some of the key tasks performed by the kube-proxy include:
1. Forwarding traffic addressed to a Service's cluster IP to the correct backing pods
2. Load balancing traffic across multiple replicas of a service
3. Exposing services on node ports for NodePort and LoadBalancer Services
(Ingress routing and NetworkPolicy enforcement are handled by an ingress controller and the CNI network plugin, respectively, not by kube-proxy.)
Overall, the kube-proxy plays a critical role in ensuring that Service traffic is routed correctly within the cluster and, for NodePort and LoadBalancer Services, reachable from outside it.
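A Service manifest is what gives kube-proxy its rules: traffic to the Service's cluster IP on port 80 is load-balanced across all pods matching the selector (names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web       # pods carrying this label receive the traffic
  ports:
  - port: 80       # port exposed on the Service's cluster IP
    targetPort: 80 # container port on the selected pods
```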
~~~~~~~~~~~~~~~~~~~~~~~~Container runtime~~~~~~~~~~~~~~~~~~~
A container runtime is the software that is responsible for executing and managing containers on a host operating system. It is the interface between the containers and the underlying operating system and provides the necessary tools and utilities to run and manage the containers.
There are several different container runtime options available, including:
Docker: Docker is the most widely known container tool and was long the default runtime for Kubernetes. Since Kubernetes removed the built-in dockershim in v1.24, Docker Engine can still be used as a runtime, but only through the cri-dockerd adapter.
containerd: containerd is a lightweight container runtime that is designed to be easy to use and integrate with other systems. It is often used as the default container runtime in Kubernetes clusters.
rkt: rkt (pronounced "rocket") was a container runtime designed to be lightweight and secure, positioned as an alternative to Docker. The project has since been archived and is no longer maintained, so it is not a practical choice for new clusters.
CRI-O: CRI-O is a container runtime that is specifically designed for use with Kubernetes. It is built on top of OCI (Open Container Initiative) compliant runtimes and is designed to be lightweight and modular.
Overall, the choice of container runtime will depend on the specific needs of your environment and the container orchestration platform you are using. Some runtimes may be better suited for certain use cases or environments, so it's important to choose the runtime that best meets your needs.