KUBERNETES OVERVIEW
If you deploy a single containerized application, or manage only a handful of them, it's simple enough to do with existing tools.
But when we are developing dozens of applications that are each made of multiple containers,
we need container orchestration.
Container orchestration is defined as a system for automatically deploying, managing, and scaling containerized applications on a group of servers.
In short, orchestration is to containers what cluster management is to virtual machines.
When instructed to do so, a container orchestrator finds a suitable host to run your container image.
This is often called scheduling.
Furthermore, a container orchestrator enables service discovery, which allows containers to discover each other automatically, even as they move between hosts.
An orchestrator also provides load balancing across your containers.
The container orchestrator makes sure that your applications are highly available.
It monitors the health of your containers.
In case of failures, an orchestrator automatically re-provisions the containers and, if necessary, schedules them onto another host.
An orchestrator can also provide resiliency against host failures by ensuring anti-affinity, meaning that the containers are scheduled onto separate hosts.
Finally, an orchestrator adds and removes instances of your containers to keep up with demand.
It can even take advantage of the scaling rules when upgrading your application, in order to avoid any downtime whatsoever.
Kubernetes is a popular open source container orchestrator system.
In Kubernetes, the logical grouping for one or more containers is called a pod,
similar to the collective noun for whales: a group of whales is called a pod.
This is, of course, a reference to the whale of the popular Moby container runtime.
Containers in a pod share storage, network, and other specifications.
They can, for example, connect to each other through localhost, and they share IP addresses and ports.
Typically, application front end and back ends are separated into their own pods.
This allows for independent scaling and upgrading.
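As a minimal sketch (the pod and container names here are made up for illustration), a pod with two containers that share the same network namespace could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # hypothetical name
spec:
  containers:
  - name: web
    image: nginx            # serves on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox          # can reach the web container via localhost:80
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 30; done"]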
A Kubernetes service is a set of pods that is exposed as a network service, such as a load balancer or a static IP address.
When pods are exposed as a service, they can be discovered by other applications in the Kubernetes cluster.
Services can also be exposed outside of the cluster to the internet.
Kubernetes pods are hosted in nodes.
Nodes are servers that have a container runtime and the Kubernetes node components installed.
Nodes communicate to the Kubernetes control plane.
The control plane provides the orchestration features, such as scheduling.
Kubernetes Features
====================
Kubernetes provides integration with local file storage and public cloud providers.
This means that we can mount native cloud storage services as volumes for our container applications running in Kubernetes.
The same applies to secrets:
Kubernetes stores and manages secrets outside of the pod definition or the container image.
When pods are scheduled to nodes, they request access to the specific secrets at runtime.
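As a small sketch of the native mechanism (the secret, pod, and key names here are hypothetical), a pod can request a Kubernetes secret at runtime instead of baking it into the image:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # hypothetical secret, stored outside the pod definition and image
type: Opaque
stringData:
  password: "example-only"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD     # injected at runtime from the secret
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password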
Kubernetes lets you scale your application programmatically, through a GUI, or automatically based on CPU utilization or custom metrics.
This is defined in the Horizontal Pod Autoscaler.
And finally, Kubernetes lets you automatically roll out applications or configuration changes, while monitoring the health and availability of your application.
You can start by introducing the new updates to only a handful of pods, and if everything looks good, let Kubernetes roll the changes out to the rest.
If something goes wrong, the changes can even be rolled back to the last known good state, automatically.
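Both behaviors are exposed through standard kubectl rollout commands; for example, with the hostname-v1 deployment used later in this post:
kubectl rollout status deployment/hostname-v1   # watch a rolling update progress
kubectl rollout undo deployment/hostname-v1     # roll back to the last known good state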
Azure Kubernetes Service, or AKS, is a managed cloud service that simplifies building and managing applications with Kubernetes.
But what does a managed service mean? In the case of AKS, it means that Microsoft takes care of some of the maintenance tasks related to the operation of the Kubernetes cluster.
A Kubernetes cluster is made of a control plane and nodes.
In Azure Kubernetes Service, the Azure platform manages the control plane for us.
The Kubernetes nodes are provisioned automatically, but they are still ultimately our responsibility.
When you create an AKS cluster, Microsoft automatically creates and configures the control plane for you.
The control plane provides core Kubernetes features such as pod scheduling and service discovery.
The control plane is visible to us as an Azure Kubernetes Service resource. We can interact with the control plane using the Kubernetes APIs, kubectl, or the Kubernetes Dashboard, but we cannot work with the control plane directly.
Microsoft is responsible for maintaining the control plane and keeping it highly available.
If we want to make changes to the control plane, such as upgrading our Kubernetes cluster to a new version, we can use the Azure Portal or the AKS commands in the Azure CLI.
az aks upgrade --kubernetes-version 1.16.9 --name rakAKSCluster --resource-group RGP-USE-PLH-NP
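Before upgrading, you can check which versions are available for the cluster with the standard az aks get-upgrades command, shown here with the names used in this post:
az aks get-upgrades --resource-group RGP-USE-PLH-NP --name rakAKSCluster --output table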
Your application containers run in Kubernetes nodes.
In AKS, nodes are Azure virtual machines created by the control plane.
For example, if you want to add a new node to your Kubernetes cluster, you simply use the az aks scale command in the CLI.
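As a sketch using the cluster from this post (and assuming the cluster autoscaler is not managing the node count), scaling out to two nodes would look like this; the full az aks scale example appears again later:
az aks scale --resource-group RGP-USE-PLH-NP --name rakAKSCluster --node-count 2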
The node virtual machine resources will be created with the Ubuntu Linux operating system and the Moby container runtime installed.
Additionally, the kubelet agent and kube-proxy are installed and configured.
The AKS resource creates the necessary Azure virtual machine, Azure disk, and Azure virtual network resources for us. They are created in a managed cluster resource group. That resource group is automatically created and named with the MC prefix.
Once the nodes are created, the operating system of the virtual machines remains our responsibility.
Security updates are automatically applied to Linux nodes, but AKS does not automatically reboot the nodes to complete the update process.
Node reboots remain our responsibility.
The AKS control plane is provided as a free service. Nodes, disks, and networking resources are all our responsibility, and they incur regular costs.
Microsoft Service Level Agreements guarantee the availability of our nodes.
This means that Microsoft reimburses us if they do not meet the uptime guarantees.
But as there is no cost involved with the control plane, there has not been an official SLA for the Kubernetes API server endpoints, that is, the control plane.
Instead, Microsoft has published a service level objective of two and a half nines, or 99.5%.
Microsoft has just announced an optional uptime SLA feature for the control plane, too.
With this paid feature, you can get an uptime SLA with a guarantee of three and a half nines, or 99.95%, for a cluster that uses availability zones.
1. Create a service principal
2. Get the ACR resource ID
3. Create a role assignment
4. Create an AKS cluster with the name rakAKSCluster and associate the appId and password
To allow an AKS cluster to interact with other Azure resources, such as the Azure Container Registry we created in a previous blog, an Azure Active Directory (AD) service principal is used.
To create the service principal:
az ad sp create-for-rbac --skip-assignment
Execute this command:-
azuser@ubuntutest2020:~$ az ad sp create-for-rbac --skip-assignment
or
az ad sp create-for-rbac --name rakeshServicePrincipal --skip-assignment
Its output will be:-
{
"appId": "db45168e-XXXX-4701-a2ed-ae4480db03b1",
"displayName": "azure-cli-2020-08-02-06-44-03",
"name": "http://azure-cli-2020-08-02-06-44-03",
"password": "mYezngEP_XXXXXXX_7aMGarpH2wxUFf9",
"tenant": "8896b7ee-CCCCC-4488-8fe2-05635ccbcf01"
}
Make a note of the appId and password; you will need these later. Better yet, save these credentials somewhere secure.
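If you prefer not to copy the values by hand, a small sketch like the following (assuming a bash shell; the APP_ID and SP_PASSWORD variable names are our own) captures them at the moment the service principal is created:
read -r APP_ID SP_PASSWORD <<< "$(az ad sp create-for-rbac --name rakeshServicePrincipal --skip-assignment --query '[appId,password]' -o tsv)"
echo $APP_ID   # reuse these values in the role assignment and az aks create steps below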
2.Get the ACR resource ID:
az acr show --resource-group RGP-USE-PLH-NP --name rakakcourse28 --query "id" --output tsv
output:-
/subscriptions/9239f519-8504-XXXX-ae6f-c84d53ba3714/resourceGroups/RGP-USE-PLH-NP/providers/Microsoft.ContainerRegistry/registries/rakakcourse28
3.Create a role assignment:
az role assignment create --assignee <appId> --scope <acrId> --role Reader
Example:-
az role assignment create --assignee db45168e-XXXX-4701-a2ed-ae4480db03b1 --scope /subscriptions/9239f519-XXXX-4e92-ae6f-c84d53ba3714/resourceGroups/RGP-USE-PLH-NP/providers/Microsoft.ContainerRegistry/registries/rakakcourse28/ --role Reader
4. Create an AKS cluster with the name rakAKSCluster and associate the appId and password.
azuser@ubuntutest2020:~$ az aks create \
--resource-group RGP-USE-PLH-NP \
--name rakAKSCluster \
--node-count 1 \
--vm-set-type VirtualMachineScaleSets \
--load-balancer-sku standard \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 3 \
--generate-ssh-keys \
--service-principal db45168e-XXXXX-4701-a2ed-ae4480db03b1 --client-secret mYezngEP_XXXX_7aMGarpH2wxUFf9
This will create a cluster (which may take 5-10 minutes).
output:-
SSH key files '/root/.ssh/id_rsa' and '/root/.ssh/id_rsa.pub' have been generated under ~/.ssh to allow SSH access to the VM. If using machines without permanent storage like Azure Cloud Shell without an attached file share, back up your keys to a safe location
-->To display the metadata of the AKS cluster that you've created, use the following command. Copy the principalId, clientId, subscriptionId, and nodeResourceGroup for later use. If the AKS cluster was not created with managed identities enabled, the principalId and clientId will be null.
verification:- az aks show --name rakAKSCluster -g RGP-USE-PLH-NP
The behavior of this command has been altered by the following extension: aks-preview
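If you only need specific fields, the standard --query option can pull them out; the JMESPath paths below are assumptions based on the typical az aks show output for a service-principal-based cluster:
az aks show --name rakAKSCluster -g RGP-USE-PLH-NP --query "{clientId: servicePrincipalProfile.clientId, nodeResourceGroup: nodeResourceGroup}" -o json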
5. Install kubectl, the Kubernetes CLI.
Once it is installed, we can connect to the Kubernetes environment via the Kubernetes CLI.
If you are using the Azure Cloud Shell, the kubernetes client (kubectl) is already installed.
You can also install kubectl locally if you haven't previously installed it:
-->az aks install-cli
or
snap install kubectl --classic
kubectl version --client
output:-
Downloading client to "/usr/local/bin/kubectl" from "https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/amd64/kubectl"
Please ensure that /usr/local/bin is in your search PATH, so the `kubectl` command can be found.
Downloading client to "/tmp/tmpzcr2zebh/kubelogin.zip" from "https://github.com/Azure/kubelogin/releases/download/v0.0.6/kubelogin.zip"
Please ensure that /usr/local/bin is in your search PATH, so the `kubelogin` command can be found.
6.Get access credentials for a managed Kubernetes cluster
azuser@ubuntutest2020:~$ az aks get-credentials --resource-group RGP-USE-PLH-NP --name rakAKSCluster --admin
The behavior of this command has been altered by the following extension: aks-preview
Merged "rakAKSCluster-admin" as current context in /home/azuser/.kube/config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
azuser@ubuntutest2020:~$ kubectl version
output
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.10", GitCommit:"89d8075525967c7a619641fabcb267358d28bf08", GitTreeState:"clean", BuildDate:"2020-06-23T02:52:37Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
7. Check your connection and that the Kubernetes CLI is working with:
azuser@ubuntutest2020:~$ kubectl get nodes
root@ubuntuserver01:/home/admina# kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-nodepool1-32633493-vmss000000 Ready agent 12m v1.17.9
azuser@ubuntutest2020:~$ az acr list --resource-group RGP-USE-PLH-NP --query "[].{acrLoginServer:loginServer}" --output tsv
rakakcourse28.azurecr.io
azuser@ubuntutest2020:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7", GitCommit:"5737fe2e0b8e92698351a853b0d07f9c39b96736", GitTreeState:"clean", BuildDate:"2020-06-24T19:54:11Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
This will tell you both the local client version and the configured Kubernetes server version;
make sure the client is at least the same version as the server, if not newer.
8. Create a hostname.yml file that references the image we stored in the Azure Container Registry.
azuser@ubuntutest2020:~$ cat hostname.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
      - image: rakakcourse28.azurecr.io/hostname:v1
        imagePullPolicy: Always
        name: hostname
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hostname
  sessionAffinity: None
  type: LoadBalancer
azuser@ubuntutest2020:~$ vi hostname.yml
9. Apply the yml file
azuser@ubuntutest2020:~$ kubectl apply -f hostname.yml
deployment.apps/hostname-v1 created
service/hostname unchanged
10. Get the external IP of the load balancer of the AKS cluster
azuser@ubuntutest2020:~$ kubectl get svc hostname -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostname LoadBalancer 10.0.180.98 52.191.86.89 80:30182/TCP 23m
azuser@ubuntutest2020:~$ kubectl get svc hostname -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostname LoadBalancer 10.0.180.98 52.191.86.89 80:30182/TCP 33m
11. Test the application by calling the external IP with curl
azuser@ubuntutest2020:~$ curl http://52.191.86.89
<HTML>
<HEAD>
<TITLE>This page is on hostname-v1-5d7984db8b-qnflf and is version v1</TITLE>
</HEAD><BODY>
<H1>THIS IS HOST hostname-v1-5d7984db8b-qnflf</H1>
<H2>And we're running version: v1</H2>
</BODY>
</HTML>
root@ubuntuserver01:/home/admina# kubectl get pods
NAME READY STATUS RESTARTS AGE
hostname-v1-5d7984db8b-b4fxg 1/1 Running 0 11m
To install the Secrets Store CSI driver, you first need to install Helm.
With the Secrets Store CSI driver interface, you can get the secrets that are stored in your Azure key vault instance and then
use the driver interface to mount the secret contents into Kubernetes pods.
1. Install Helm
2. Install Secrets Store CSI driver
Install Helm and the Secrets Store CSI driver
Install helm
https://helm.sh/docs/intro/install/
From Apt (Debian/Ubuntu)
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
Install the Secrets Store CSI driver and the Azure Key Vault provider for the driver:
--> helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
-->helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --generate-name
Create an Azure key vault and set your secrets
az keyvault create --name "rakaks-Vault2" --resource-group "RGP-USE-PLH-NP" --location eastus
az keyvault secret set --vault-name "rakaks-Vault2" --name "ExamplePassword" --value "hVFkk965BuUv"
az keyvault secret list --vault-name "rakaks-Vault2"
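To read a value back and confirm it was stored correctly, the standard az keyvault secret show command can be used:
az keyvault secret show --vault-name "rakaks-Vault2" --name "ExamplePassword" --query value -o tsv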
Assign your service principal Reader access to your existing key vault.
Here the --assignee parameter (AZURE_CLIENT_ID) is the appId that you copied after you created your service principal.
-->az role assignment create --role Reader --assignee '6a9171ad-e645-41e0-91d3-404afe478555' --scope '/subscriptions/9239f519-8504-4e92-ae6f-c84d53ba3714/resourceGroups/RGP-USE-PLH-NP/providers/Microsoft.KeyVault/vaults/rakaks-Vault2'
output:-
{
"canDelegate": null,
"id": "/subscriptions/9239f519-8504-4e92-ae6f-c84d53ba3714/resourceGroups/RGP-USE-PLH-NP/providers/Microsoft.KeyVault/vaults/rakaks-Vault2/providers/Microsoft.Authorization/roleAssignments/0873a91f-5d33-4a9a-9141-14fd5a0ec689",
"name": "0873a91f-5d33-4a9a-9141-14fd5a0ec689",
"principalId": "3c29c6bc-123e-42dc-b712-a12b05c513c4",
"principalType": "ServicePrincipal",
"resourceGroup": "RGP-USE-PLH-NP",
"roleDefinitionId": "/subscriptions/9239f519-8504-4e92-ae6f-c84d53ba3714/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
"scope": "/subscriptions/9239f519-8504-4e92-ae6f-c84d53ba3714/resourceGroups/RGP-USE-PLH-NP/providers/Microsoft.KeyVault/vaults/rakaks-Vault2",
"type": "Microsoft.Authorization/roleAssignments"
}
Grant the service principal permissions to get secrets:
az keyvault set-policy -n 'rakaks-Vault2' --secret-permissions get --spn '6a9171ad-e645-41e0-91d3-404afe478555'
You've now configured your service principal with permissions to read secrets from your key vault. The $AZURE_CLIENT_SECRET is the password of your service principal.
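As a small sketch (assuming a bash shell), set that variable to the password returned when you created the service principal before running the next command:
export AZURE_CLIENT_SECRET='<your-service-principal-password>'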
Next, add your service principal credentials as a Kubernetes secret that's accessible by the Secrets Store CSI driver:
root@ubuntuserver01:/home/admina# kubectl create secret generic secrets-store-creds --from-literal clientid=6a9171ad-e645-41e0-91d3-404afe478555 --from-literal clientsecret=$AZURE_CLIENT_SECRET
example:-
root@ubuntuserver01:/home/admina# kubectl create secret generic secrets-store-creds --from-literal clientid=6a9171ad-e645-41e0-91d3-404afe478555 --from-literal clientsecret=kM9ZUHT1Y.a3kdXXXXt-ouxdFZtQ09
secret/secrets-store-creds created
root@ubuntuserver01:/home/admina#
Create a file named secretProviderClass.yaml:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: rakaksvault2
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"          # [REQUIRED] Set to "true" if using managed identities
    useVMManagedIdentity: "false"    # [OPTIONAL] if not provided, will default to "false"
    userAssignedIdentityID: "6a9171ad-e645-41e0-91d3-404afe478555"  # [REQUIRED] If you're using a service principal, use the client id to specify which user-assigned managed identity to use. If you're using a user-assigned identity as the VM's managed identity, specify the identity's client id. If the value is empty, it defaults to use the system-assigned identity on the VM
                                     # az ad sp show --id http://contosoServicePrincipal --query appId -o tsv
                                     # the preceding command will return the client ID of your service principal
    keyvaultName: "rakaks-Vault2"    # [REQUIRED] the name of the key vault
                                     # az keyvault show --name contosoKeyVault5
                                     # the preceding command will display the key vault metadata, which includes the subscription ID, resource group name, key vault
    cloudName: ""                    # [OPTIONAL for Azure] if not provided, Azure environment will default to AzurePublicCloud
    objects: |
      array:
        - |
          objectName: "ExamplePassword"   # [REQUIRED] object name
                                          # az keyvault secret list --vault-name "contosoKeyVault5"
                                          # the above command will display a list of secret names from your key vault
          objectType: secret              # [REQUIRED] object types: secret, key, or cert
          objectVersion: ""               # [OPTIONAL] object versions, default to latest if empty
    resourceGroup: "RGP-USE-PLH-NP"       # [REQUIRED] the resource group name of the key vault
    subscriptionId: "9239f519-8504-4e92-ae6f-c84d53ba3714"   # [REQUIRED] the subscription ID of the key vault
    tenantId: "8896b7ee-113f-4488-8fe2-05635ccbcf01"         # [REQUIRED] the tenant ID of the key vault
Copy the file to the /home/admina folder on the Ubuntu server so that we can run the command there.
Deploy your pod with mounted secrets from your key vault
To configure your SecretProviderClass object, run the following command:
--> kubectl apply -f secretProviderClass.yaml
Example:-
root@ubuntuserver01:/home/admina# kubectl apply -f secretProviderClass.yaml
secretproviderclass.secrets-store.csi.x-k8s.io/rakaksvault2 created
Next, deploy your Kubernetes pods with the SecretProviderClass and the secrets-store-creds that you configured earlier.
Create a file named updateDeployment.yaml:
# This is a sample pod definition for using SecretProviderClass and service-principal for authentication with Key Vault
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "rakaksvault2"
      nodePublishSecretRef:         # Only required when using service principal mode
        name: secrets-store-creds   # Only required when using service principal mode
Copy the file to the /home/admina folder on the Ubuntu server so that we can run the command there.
--> kubectl apply -f updateDeployment.yaml
root@ubuntuserver01:/home/admina# kubectl apply -f updateDeployment.yaml
pod/nginx-secrets-store-inline created
root@ubuntuserver01:/home/admina#
Check the pod status and secret content
To display the pods that you've deployed, run the following command:-
kubectl get pods
root@ubuntuserver01:/home/admina# kubectl get pods
NAME READY STATUS RESTARTS AGE
csi-secrets-store-provider-azure-1600924880-j8j9v 1/1 Running 0 3d6h
csi-secrets-store-provider-azure-1600924880-secrets-store-4hzkx 3/3 Running 0 3d6h
hostname-v1-b797bf78-gcclq 1/1 Running 0 5d22h
hostname-v1-b797bf78-j9qzr 1/1 Running 0 5d22h
hostname-v1-b797bf78-vx44b 1/1 Running 0 5d22h
nginx-secrets-store-inline 1/1 Running 0 52m
root@ubuntuserver01:/home/admina#
To check the status of your pod, run the following command:
kubectl describe pod/nginx-secrets-store-inline
To display all the secrets that are contained in the pod, run the following command:
kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
To display the contents of a specific secret, run the following command:-
kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/ExamplePassword
root@ubuntuserver01:/home/admina# kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/ExamplePassword
hVFkk965BuUv
root@ubuntuserver01:/home/admina#
Scaling pods and Nodes:-
Applications can be scaled in multiple ways, from manual to automatic at the POD level:
You can manually define the number of pods with:
kubectl scale --replicas=5 deployment/hostname-v1
root@ubuntuserver01:/home/admina# kubectl scale --replicas=5 deployment/hostname-v1
output:-
deployment.apps/hostname-v1 scaled
kubectl get pods
output:-
NAME READY STATUS RESTARTS AGE
hostname-v1-5d7984db8b-2ssjn 1/1 Running 0 46s
hostname-v1-5d7984db8b-b4fxg 1/1 Running 0 13m
hostname-v1-5d7984db8b-lxn4g 1/1 Running 0 46s
hostname-v1-5d7984db8b-lzfz7 1/1 Running 0 46s
hostname-v1-5d7984db8b-p7nwq 1/1 Running 0 46s
Update the hostname deployment CPU requests and limits to the following.
Now we go from no limits to defined requests and limits.
updated hostname.yml
========================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
      - image: rakakcourse28.azurecr.io/hostname:v1
        imagePullPolicy: Always
        name: hostname
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  ports:
  - nodePort: 31575
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hostname
  sessionAffinity: None
  type: LoadBalancer
kubectl apply -f hostname.yml
Now scale your app with the following:
kubectl autoscale deployment hostname-v1 --cpu-percent=50 --min=3 --max=10
You can see the status of your pods with:
kubectl get hpa
kubectl get pods
The manually set number of replicas (5) should reduce to 3 given there is minimal load on the app.
root@ubuntuserver01:/home/admina# kubectl get pods
NAME READY STATUS RESTARTS AGE
hostname-v1-b797bf78-gcclq 1/1 Running 0 5m10s
hostname-v1-b797bf78-j9qzr 1/1 Running 0 3m52s
hostname-v1-b797bf78-vx44b 1/1 Running 0 3m52s
It is also possible to change the actual k8s cluster size. During cluster creation, you can set the cluster size with the flag:
--node-count
If we didn't enable the cluster autoscaler, we could manually change the node pool size after creation using:
az aks scale --resource-group RGP-USE-PLH-NP --name rakAKSCluster --node-count 3
Cluster autoscaling needs to be enabled at cluster creation time; at the moment it is not possible to enable autoscaling afterwards, or to change the min and max node counts on the fly (though we can manually change the node count in our cluster).
In order to trigger an autoscale, we can first remove the POD autoscaling hpa service:
kubectl delete hpa hostname-v1
Then we can scale our PODs (we set a max of 20 per node) to 25:
kubectl scale --replicas=25 deployment/hostname-v1
After a few minutes, we should see 25 pods running across at least two if not all three nodes in our autoscale group
kubectl get pods -o wide -w
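You can also watch the cluster autoscaler add nodes with the standard node listing:
kubectl get nodes -w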