About Me

I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. I have over 13 years of experience in the IT industry, with expertise in data management, Azure Cloud, data-center migration, infrastructure architecture planning, virtualization, and automation. Contact me if you are looking for guidance on automating your infrastructure provisioning with Terraform. I write mainly to keep a searchable record of my own experiences, but I hope it helps others along the way. Thanks.

Validation of network acls failure: SubnetsHaveNoServiceEndpointsConfigured

When executing the command:

az storage account network-rule add -g RGP-USE-AKS-DV  --account-name stouserakdv --vnet-name VNT-USE-AKS-DEV --subnet SUB-USE-AKS-DEV

I got the error below:

 (NetworkAclsValidationFailure) Validation of network acls failure: SubnetsHaveNoServiceEndpointsConfigured:Subnets sub-use-aks-dev of virtual network /subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-AKS-DV/providers/Microsoft.Network/virtualNetworks/VNT-USE-AKS-DEV do not have ServiceEndpoints for Microsoft.Storage resources configured. Add Microsoft.Storage to subnet's ServiceEndpoints collection before trying to ACL Microsoft.Storage resources to these subnets..



Explanation of the error:

This error message relates to the network rules (the networkAcls property) of an Azure storage account. These rules control which virtual network subnets and public IP ranges are allowed to reach the storage account.

The error message indicates that validation failed because the subnet "sub-use-aks-dev" of the virtual network "VNT-USE-AKS-DEV" does not have a Microsoft.Storage service endpoint configured.

Service endpoints provide secure, direct connectivity to Azure services from an Azure virtual network. Before a subnet can be added to a storage account's network rules, the Microsoft.Storage service endpoint must be enabled on that subnet, which is exactly what the error message asks for.

To resolve this error, add the Microsoft.Storage service endpoint to the subnet's serviceEndpoints collection, either in the Azure portal or through the Azure CLI. Once that is done, the storage account's network rules can be updated to allow traffic from the subnet.


How to resolve:

1. First, identify the virtual network and subnet that are causing the error. List all the subnets in the virtual network:

az network vnet subnet list --resource-group RGP-USE-AKS-DV  --vnet-name VNT-USE-AKS-DEV --output table

AddressPrefix    Name             PrivateEndpointNetworkPolicies    PrivateLinkServiceNetworkPolicies    ProvisioningState    ResourceGroup
---------------  ---------------  --------------------------------  -----------------------------------  -------------------  ---------------
10.20.0.0/24     SUB-USE-AKS-DEV  Disabled                          Enabled                              Succeeded            RGP-USE-AKS-DV


2. Once you have identified the subnet, add the Microsoft.Storage service endpoint to its serviceEndpoints collection:

az network vnet subnet update --name  SUB-USE-AKS-DEV --resource-group RGP-USE-AKS-DV --vnet-name VNT-USE-AKS-DEV --service-endpoints Microsoft.Storage


3. Then I re-ran the original command, and it completed successfully:

az storage account network-rule add -g RGP-USE-AKS-DV  --account-name stouserakdv --vnet-name VNT-USE-AKS-DEV --subnet SUB-USE-AKS-DEV


Troubleshooting a Failed Volume Mount in Kubernetes: Analyzing the 'MountVolume.MountDevice' Error


Issue:

While mounting an Azure file share into a Kubernetes pod over the CIFS protocol, the mount command failed with exit status 32.

The mount output shows a permission denied error, which suggests that the Kubernetes pod is not authorized to access the Azure file share.

Error:

Warning  FailedMount       31s (x7 over 63s)  kubelet            MountVolume.MountDevice failed for volume "pv-azure-file" : rpc error: code = Internal desc = volume(#azure-secret#sqlfileshare#pv-azure-file#rak) mount //stouserakdv.file.core.windows.net/sqlfileshare on /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/163555c3cf3f4f1287da205ef7c22fcf1aa64bf6ebec8c90be214d84135e6ded/globalmount failed with mount failed: exit status 32

Mounting command: mount

Mounting arguments: -t cifs -o gid=10001,file_mode=0777,dir_mode=0777,actimeo=30,mfsymlinks,<masked> //stouserakdv.file.core.windows.net/sqlfileshare /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/163555c3cf3f4f1287da205ef7c22fcf1aa64bf6ebec8c90be214d84135e6ded/globalmount

Output: mount error(13): Permission denied 


Problem: Pod not coming up

This error message indicates that the Kubernetes volume mount has failed due to a permission denied error while attempting to mount a file share from the Azure Storage account. There could be various reasons for this error, such as incorrect credentials, missing dependencies, or incorrect access settings.

Here are some things you can try to resolve the issue:

Check the credentials: Ensure that the storage account name, file share name, and access key are correct. You can validate the credentials by running the mount command manually on the node.

Check the access settings: Ensure that the file share has been configured to allow access from the Kubernetes node's IP address or subnet. You can check this by reviewing the networking settings in the Azure portal.

Check dependencies: Ensure that the necessary dependencies are installed on the Kubernetes node. For example, if the node is running Ubuntu, ensure that the cifs-utils package is installed.

Check permissions: Ensure that the Kubernetes node has permission to mount the file share. You can check this by reviewing the access control settings in the Azure portal.

Check the logs: Check the Kubernetes logs to see if there are any additional error messages that could provide further insight into the root cause of the issue.

By troubleshooting these issues, you can hopefully resolve the permission denied error and mount the Azure file share successfully.
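The credential check above can be sketched as a manual mount from a debug shell on the node (for example, inside `kubectl debug node/...` followed by `chroot /host`). This is a dry-run sketch: the run helper only prints each command instead of executing it, and the key value is a placeholder.

```shell
# Dry-run sketch of validating the CIFS credentials by hand from an AKS node.
# run() only prints the commands; remove it to execute them for real.
run() { echo "+ $*"; }

STORAGE_ACCOUNT=stouserakdv
SHARE=sqlfileshare
STORAGE_KEY='<current-account-key>'   # placeholder - must be the *current* key

run mkdir -p /mnt/cifs-test
run mount -t cifs "//${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE}" /mnt/cifs-test \
  -o "username=${STORAGE_ACCOUNT},password=${STORAGE_KEY},dir_mode=0777,file_mode=0777"
# mount error(13): Permission denied at this step usually means a wrong or rotated key.
```

If the manual mount also fails with error(13), the problem is the credentials themselves rather than anything Kubernetes-specific.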

Root cause: the secret for the file share was not correct. Somebody had rotated the access key of the storage account. I updated the storage account key in the azure-secret secret, restarted the pod, and the pod started running successfully.
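The fix can be sketched as follows. The names match this post, the key value is a placeholder, and the run helper only prints each command (a dry-run sketch, not a definitive procedure):

```shell
# Dry-run sketch: refresh azure-secret after a storage account key rotation.
# run() only prints the commands; remove it to execute them for real.
run() { echo "+ $*"; }

NAMESPACE=rak
ACCOUNT=stouserakdv
NEW_KEY='<new-account-key>'   # placeholder - e.g. from `az storage account keys list`

run kubectl delete secret azure-secret -n "$NAMESPACE"
run kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname="$ACCOUNT" \
  --from-literal=azurestorageaccountkey="$NEW_KEY" \
  -n "$NAMESPACE"
run kubectl rollout restart deployment mssql-deployment -n "$NAMESPACE"
```

Deleting and recreating is the simplest path; piping `kubectl create ... --dry-run=client -o yaml` into `kubectl apply -f -` avoids the brief window where the secret does not exist.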




Step-by-Step Guide to Create an AKS Cluster with Azure Active Directory and Role-Based Access Control & Managing Azure Networking and Storage using Azure CLI commands

SQL Server in Azure Kubernetes Service (AKS)


SQL Server on Kubernetes with persistent volumes is similar to a SQL Server failover cluster instance with shared disk in the sense that it provides high availability and resiliency against failure at the container or node level.

In a Kubernetes cluster with persistent volumes, the database is stored on a persistent volume that is decoupled from the pod. This means that if a pod fails or is terminated, the persistent volume remains intact, and a new pod can be created on the same or a different node and the persistent volume can be attached to the new pod. This ensures that the data is not lost and the database can continue to operate without interruption.

In case of node failure, Kubernetes detects the failure and reschedules the pod onto a healthy node. The persistent volume is then attached to the new pod on that node, and the database can continue to operate without interruption.

Overall, SQL Server on Kubernetes with persistent volumes provides a highly available and scalable solution for running SQL Server in a containerized environment.

~~~~~~~~~~1st Command (az login)~~~~~~~~~~~~

az login

~~~~~~~~~~ 2nd Command(Create a resource group)~~~~~~~~~~~~~~~

az group create --name RGP-USE-RAK-AKS --location eastus

~~~3rd Command (Create a VNET and Subnet for AKS Cluster Creation)~~~~~~~

az network vnet create --resource-group RGP-USE-RAK-AKS --name Vnt-USE-RAK-AKS --address-prefix 10.20.0.0/16 --subnet-name sub-use-rak-aks --subnet-prefix 10.20.0.0/24

~~~~~~~~~~~4th Command to create a basic AKS Cluster~~~~~~~~~~~~~

az aks create --resource-group RGP-USE-RAK-AKS --name AKS-USE-RAK-DEV --node-count 2 --generate-ssh-keys --network-plugin kubenet --network-policy calico --vnet-subnet-id /subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-RAK-AKS/providers/Microsoft.Network/virtualNetworks/Vnt-USE-RAK-AKS/subnets/sub-use-rak-aks

~~~~~~~~~~~5th Command~~~~~~~~~~~~~~~

az role assignment create --assignee-object-id 4deb7b66-2ab4-4d17-b3d5-4d63d195d8db --scope /subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-RAK-AKS/providers/Microsoft.Network/virtualNetworks/Vnt-USE-RAK-AKS/subnets/sub-use-rak-aks --role "Network Contributor" --assignee-principal-type ServicePrincipal

Explanation:-

  • This command creates a role assignment in the specified subscription and resource group, granting the role of "Network Contributor" to the specified Service Principal (identified by its Azure AD object ID) at the specified scope, which is a subnet in a virtual network.
  • A role assignment is a mapping between a security principal and a role definition. In this case, the security principal is a Service Principal identified by its Azure AD object ID, and the role definition is "Network Contributor". The "Network Contributor" role allows the principal to manage network resources, such as virtual networks and subnets, but not other types of resources like virtual machines or storage accounts.
  • The --scope option specifies the scope of the role assignment. In this case, the role is assigned at the subnet level, on /subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-RAK-AKS/providers/Microsoft.Network/virtualNetworks/Vnt-USE-RAK-AKS/subnets/sub-use-rak-aks.
  • The --assignee-principal-type option specifies the type of the security principal being assigned the role, which is a Service Principal in this case. The --assignee-object-id option specifies the object ID of the Service Principal that the role is being assigned to.
  • Note that to successfully create a role assignment, the account used to run the az role assignment create command must have the appropriate permissions to assign the specified role at the specified scope. In this case, the account must have the "Owner" role at the subscription level or have been granted the appropriate RBAC permissions on the resource group and virtual network to be able to assign the "Network Contributor" role to the specified Service Principal.
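Rather than hard-coding the assignee's object ID, it can be looked up first. A dry-run sketch (the run helper only prints the az commands; the query path assumes a cluster with a system-assigned managed identity, and the principal-ID placeholder is hypothetical):

```shell
# Dry-run sketch: look up the AKS cluster identity, then grant it
# Network Contributor on the subnet. run() only prints the commands.
run() { echo "+ $*"; }

RG=RGP-USE-RAK-AKS
CLUSTER=AKS-USE-RAK-DEV
SUBNET_ID=/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-RAK-AKS/providers/Microsoft.Network/virtualNetworks/Vnt-USE-RAK-AKS/subnets/sub-use-rak-aks

# Principal ID of the cluster's system-assigned managed identity:
run az aks show -g "$RG" -n "$CLUSTER" --query identity.principalId -o tsv
run az role assignment create --assignee-object-id '<principal-id-from-above>' \
  --assignee-principal-type ServicePrincipal \
  --role "Network Contributor" --scope "$SUBNET_ID"
```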

~~~~~~~~~~~~~~~~6th Command (enable Azure Active Directory integration)~~~~~~~~~~~~~

az aks update -g RGP-USE-RAK-AKS -n AKS-USE-RAK-DEV --enable-aad --aad-tenant-id 1c5558a6-XXX-4a35-8463-7592105355ff --aad-admin-group-object-ids 0725f885-f90f-4889-9d29-e86e80XXX782

Explanation:-

  • This command enables Azure Active Directory (AAD) integration for the AKS cluster named "AKS-USE-RAK-DEV" in the resource group "RGP-USE-RAK-AKS".
  • The --enable-aad option enables AAD integration for the cluster, and the --aad-tenant-id option specifies the ID of the AAD tenant that will be associated with the cluster.
  • The --aad-admin-group-object-ids option specifies the object ID of the AAD group that will be granted administrative access to the cluster; members of this group can administer the cluster.

~~~~~7th Command (enable Azure Role-Based Access Control)~~~~~

az aks update -g RGP-USE-RAK-AKS -n AKS-USE-RAK-DEV --enable-azure-rbac

Explanation: -

  • This command enables Azure Role-Based Access Control (Azure RBAC) for the AKS cluster named "AKS-USE-RAK-DEV" in the resource group "RGP-USE-RAK-AKS".
  • The --enable-azure-rbac option enables Azure RBAC for the cluster, which allows you to assign Azure AD users and groups to RBAC roles in the cluster. With Azure RBAC, you can control access to cluster resources and actions based on the RBAC roles assigned to users and groups.
  • Note that before enabling Azure RBAC, you need to enable AAD integration for the cluster, as Azure RBAC relies on AAD for user and group authentication and authorization.

~~~~~~~~~8th Command (get your AKS resource ID)~~~~~~~~~~~~~~~~~

AKS_ID=$(az aks show -g RGP-USE-RAK-AKS -n rakAksSubnet --query id -o tsv)

~~~~9th Command (assign Azure Kubernetes Service RBAC Admin to group 0725f885-f90f-4889-9d29-e86e808ce782)~~~~~

az role assignment create --role "Azure Kubernetes Service RBAC Admin" --assignee 0725f885-f90f-4889-9d29-e86e808ce782 --scope "/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourcegroups/RGP-USE-RAK-AKS/providers/Microsoft.ContainerService/managedClusters/rakAksSubnet"

Explanation:-

This command creates a role assignment in Azure that grants the "Azure Kubernetes Service RBAC Admin" role to the principal with the object ID "0725f885-f90f-4889-9d29-e86e808ce782" on the specified Azure Kubernetes Service (AKS) cluster. The scope of the role assignment is set to the resource ID of the AKS cluster, which is "/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourcegroups/RGP-USE-RAK-AKS/providers/Microsoft.ContainerService/managedClusters/rakAksSubnet".

To break this down:

az role assignment create: This command creates a new role assignment in Azure.

--role "Azure Kubernetes Service RBAC Admin": This option specifies the role to assign to the principal. In this case, the role is "Azure Kubernetes Service RBAC Admin", which is a built-in role in Azure that allows the principal to manage Kubernetes RBAC (Role-Based Access Control) on an AKS cluster.

--assignee 0725f885-f90f-4889-9d29-e86e808ce782: This option specifies the object ID of the principal to whom the role will be assigned. In this case, the object ID is "0725f885-f90f-4889-9d29-e86e808ce782".

--scope "/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourcegroups/RGP-USE-RAK-AKS/providers/Microsoft.ContainerService/managedClusters/rakAksSubnet": This option specifies the scope of the role assignment, which is the resource ID of the AKS cluster to which the role will be assigned. The resource ID includes the subscription ID, the resource group name, the resource provider (Microsoft.ContainerService), and the name of the AKS cluster (rakAksSubnet).

Overall, this command is granting the "Azure Kubernetes Service RBAC Admin" role to the specified principal on the specified AKS cluster, giving them the ability to manage RBAC for that cluster.


az role assignment create --role "Azure Kubernetes Service RBAC Cluster Admin" --assignee 0725f885-f90f-4889-9d29-e86e808ce782 --scope "/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourcegroups/RGP-USE-RAK-AKS/providers/Microsoft.ContainerService/managedClusters/rakAksSubnet"


Azure Kubernetes Service RBAC Cluster Admin


Deploying and Securing a Private Endpoint for Azure Storage with Azure Kubernetes Service (AKS)

az network vnet subnet create --name pip-subnet --resource-group RGP-USE-RAK-AKS --vnet-name Vnt-USE-RAK-AKS --address-prefix 10.20.2.0/24

az storage account create --name stouserakdv --resource-group RGP-USE-RAK-AKS --location eastus --sku Standard_LRS --kind StorageV2 --hns true --access-tier Hot --default-action Allow --allow-blob-public-access false 

az network private-endpoint create -g RGP-USE-RAK-AKS -n PEP-stouserakdv --vnet-name Vnt-USE-RAK-AKS --subnet pip-subnet --private-connection-resource-id "/subscriptions/69b34dfc-4b97-4259-93f3-037ed7eec25e/resourceGroups/RGP-USE-RAK-AKS/providers/Microsoft.Storage/storageAccounts/stouserakdv" --connection-name tttt -l eastus --group-ids file

az network private-endpoint show --name PEP-stouserakdv --resource-group RGP-USE-RAK-AKS

az network private-dns zone create --name file.core.windows.net --resource-group RGP-USE-RAK-AKS

az storage account update --name stouserakdv --resource-group RGP-USE-RAK-AKS --default-action Deny

az network vnet subnet update --resource-group RGP-USE-RAK-AKS --vnet-name Vnt-USE-RAK-AKS --name pip-subnet --service-endpoints Microsoft.Storage

az storage account network-rule add -g RGP-USE-RAK-AKS  --account-name stouserakdv --vnet-name  Vnt-USE-RAK-AKS --subnet pip-subnet


az network vnet subnet update --resource-group RGP-USE-RAK-AKS --vnet-name Vnt-USE-RAK-AKS --name sub-use-rak-aks --service-endpoints Microsoft.Storage

az storage account network-rule add -g RGP-USE-RAK-AKS  --account-name stouserakdv --vnet-name  Vnt-USE-RAK-AKS --subnet sub-use-rak-aks

az network private-dns record-set a add-record --resource-group RGP-USE-RAK-AKS --zone-name file.core.windows.net --ipv4-address "10.20.2.4" --record-set-name stouserakdv

az network private-dns link vnet create -g RGP-USE-RAK-AKS -n MyDNSLink -z file.core.windows.net -v Vnt-USE-RAK-AKS  -e false


kubectl debug node/aks-nodepool1-83490439-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0


root@aks-nodepool1-83490439-vmss000000:/# chroot /host


# nslookup stouserakdv.file.core.windows.net
Server:         168.63.129.16
Address:        168.63.129.16#53

Non-authoritative answer:
Name:   stouserakdv.file.core.windows.net
Address: 10.20.2.4


It returns the private IP address of the storage account, confirming that DNS resolution through the private endpoint is working.


C:\Users\kusha>

az storage share create --account-name stouserakdv --account-key VkXiopaXXXX7k9sEGaHurt4h0hQmwy5ykP6YQ1wjaUxE/ndZSkrQC6ZV7zzUs/znnHTrbHOzZZ7l+ASt9JLYBw== --name sqlfileshare


kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=stouserakdv --from-literal=azurestorageaccountkey=VkXiopaDg3x7k9sEGaHurt4h0hXXXwy5ykP6YQ1wjaUxE/ndZSkrQC6ZV7zzUs/znnHTrbHOzZZ7l+ASt9JLYBw== -n rak

Kubernetes YAML file for Provisioning an Azure File Storage Persistent Volume with ReadWriteMany Access Mode

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-azure-file
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName:  sqlfileshare
    readOnly: false

~~~~~~~~~~~~~~~~~~~~~~~~~~

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azure-file
  namespace: rak
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: pv-azure-file
  resources:
    requests:
      storage: 5Gi

---

  • This is a Kubernetes YAML file that specifies the configuration for a PersistentVolumeClaim (PVC) object.
  • apiVersion: v1 indicates that this file uses the Kubernetes core API version 1.
  • kind: PersistentVolumeClaim specifies the type of Kubernetes object being created.
  • metadata defines the name and namespace for the PVC. In this case, the PVC is named pvc-azure-file and is created in the rak namespace.
  • accessModes: specifies the access mode for the PVC, which is ReadWriteMany in this case. This means that the volume can be mounted as read-write by multiple pods.
  • storageClassName: specifies the storage class to use for dynamic provisioning. Since this is an existing PV, this is set to an empty string.
  • volumeName: specifies the name of the PV that this PVC will be bound to, which is pv-azure-file in this case.
  • resources: specifies the storage request for the PVC. In this case, the PVC is requesting 5Gi of storage. When the PVC is created, it will be bound to the existing PV with a capacity of 20Gi.
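Once both objects are applied, the binding can be verified. A small dry-run sketch (the run helper only prints the kubectl commands):

```shell
# Dry-run sketch: confirm the claim bound to the pre-created volume.
# run() only prints the commands; remove it to execute them for real.
run() { echo "+ $*"; }

run kubectl get pv pv-azure-file                 # STATUS should read Bound
run kubectl get pvc pvc-azure-file -n rak        # VOLUME should show pv-azure-file
run kubectl describe pvc pvc-azure-file -n rak   # events reveal binding problems, if any
```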


kubectl create secret generic mssql --from-literal=MSSQL_SA_PASSWORD="MyC0m9l&xP@ssw0rd" -n rak


apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: rak
spec:
  replicas: 1
  selector:
     matchLabels:
       app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        resources:
          requests:
            memory: "1G"
            cpu: "1000m"
          limits:
            memory: "1G"
            cpu: "1500m"
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: MSSQL_SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pvc-azure-file

---

apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
  namespace: rak
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer


Unable to connect to the server: getting credentials: exec: executable kubelogin not found

Issue:- Unable to connect to the server: getting credentials: exec: executable kubelogin not found

C:\Users\kusha>kubectl get deployments --all-namespaces=true

Unable to connect to the server: getting credentials: exec: executable kubelogin not found


It looks like you are trying to use a client-go credential plugin that is not installed.


To learn more about this feature, consult the documentation available at:

      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

Resolution: run az aks install-cli from a command prompt as local Administrator.


C:\Users\kusha>az aks install-cli

The detected architecture is 'amd64', which will be regarded as 'amd64' and the corresponding binary will be downloaded. If there is any problem, please download the appropriate binary by yourself.

Downloading client to "C:\Users\kusha\.azure-kubectl\kubectl.exe" from "https://storage.googleapis.com/kubernetes-release/release/v1.26.1/bin/windows/amd64/kubectl.exe"

The installation directory "C:\Users\kusha\.azure-kubectl" has been successfully appended to the user path, the configuration will only take effect in the new command sessions. Please re-open the command window.

Downloading client to "C:\Users\kusha\AppData\Local\Temp\tmp7oi643ok\kubelogin.zip" from "https://github.com/Azure/kubelogin/releases/download/v0.0.26/kubelogin.zip"

The installation directory "C:\Users\kusha\.azure-kubelogin" has been successfully appended to the user path, the configuration will only take effect in the new command sessions. Please re-open the command window.


C:\Users\kusha>


C:\Windows\System32>az logout


C:\Windows\System32>az login


C:\Windows\System32>az account set --subscription 69b34dfc-4b97-XXXX-93f3-037ed7eec25e


C:\Windows\System32>az aks get-credentials --resource-group rakResourceGroup --name myAKSCluster

Merged "myAKSCluster" as current context in C:\Users\kusha\.kube\config


C:\Windows\System32>kubectl get deployments --all-namespaces=true

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DGPGMEDE4 to authenticate.

NAMESPACE         NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-system     calico-kube-controllers   1/1     1            1           171m
calico-system     calico-typha              1/1     1            1           171m
kube-system       coredns                   2/2     2            2           173m
kube-system       coredns-autoscaler        1/1     1            1           173m
kube-system       konnectivity-agent        2/2     2            2           173m
kube-system       metrics-server            2/2     2            2           173m
tigera-operator   tigera-operator           1/1     1            1           173m


C:\Windows\System32>

Explanation:-

When you run az aks get-credentials, it uses the Azure CLI to retrieve the cluster credentials and then saves them to your local machine. This method may use a different authentication method that does not require kubelogin.


However, when you try to use kubelogin, it expects to be able to authenticate to the cluster using the Kubernetes API server. If kubelogin is not installed or not properly configured, you may see the error message mentioned.

To use kubelogin, you will need to ensure that it is properly installed and configured. You can check whether kubelogin is installed by running the command which kubelogin in your terminal. If kubelogin is not installed, you can download and install it by following the instructions in the documentation: https://github.com/Azure/kubelogin

Once kubelogin is properly installed, you may need to configure it to use the correct authentication method for your cluster. You can find more information on how to configure kubelogin in the documentation as well.
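A typical setup sequence, as a dry-run sketch (resource names are the ones used earlier in this post; the run helper only prints the commands; kubelogin convert-kubeconfig switches the kubeconfig to kubelogin's exec-based authentication):

```shell
# Dry-run sketch: install and wire up kubelogin for an AAD-enabled cluster.
# run() only prints the commands; remove it to execute them for real.
run() { echo "+ $*"; }

run az aks install-cli                          # installs kubectl and kubelogin
run az aks get-credentials -g RGP-USE-RAK-AKS -n AKS-USE-RAK-DEV
run kubelogin convert-kubeconfig -l azurecli    # reuse the Azure CLI login for kubectl
run kubectl get nodes
```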

Understanding the Role of Key Components in Kubernetes Architecture

 Question

When you use kubectl apply to submit a declarative configuration, the configuration is sent to the Kubernetes 1)_______, which is the primary control plane component of the Kubernetes cluster. 

The API server validates and stores the configuration in 2)____, the Kubernetes cluster's distributed key-value store. 

Other components of the cluster, such as the 3)______ and the 4)_____ , then read the configuration from etcd and use it to manage the state of the cluster.

In Kubernetes, the primary communication hub and central control point for the cluster is the Kubernetes API server. The API server is responsible for validating and processing API requests, and acts as the interface between the Kubernetes control plane and the rest of the cluster.

All other Kubernetes components, such as the kubelet, scheduler, and controller manager, communicate with the API server to get information about the desired state of the cluster, and to report the current state of the nodes, pods, and other resources in the cluster. The API server stores the state of the cluster and its resources in the etcd datastore, and maintains a watch on the state of the cluster to detect changes and update the relevant components as needed.

The 5)______ exposes a RESTful API that can be accessed by both internal and external clients, including the kubectl command-line tool and other automation and orchestration tools. Clients can use the API to create, update, and delete resources in the cluster, and to monitor the state of the resources.

Overall, the Kubernetes API server is a critical component of the cluster, responsible for managing the state of the cluster and enabling communication between the various components that make up the Kubernetes control plane and the nodes in the cluster.

Ans:- 1) API server 2) etcd 3) Scheduler 4) Controller Manager 5) API server

Question :- In Kubernetes, 1)_____ are the basic building blocks that represent the state of the cluster. They define the desired state of the resources that the cluster should manage, such as pods, services, replication controllers, and more. API objects are defined using YAML or JSON manifests that specify the object's properties and their desired values.

Each API object has a specific kind, such as Pod, Service, Deployment, and so on, and is represented by a unique name within the cluster. The Kubernetes API server is responsible for managing and storing the state of these objects and ensuring that the actual state of the cluster matches the desired state specified by the API objects.

API objects can be created, updated, and deleted using the kubectl command-line tool or by making API requests to the Kubernetes API server. The Kubernetes API also provides programmatic access to the state of the cluster, allowing developers to build custom tools and applications that interact with the cluster's resources.

Ans: 1.API objects 

Question :- In Kubernetes, the responsibility of managing the state of a Pod and its associated containers is delegated to the Kubernetes 1)______ running on the node where the Pod is scheduled. The kubelet is responsible for starting, stopping, and monitoring the containers that belong to the Pod, and for reporting the state of the Pod and its containers back to the Kubernetes control plane.

The kubelet is informed of the desired state of the Pod through the Kubernetes API server, which sends a Pod specification to the kubelet that describes the desired state of the Pod, including which containers should be running, how they should be configured, and any other requirements.

The kubelet then takes actions to ensure that the Pod's actual state matches the desired state, such as starting or stopping containers as needed.


Additionally, the kubelet is responsible for monitoring the health of the containers in the Pod and reporting their status back to the Kubernetes control plane. If a container fails or becomes unresponsive, the kubelet can take action to restart the container or the entire Pod, depending on the configuration.


Overall, the kubelet is a critical component in the Kubernetes architecture, responsible for ensuring that the containers running in a Pod are healthy and running as expected.

Ans: kubelet

Question : In Kubernetes, the 1)_______ is a component of the control plane that is responsible for running controllers, which are background processes that watch the state of the cluster and take action to bring it closer to the desired state.

The purpose of the Controller Manager is to ensure that the desired state of the cluster is maintained, by constantly monitoring the state of the resources in the cluster and taking actions to reconcile any differences between the actual state and the desired state. The Controller Manager runs several built-in controllers, including the Replication Controller, ReplicaSet Controller, Deployment Controller, StatefulSet Controller, and DaemonSet Controller.

The Replication Controller, for example, is responsible for ensuring that the specified number of replicas of a pod is running at all times. If a pod fails, the Replication Controller will create a new replica to replace it. The Deployment Controller is responsible for managing the rollout of new versions of an application, ensuring that a specified number of replicas of the new version are running and that the rollout is performed in a controlled, gradual manner.

The Controller Manager also allows custom controllers to be developed and deployed to the cluster, allowing for customized automation of a wide range of tasks.

Overall, the Controller Manager plays a critical role in maintaining the desired state of the cluster, automating the management of resources, and ensuring the reliable and efficient operation of Kubernetes.

Ans: Controller Manager

Question: The default port number for the Kubernetes API Server is ____. The Kubernetes API Server uses ____ as the transport protocol.

Answer : 1) 6443 2) TCP

Question:-

A 1)______ manifest is used in Kubernetes to define and manage a set of pods that are created and managed directly by the kubelet on a specific node, rather than by the Kubernetes API server. These pods are typically used for system daemons and other critical infrastructure components that need to be run on every node in a Kubernetes cluster. The manifest is stored in a file on the node's local filesystem and can be managed using standard configuration management tools.

2)_____ Manifests are often used for bootstrapping pods on a cluster before other Kubernetes services, such as the API server or controller manager, are available. Since static pods are managed by the kubelet running on the node, they can be started as soon as the node is up and running, without waiting for other cluster components to become available. This can be particularly useful for critical system components that need to be up and running as soon as possible, such as network plugins or cluster monitoring agents.

Answer: 1) static pod 2) Static Pod


Docker Port Mapping: Avoiding Conflicts When Running Multiple Containers

Question:-

Would the following two commands create a port conflict error with each other?

       docker container run -p 80:80 -d nginx

        docker container run -p 8080:80 -d nginx

Ans:-

No, the two commands will not create a port conflict. Docker maps host ports to container ports, and multiple containers can publish the same container port as long as each one uses a different host port; a conflict only occurs when two containers claim the same host port.

In this example, the first command maps the host's port 80 to the container's port 80, and the second maps the host's port 8080 to the container's port 80. The host ports differ, so there is no conflict.

You can run multiple containers that map the same container port to different host ports, and they will not interfere with each other. This allows you to run multiple instances of the same service on different host ports, which can be useful for testing, development, and other purposes.


Question:-

I ran 'docker container run -p 80:80 nginx' and my command line is gone and everything looks frozen. Why?

Ans:-

It's likely that the container is running in the foreground and has taken over the terminal. By default, docker container run starts the container in the foreground and attaches your terminal to the container's output, so the terminal appears "frozen" while the process (here, nginx) continues to run.

To regain control of the terminal, you can use the Ctrl + C key combination to stop the container. If the container is still running in the background, you can use the docker container stop command to stop it, followed by the container ID or name.

--->    docker container stop <container_id_or_name>

If you want to run the container in the background, you can use the -d or --detach option when starting the container. This will run the container in the background and return control of the terminal to you.

  ---> docker container run -d -p 80:80 nginx