About Me

My photo
I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. With over 19 years of experience in the IT industry, I bring expertise in data management, Azure Cloud, data center migration, infrastructure architecture planning, virtualization, and automation. I have a deep passion for driving innovation through infrastructure automation, particularly using Terraform for efficient provisioning. If you're looking for guidance on automating your infrastructure, or have questions about Azure, SQL Server, or cloud migration, feel free to reach out. I often write to capture my own experiences and insights for future reference, but I hope that sharing them through my blog will help others on their journey as well. Thank you for reading!

Multi-container pod with an emptyDir volume vs. one pod with multiple PVCs attached.

 kushagrarakesh/multicontainers-empdir.yaml at main · kushagrarakesh/kushagrarakesh (github.com)

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: multicontainer-empdir
spec:
  volumes:
  - name: var-logs
    emptyDir: {}
  containers:
  - image: busybox
    name: busybox1
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm        
    resources: {}
  - image: alpine:latest
    name: alpine
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm        
    resources: {}
  - image: nginx:latest
    name: nginx
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm         
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}


Use case:


PS C:\Users\kusha\chap5> kubectl exec  -it  multicontainer-empdir -c nginx   -- sh 

W1028 11:23:05.759993    6300 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.

To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

# cd /usr/share/nginx/htm

# pwd

/usr/share/nginx/htm

#
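Both containers mount the same emptyDir, so a file written by one container is immediately visible to the others. The idea can be sketched locally without a cluster (a rough analogy, not kubectl; the temp directory stands in for the emptyDir volume):

```shell
# Local sketch of emptyDir semantics: one scratch directory mounted into
# every container of the pod. Two commands stand in for two containers.
shared=$(mktemp -d)                                # stands in for the emptyDir volume
echo "hello from busybox1" > "$shared/index.html"  # "container busybox1" writes
cat "$shared/index.html"                           # "container nginx" reads the same file
```

In the real pod above, the equivalent check is to write a file under /usr/share/nginx/htm with kubectl exec -c busybox1 and read it back with kubectl exec -c nginx.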

This manifest sets up a deployment of a SQL Server instance in a Kubernetes cluster with persistent storage on Azure Disk.

The manifest defines a StorageClass named azure-disk, which provisions Azure disks for the Kubernetes cluster. Two PersistentVolumeClaims (PVCs) are defined, one for the mssql data folder and another for the mssql log folder.

Each PVC specifies the azure-disk StorageClass and requests a specific amount of storage.

The Deployment definition includes two volume mounts, one for the mssql data folder and one for the mssql log folder. The mssqldb volume mount is mapped to the mssql-data PVC and the mssqllog volume mount is mapped to the mssql-log PVC. The SQL Server container in the deployment will have access to both of these volume mounts.

A Service is defined to expose the SQL Server deployment on port 1433 using a LoadBalancer type service. Finally, a Secret named mssql is defined that contains the SQL Server SA password encoded in base64.

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
     name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-log
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
     matchLabels:
       app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        resources:
          requests:
            memory: "2G"
            cpu: "2000m"
          limits:
            memory: "2G"
            cpu: "2000m"
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: MSSQL_SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssqldata
        - name: mssqllog
          mountPath: /var/opt/mssqllog          
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
      - name: mssqllog
        persistentVolumeClaim:
          claimName: mssql-log          

---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: v1
data:
  MSSQL_SA_PASSWORD: TXlDMG05bCZ4UEBzc3cwcmQ=
kind: Secret
metadata:
  creationTimestamp: null
  name: mssql
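The MSSQL_SA_PASSWORD value in the Secret is only base64-encoded, not encrypted. Assuming a POSIX shell with the base64 utility, you can reproduce and verify the value like this:

```shell
# Base64-encode the SA password; printf (rather than echo) avoids
# accidentally encoding a trailing newline into the Secret value.
encoded=$(printf '%s' 'MyC0m9l&xP@ssw0rd' | base64)
echo "$encoded"                      # TXlDMG05bCZ4UEBzc3cwcmQ= (matches the manifest above)
# Decode to verify it round-trips:
printf '%s' "$encoded" | base64 -d   # MyC0m9l&xP@ssw0rd
```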


Use case: verify the service is running and get its external IP address. Run the following command:

kubectl get services

C:\Users\kusha>kubectl get services

NAME                                             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE

mssql-deployment                                 LoadBalancer   10.0.130.195   20.85.151.114   1433:30264/TCP               66m


You can use the following applications to connect to the SQL Server instance.


  • SQL Server Management Studio (SSMS)
  • SQL Server Data Tools (SSDT)
  • Azure Data Studio
  • Connect with sqlcmd

To connect with sqlcmd, run the following command:

Windows Command Prompt

sqlcmd -S  20.85.151.114  -U sa -P "MyC0m9l&xP@ssw0rd"

Replace the following values:

20.85.151.114 with the external IP address of the mssql-deployment service

MyC0m9l&xP@ssw0rd with your complex password

How to create a multi-container pod in AKS

Multi-container pod example:

kushagrarakesh/multicontainerpodexample.yaml at main · kushagrarakesh/kushagrarakesh (github.com)

kubectl create ns chap5

kubectl apply -f multicontainerpodexample.yaml -n chap5


kubectl get pods -n chap5

kubectl describe pod multicontainer -n chap5


PS C:\Users\kusha\chap5> kubectl logs multicontainer  -n chap5

W1028 11:09:06.926695   20740 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.

To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

Defaulted container "busybox1" out of: busybox1, alpine, nginx

bin

dev

etc

home

proc

root

sys

tmp

usr

var

PS C:\Users\kusha\chap5> 


This means kubectl defaulted to showing the logs of container busybox1, out of busybox1, alpine, and nginx.

If you want to see the logs of a specific container, specify it with -c, like below:


PS C:\Users\kusha\chap5> kubectl logs multicontainer -c alpine   -n chap5

W1028 11:10:38.540677   18808 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.

To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

bin

dev

etc

home

lib

media

mnt

opt

proc

root

run

sbin

srv

sys

tmp

usr

var

PS C:\Users\kusha\chap5> 



Create an ingress controller in Azure Kubernetes Service (AKS)

Ingress Controller

An ingress controller is a piece of software that runs within a Kubernetes cluster and listens for incoming HTTP and HTTPS traffic. 

It is responsible for routing traffic from the external internet to the appropriate service within the cluster. 

Ingress controllers are implemented as pods within the cluster, and they use a set of rules defined in an ingress resource to determine how traffic should be routed. 

Ingress controllers are commonly used to expose services to the internet and to load balance traffic to multiple replicas of a service.

 Some examples of ingress controllers include NGINX, HAProxy, and Envoy.

To create an ingress controller in Azure Kubernetes Service (AKS), you will need to perform the following steps:


Deploy an AKS cluster: If you don't already have an AKS cluster, you will need to create one. You can do this using the Azure portal, Azure CLI, or Azure PowerShell.


Install the NGINX Ingress Controller: The NGINX Ingress Controller is an open source ingress controller that you can use to expose your AKS services to the internet. To install it, you will need to create a Kubernetes deployment that installs the NGINX Ingress Controller pods and associated resources on your AKS cluster.

Create an ingress resource: An ingress resource is a Kubernetes resource that defines how external traffic should be routed to the services in your AKS cluster. To create an ingress resource, you will need to define the rules for routing traffic to your services using YAML files and then apply them to your AKS cluster using the kubectl command line tool.

Expose your services: Once you have created an ingress resource, you can use it to expose your AKS services to the internet by creating an Azure Load Balancer and associating it with your ingress resource. This will allow external traffic to be routed to your services using the rules defined in your ingress resource.

1) Run in the Bash shell - the script below imports the required images from registry.k8s.io into your ACR.

Explanation:-

These Azure CLI commands import images from a source container registry (in this case, "registry.k8s.io") into a destination container registry (in this case, "mywpaaksacr").

The specific images being imported are: "ingress-nginx/controller" with tag "v1.4.0", "ingress-nginx/kube-webhook-certgen" with tag "v20220916-gd32f8c343", and "defaultbackend-amd64" with tag "1.5". 

These images are being imported using the az acr import command, which is used to import images from another container registry.

 The --name flag specifies the name of the destination container registry, and the --source and --image flags specify the source and destination images, respectively.


REGISTRY_NAME=mywpaaksacr

SOURCE_REGISTRY=registry.k8s.io

CONTROLLER_IMAGE=ingress-nginx/controller

CONTROLLER_TAG=v1.4.0

PATCH_IMAGE=ingress-nginx/kube-webhook-certgen

PATCH_TAG=v20220916-gd32f8c343

DEFAULTBACKEND_IMAGE=defaultbackend-amd64

DEFAULTBACKEND_TAG=1.5

az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG

az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG

az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG

---

2) The script below adds the ingress-nginx Helm repository locally and installs the NGINX ingress controller into the AKS cluster.

# Add the ingress-nginx repository

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Set variable for ACR location to use for pulling images

ACR_URL=mywpaaksacr.azurecr.io

# Use Helm to deploy an NGINX ingress controller

helm install nginx-ingress ingress-nginx/ingress-nginx \
    --version 4.3.0 \
    --namespace ingress-basic \
    --create-namespace \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.image.registry=$ACR_URL \
    --set controller.image.image=$CONTROLLER_IMAGE \
    --set controller.image.tag=$CONTROLLER_TAG \
    --set controller.image.digest="" \
    --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
    --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
    --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
    --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
    --set controller.admissionWebhooks.patch.image.digest="" \
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
    --set defaultBackend.image.registry=$ACR_URL \
    --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
    --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
    --set defaultBackend.image.digest=""

You will get the error below in case you have not granted the AcrPull permission and attached the ACR to the AKS cluster:

Events:

  Type     Reason     Age                  From               Message

  ----     ------     ----                 ----               -------

  Normal   Scheduled  10m                  default-scheduler  Successfully assigned ingress-basic/nginx-ingress-ingress-nginx-admission-create-6psn6 to aks-agentpool-36778052-vmss000000

  Normal   Pulling    9m3s (x4 over 10m)   kubelet            Pulling image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1"

  Warning  Failed     9m3s (x4 over 10m)   kubelet            Failed to pull image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": failed to resolve reference "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized

  Warning  Failed     9m3s (x4 over 10m)   kubelet            Error: ErrImagePull

  Warning  Failed     8m51s (x6 over 10m)  kubelet            Error: ImagePullBackOff

  Normal   BackOff    19s (x43 over 10m)   kubelet            Back-off pulling image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1"


To resolve this error:

1. Grant the AcrPull and Reader roles to the cluster identity:

CLUSTER_RESOURCE_ID=$(az aks show --name aks-use-spoke-dv  --resource-group RGP-USE-rakesh-DV  --query id --output tsv)

SP_OBJECT_ID=$(az resource show --id $CLUSTER_RESOURCE_ID --api-version 2022-11-01 --query identity.principalId --output tsv)

az role assignment create --assignee $SP_OBJECT_ID --role acrpull --scope /subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/RGP-USE-rakesh-DV/providers/Microsoft.ContainerRegistry/registries/acrwpaws2dv

  az role assignment create --role "reader" --assignee-object-id "fd224fd3-1fe8-49e8-b5a1-c43ebd83fa45" --description "Role assignment Azure K8s to ACR" --scope "/subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/mywpaaksacr" 

2. Attach the ACR to AKS:

  az aks update -n myakscluster -g myresourcegroup --attach-acr "/subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/mywpaaksacr"

Otherwise, on a successful run, you will get output like:

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls


If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:


  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubectl create namespace chap2

helm show values ingress-nginx/ingress-nginx

C:\Users\kusha>kubectl get services -n ingress-basic

W1022 14:19:44.492983   18804 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.

To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

NAME                                               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE

nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.138.30   20.102.0.61   80:32312/TCP,443:32657/TCP   29m

nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.245.59   <none>        443/TCP                      29m


You will get an external public IP address, since we are creating a public load balancer for the AKS cluster.

~~~~~~

To convert the external load balancer to an internal load balancer, create a file named internal-ingress.yaml with the following content:

controller:
  service:
    loadBalancerIP: 10.5.240.222
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: ingress-subnet


This is a configuration for a Kubernetes service that is using an Azure load balancer. The loadBalancerIP field specifies the IP address that will be assigned to the load balancer.

The annotations field contains additional configuration for the load balancer. The first annotation, service.beta.kubernetes.io/azure-load-balancer-internal: "true", specifies that the load balancer should be an internal load balancer which means that it can only be accessed from within the same virtual network as the Kubernetes cluster.

The second annotation, service.beta.kubernetes.io/azure-load-balancer-internal-subnet: ingress-subnet, specifies that the load balancer should be placed in the subnet named 'ingress-subnet' of the virtual network.

Together, this configuration creates a Kubernetes service backed by an internal Azure load balancer that is accessible only from within the virtual network, and places that load balancer in the specified subnet.

Then upgrade your Helm release:

helm upgrade -f internal-ingress.yaml nginx-ingress ingress-nginx/ingress-nginx --install -n ingress-basic

You will observe that the EXTERNAL-IP of the nginx-ingress controller's LoadBalancer service changes from public to private.

C:\Users\kusha>kubectl get services -n ingress-basic

W1022 15:28:03.195338   24860 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.

To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

NAME                                               TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE

nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.138.30   10.5.240.222   80:32312/TCP,443:32657/TCP   98m

nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.245.59   <none>         443/TCP                      98m


~~~~~~~~~~~~~~~~~~~~~

For testing, create deployment, service, and ingress manifests.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
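The rewrite-target: /$2 annotation tells NGINX to rewrite the request path to the second capture group of the matching path regex, so a request for /hello-world-one/css/site.css reaches the backend service as /css/site.css. A rough sed simulation of that mapping (illustration only; the real rewriting is done by the NGINX ingress controller):

```shell
# Simulate the ingress rewrite: path regex /hello-world-one(/|$)(.*)
# with rewrite-target /$2 keeps only the second capture group.
rewrite() { printf '%s\n' "$1" | sed -E 's#^/hello-world-one(/|$)(.*)#/\2#'; }
rewrite /hello-world-one/css/site.css   # -> /css/site.css
rewrite /hello-world-one/               # -> /
```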

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80

---

curl -L http://10.224.0.42

-----

$ curl -L -k http://10.224.0.42/hello-world-two

---

kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic

---

apt-get update && apt-get install -y curl

Create an init container, with an example.

Question:

Create a YAML manifest for a pod named complex-pod. The main application container named app should use the image nginx and expose the container port 80. Modify the YAML manifest so that the pod defines an init container named setup that uses the image busybox. The init container runs the command wget -O- google.com


Answer:

You can start by generating the YAML manifest in dry-run mode. The resulting manifest will set up the main application container:


  kubectl run complex-pod --image=nginx --port=80 --dry-run=client -o yaml >complex-pod.yaml

Then update the manifest accordingly:

1. Add the init container and change some of the generated default settings. The finalized manifest could look like below.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: complex-pod
  name: complex-pod
spec:
  initContainers:
  - name: setup
    image: busybox
    command: ['sh', '-c', 'wget -O- google.com']
  containers:
  - image: nginx
    name: complex-pod
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create a temporary pod that uses the busybox image to execute a wget command inside of the container.

Q:

Create a temporary pod that uses the busybox image to execute a wget command inside of the container.

The Wget command should access the endpoint exposed by the nginx container.

You should see the HTML response body rendered in the terminal.

A:

kubectl config get-contexts

 kubectl config delete-context 

Get a context:

az aks get-credentials --resource-group myresourcegroup --name myakscluster

C:\Users\kusha>kubectl config set-context myakscluster --namespace=chap1

Context "myakscluster" modified.


C:\Users\kusha>kubectl config get-contexts

CURRENT   NAME           CLUSTER        AUTHINFO                                   NAMESPACE

*         myakscluster   myakscluster   clusterUser_myresourcegroup_myakscluster   chap1

First, create a pod named nginx:

     kubectl run nginx --image=nginx  --port=80 

      kubectl get pod -o wide # get the IP, will be something like '10.5.240.18'


Create a temporary busybox pod:

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.5.240.18:80


Alternatively, you can also try a more advanced option:


Get IP of the nginx pod

NGINX_IP=$(kubectl get pod nginx -o jsonpath='{.status.podIP}')


Create a temporary busybox pod:

kubectl run busybox --image=busybox --env="NGINX_IP=$NGINX_IP" --rm -it --restart=Never -- sh -c 'wget -O- $NGINX_IP:80'
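The single quotes around 'wget -O- $NGINX_IP:80' matter: they stop your local shell from expanding $NGINX_IP, so the expansion happens inside the pod's shell, where --env defined the variable. The effect can be seen locally (a local illustration, not kubectl; the inner sh stands in for the pod's shell):

```shell
# Single quotes defer expansion to the inner shell, which only sees the
# variable if it is passed through the environment (as --env does).
NGINX_IP=10.5.240.18
sh -c 'echo "ip=$NGINX_IP"'                        # ip=   (inner sh never saw it)
NGINX_IP="$NGINX_IP" sh -c 'echo "ip=$NGINX_IP"'   # ip=10.5.240.18
```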


Or just in one line:


kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubectl delete ns chap2

Unable to connect to the server: getting credentials: exec: executable kubelogin not found

Issue:

C:\Windows\System32>kubectl get all

Unable to connect to the server: getting credentials: exec: executable kubelogin not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:

      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

C:\Windows\System32>

To resolve this, install kubelogin (for example, by running az aks install-cli) and make sure it is available on your PATH.


Troubleshooting access issues with Azure AD

The steps described below bypass the normal Azure AD group authentication. Use them only in an emergency.

If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly.


To perform these steps, you need the Azure Kubernetes Service Cluster Admin built-in role.

Azure CLI

az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin