About Me

I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. I have over 13 years of experience in the IT industry, with expertise in data management, Azure Cloud, data-center migration, infrastructure architecture planning, virtualization, and automation. Contact me if you are looking for any sort of guidance on getting your infrastructure provisioning automated through Terraform. I write mainly to keep a searchable record of my own experiences for future reading on my own blog, but I hope it helps others along the way. Thanks.

liveness-probe in AKS


Question:- Define a new Pod named web-server with the image nginx in a YAML manifest.

Expose container port 80.


To define a new Pod named "web-server" with the image "nginx" in a Kubernetes manifest file, you can use the following YAML code:


apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx


To expose the container port 80, you can use the "ports" field in the container specification. For example:


apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    ports:
    - containerPort: 80

This will define a Pod named "web-server" with a single container based on the "nginx" image. 

The container will expose port 80, which can be accessed from outside the Pod.

In the context of a Kubernetes Pod, "exposing a container port" means making the port available for communication with external entities. When a container port is exposed, it can be accessed from outside the Pod through the Pod's IP address.

In the YAML code provided in the previous response, the container is exposing port 80. This means that the container is listening on port 80 and will accept incoming traffic on that port.

Exposing a container port can be useful when you want to make a service or application running in the container available to other entities, such as other pods or external clients. For example, if the container is running a web server, exposing port 80 will allow clients to access the web server over the network by connecting to the Pod's IP address on port 80.

It's worth noting that exposing a container port does not automatically make the port accessible from outside the cluster. To allow external entities to access the port, you may need to use additional networking resources, such as Services or Ingress resources, to expose the port to the external network.
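For instance, a minimal Service manifest for this Pod might look as below (a sketch; it assumes the Pod has been given a label such as app: web-server so that the selector can match it):

apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  type: ClusterIP
  selector:
    app: web-server # assumes the Pod carries this label
  ports:
  - port: 80
    targetPort: 80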

Note that this YAML code only defines the Pod and its container. To actually create the Pod and deploy it to a Kubernetes cluster, you will need to use the kubectl command-line tool or the Kubernetes API to submit the manifest file to the cluster.

For example, you can use the following command to create the Pod:

kubectl apply -f pod.yaml

Replace "pod.yaml" with the name of your manifest file. This will create the Pod and deploy it to the cluster.

Question:-

For the container, declare a startup probe of type httpGet. Verify that the root context endpoint can be called. Use the default configuration for the probe.

Answer:- 

To declare a startup probe for a container in a Kubernetes Pod, you can use the "startupProbe" field in the container specification. A startup probe tells Kubernetes whether the application inside a container has finished starting; it uses the same configuration format as a liveness probe, the closely related probe type that checks whether a running container is still healthy and should keep running.


To declare a startup probe of type "httpGet" that verifies that the root context endpoint can be called, you can use the following YAML code:

apiVersion: v1
kind: Pod
metadata:
  name: web-server-liveness
spec:
  containers:
  - name: web-server-liveness
    image: nginx
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        path: /
        port: 80

This will define a startup probe that sends an HTTP GET request to the root context endpoint (/) on port 80. If the request succeeds, the container is considered to have started successfully and is ready to accept traffic. If the request keeps failing, the container is considered unhealthy and is restarted.

By using the default configuration for the probe, you are using the default values for the probe's parameters. These default values include an initial delay of 0 seconds, a timeout of 1 second, and a period of 10 seconds between probes. These values can be customized by setting the "initialDelaySeconds", "timeoutSeconds", and "periodSeconds" fields in the probe specification.
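For reference, here is the same probe with those default values written out explicitly (a sketch; the behavior is identical to omitting these fields):

    startupProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 0 # default
      periodSeconds: 10 # default
      timeoutSeconds: 1 # default
      successThreshold: 1 # default
      failureThreshold: 3 # default

Another example, this time of a liveness probe that runs a command inside the container:-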

# YAML example
# liveness-pod-example.yaml
#
apiVersion: v1
kind: Pod
metadata:
  name: liveness-command-exec
  namespace: dec29
spec:
  containers:
  - name: liveness
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      exec:
        command:
        - cat
        - /usr/share/nginx/html/index.html
      initialDelaySeconds: 2 # Default 0
      periodSeconds: 2 # Default 10
      timeoutSeconds: 1 # Default 1
      successThreshold: 1 # Default 1
      failureThreshold: 3 # Default 3


The YAML file provided above is a configuration file for a Kubernetes pod. It specifies the properties and settings for the pod, including the container image to use, the container port to expose, and the liveness probe configuration.

The liveness probe is a Kubernetes feature that is used to determine whether a container is healthy and should continue running. It periodically executes a command in the container to check its status, and if the command returns an error, the container is restarted.

The liveness probe in this configuration file is defined using the livenessProbe field. It specifies the command to execute (cat /usr/share/nginx/html/index.html) and the various parameters for the probe, such as initialDelaySeconds, periodSeconds, timeoutSeconds, successThreshold, and failureThreshold. These parameters control the frequency and behavior of the probe.

Overall, this YAML file defines a Kubernetes pod with a liveness probe that periodically executes a command in the container to check its status and ensure that it is healthy.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - image: busybox
    name: app
    args:
    - /bin/sh
    - -c
    - 'while true; do touch /tmp/heartbeat.txt; sleep 5; done;'
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - 'test $(find /tmp/heartbeat.txt -mmin -1)'
      initialDelaySeconds: 5
      periodSeconds: 30


The YAML file you provided is a configuration file for a Kubernetes pod. It specifies the properties and settings for the pod, including the container image to use and the liveness probe configuration.

The liveness probe is a Kubernetes feature that is used to determine whether a container is healthy and should continue running. It periodically executes a command in the container to check its status, and if the command returns an error, the container is restarted.


The liveness probe in this configuration file is defined using the livenessProbe field. It specifies the command to execute (sh -c 'test $(find /tmp/heartbeat.txt -mmin -1)') and the various parameters for the probe, such as initialDelaySeconds and periodSeconds. These parameters control the frequency and behavior of the probe.


The command specified in the liveness probe runs test against the output of find /tmp/heartbeat.txt -mmin -1, which checks for the presence of the /tmp/heartbeat.txt file in the container's filesystem. If the file is present and was modified within the last minute, the command returns a success status. Otherwise, it returns an error status. (The check is wrapped in sh -c because exec probes do not run through a shell by default.)

The container in this pod is configured to run the while true; do touch /tmp/heartbeat.txt; sleep 5; done; command, which will create the /tmp/heartbeat.txt file and update its modification time every 5 seconds. 

This will ensure that the liveness probe always finds the /tmp/heartbeat.txt file and returns a success status, indicating that the container is healthy.

Overall, this YAML file defines a Kubernetes pod with a liveness probe that is configured to periodically execute a command in the container to check for the presence of the /tmp/heartbeat.txt file and ensure that the container is healthy.
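To see the probe in action, you can apply the manifest and watch the pod (suggested checks, assuming the manifest is saved as liveness-pod.yaml):

kubectl apply -f liveness-pod.yaml
kubectl get pod liveness-pod -w
kubectl describe pod liveness-pod # probe failures, if any, appear under Events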


Installing Chocolatey and other required software for Azure DevOps purposes

Chocolatey is a package manager for Windows that allows you to install and manage software packages from the command line. To install Chocolatey on a Windows 10 machine, you can follow these steps:

Open a new command prompt window and run the following command. The commands that follow will then execute one by one, and the software listed below will be installed.


@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

Example:-

C:\Windows\system32>@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
'C:\Users\rakeshkumar\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles first prior to using choco.
Ensuring Chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder

C:\Windows\system32>


The command provided above is a Windows PowerShell script that installs the Chocolatey package manager and adds it to the system PATH.


Chocolatey is a package manager for Windows that allows you to easily install, update, and manage software packages on your system. It is similar to package managers like apt or yum that are commonly used on Linux systems.


The script begins by calling the PowerShell executable with the -NoProfile, -InputFormat None, and -ExecutionPolicy Bypass options, which allow the script to run without loading a user profile and with unrestricted execution policies.


Next, the script uses the iex command to execute the output of the DownloadString method of the System.Net.WebClient object. This method downloads the install.ps1 script from the Chocolatey website and executes it, which installs the Chocolatey package manager on the system.


Finally, the script sets the system PATH environment variable to include the %ALLUSERSPROFILE%\chocolatey\bin directory, which allows you to use the Chocolatey command-line tools from any location on the system.


Overall, this script installs the Chocolatey package manager and sets it up for use on your system.

choco install azure-cli --yes
choco install kubernetes-helm --yes
choco install kubernetes-cli --yes
choco install notepadplusplus --yes
choco install vscode --yes
choco install 7zip --yes
choco install git --yes
choco install terraform --yes

Verification:-

Once the installation is complete, you can verify that Chocolatey and the packages are installed and working correctly by running the following commands:

choco --version
choco list -l azure-cli
choco list -l kubectl
choco list -l kubernetes-helm
choco list -l notepadplusplus
choco list -l vscode
choco list -l terraform
choco list -l git
choco list -l 7zip

Multi-container pod with a volume of emptyDir type vs. one pod with multiple PVCs attached.

 kushagrarakesh/multicontainers-empdir.yaml at main · kushagrarakesh/kushagrarakesh (github.com)

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: multicontainer-empdir
spec:
  volumes:
  - name: var-logs
    emptyDir: {}
  containers:
  - image: busybox
    name: busybox1
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm
    resources: {}
  - image: alpine:latest
    name: alpine
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm
    resources: {}
  - image: nginx:latest
    name: nginx
    args:
    - /bin/sh
    - -c
    - ls; sleep 3600
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/htm
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}


Use case:-


PS C:\Users\kusha\chap5> kubectl exec -it multicontainer-empdir -c nginx -- sh
W1028 11:23:05.759993    6300 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
# cd /usr/share/nginx/htm
# pwd
/usr/share/nginx/htm
#
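To confirm that the emptyDir volume is really shared, you can write a file from one container and read it back from another (a suggested check; the pod and container names match the manifest above):

kubectl exec multicontainer-empdir -c busybox1 -- sh -c 'echo hello > /usr/share/nginx/htm/test.txt'
kubectl exec multicontainer-empdir -c alpine -- cat /usr/share/nginx/htm/test.txt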

This script is setting up a deployment for a SQL Server instance in a Kubernetes cluster with persistent storage using Azure Disk.

The script defines a StorageClass named azure-disk which will provision Azure disks for the Kubernetes cluster. Two PersistentVolumeClaims (PVCs) are defined, one for the mssql data folder and another for the mssql log folder. Each PVC specifies that it will use the azure-disk StorageClass and requests a specific amount of storage.

The Deployment definition includes two volume mounts, one for the mssql data folder and one for the mssql log folder. The mssqldb volume mount is mapped to the mssql-data PVC and the mssqllog volume mount is mapped to the mssql-log PVC. The SQL Server container in the deployment will have access to both of these volume mounts.

A Service is defined to expose the SQL Server deployment on port 1433 using a LoadBalancer type service. Finally, a Secret named mssql is defined that contains the SQL Server SA password encoded in base64.

---

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
     name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-log
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
     matchLabels:
       app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        resources:
          requests:
            memory: "2G"
            cpu: "2000m"
          limits:
            memory: "2G"
            cpu: "2000m"
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: MSSQL_SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssqldata
        - name: mssqllog
          mountPath: /var/opt/mssqllog          
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
      - name: mssqllog
        persistentVolumeClaim:
          claimName: mssql-log          

---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: v1
data:
  MSSQL_SA_PASSWORD: TXlDMG05bCZ4UEBzc3cwcmQ=
kind: Secret
metadata:
  creationTimestamp: null
  name: mssql
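The MSSQL_SA_PASSWORD value in the Secret is the base64 encoding of the SA password. A convenient way to generate an equivalent Secret manifest (a suggested alternative) is:

kubectl create secret generic mssql --from-literal=MSSQL_SA_PASSWORD='MyC0m9l&xP@ssw0rd' --dry-run=client -o yaml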


Verify the services are running.

Use case:- get the service IP address. Run the following command:

kubectl get services

C:\Users\kusha>kubectl get services
NAME                                             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
mssql-deployment                                 LoadBalancer   10.0.130.195   20.85.151.114   1433:30264/TCP               66m


You can use the following applications to connect to the SQL Server instance.


  • SQL Server Management Studio (SSMS)
  • SQL Server Data Tools (SSDT)
  • Azure Data Studio
  • Connect with sqlcmd

To connect with sqlcmd, run the following command:

Windows Command Prompt

sqlcmd -S  20.85.151.114  -U sa -P "MyC0m9l&xP@ssw0rd"

Replace the following values:

20.85.151.114 with the external IP address of the mssql-deployment service

MyC0m9l&xP@ssw0rd with your complex password
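Once connected, a quick query confirms the instance is up (a suggested check):

sqlcmd -S 20.85.151.114 -U sa -P "MyC0m9l&xP@ssw0rd" -Q "SELECT @@VERSION"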

How to create a multi-container pod in AKS

Multi-container pod example:-

kushagrarakesh/multicontainerpodexample.yaml at main · kushagrarakesh/kushagrarakesh (github.com)

kubectl create ns chap5

kubectl apply -f multicontainerpodexample.yaml -n chap5


kubectl get pods -n chap5

kubectl describe pod multicontainer -n chap5


PS C:\Users\kusha\chap5> kubectl logs multicontainer -n chap5
W1028 11:09:06.926695   20740 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
Defaulted container "busybox1" out of: busybox1, alpine, nginx
bin
dev
etc
home
proc
root
sys
tmp
usr
var
PS C:\Users\kusha\chap5>


This means it is showing, by default, the log of container busybox1 out of: busybox1, alpine, nginx.

If you want to see the log of a specific container, specify it like below:-


PS C:\Users\kusha\chap5> kubectl logs multicontainer -c alpine -n chap5
W1028 11:10:38.540677   18808 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
bin
dev
etc
home
lib
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
PS C:\Users\kusha\chap5>



Create an ingress controller in Azure Kubernetes Service (AKS)

Ingress Controller

An ingress controller is a piece of software that runs within a Kubernetes cluster and listens for incoming HTTP and HTTPS traffic. It is responsible for routing traffic from the external internet to the appropriate service within the cluster. Ingress controllers are implemented as pods within the cluster, and they use a set of rules defined in an ingress resource to determine how traffic should be routed. Ingress controllers are commonly used to expose services to the internet and to load balance traffic to multiple replicas of a service. Some examples of ingress controllers include NGINX, HAProxy, and Envoy.

To create an ingress controller in Azure Kubernetes Service (AKS), you will need to perform the following steps:


Deploy an AKS cluster: If you don't already have an AKS cluster, you will need to create one. You can do this using the Azure portal, Azure CLI, or Azure PowerShell.


Install the NGINX Ingress Controller: The NGINX Ingress Controller is an open source ingress controller that you can use to expose your AKS services to the internet. To install it, you will need to create a Kubernetes deployment that installs the NGINX Ingress Controller pods and associated resources on your AKS cluster.

Create an ingress resource: An ingress resource is a Kubernetes resource that defines how external traffic should be routed to the services in your AKS cluster. To create an ingress resource, you will need to define the rules for routing traffic to your services using YAML files and then apply them to your AKS cluster using the kubectl command line tool.

Expose your services: Once you have created an ingress resource, you can use it to expose your AKS services to the internet by creating an Azure Load Balancer and associating it with your ingress resource. This will allow external traffic to be routed to your services using the rules defined in your ingress resource.

1) Run in the Bash shell - the script below will import the images from registry.k8s.io into your ACR

Explanation:-

This is a series of command lines in the Azure CLI (Command Line Interface) that are used to import images from a source container registry (in this case, "registry.k8s.io") to a destination container registry (in this case, "mywpaaksacr"). 

The specific images being imported are: "ingress-nginx/controller" with tag "v1.4.0", "ingress-nginx/kube-webhook-certgen" with tag "v20220916-gd32f8c343", and "defaultbackend-amd64" with tag "1.5". 

These images are being imported using the az acr import command, which is used to import images from another container registry.

 The --name flag specifies the name of the destination container registry, and the --source and --image flags specify the source and destination images, respectively.


REGISTRY_NAME=mywpaaksacr
SOURCE_REGISTRY=registry.k8s.io
CONTROLLER_IMAGE=ingress-nginx/controller
CONTROLLER_TAG=v1.4.0
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
PATCH_TAG=v20220916-gd32f8c343
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5

az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG

---

2) The script below will add the ingress-nginx repository locally and install the NGINX ingress controller on the AKS cluster

# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Set variable for ACR location to use for pulling images
ACR_URL=mywpaaksacr.azurecr.io

# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --version 4.3.0 \
    --namespace ingress-basic \
    --create-namespace \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.image.registry=$ACR_URL \
    --set controller.image.image=$CONTROLLER_IMAGE \
    --set controller.image.tag=$CONTROLLER_TAG \
    --set controller.image.digest="" \
    --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
    --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
    --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
    --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
    --set controller.admissionWebhooks.patch.image.digest="" \
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
    --set defaultBackend.image.registry=$ACR_URL \
    --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
    --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
    --set defaultBackend.image.digest=""

You will get the error below in case you have not granted the AcrPull permission and attached the ACR to AKS:

Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned ingress-basic/nginx-ingress-ingress-nginx-admission-create-6psn6 to aks-agentpool-36778052-vmss000000
  Normal   Pulling    9m3s (x4 over 10m)   kubelet            Pulling image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1"
  Warning  Failed     9m3s (x4 over 10m)   kubelet            Failed to pull image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": rpc error: code = Unknown desc = failed to pull and unpack image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": failed to resolve reference "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
  Warning  Failed     9m3s (x4 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     8m51s (x6 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    19s (x43 over 10m)   kubelet            Back-off pulling image "mywpaaksacr.azurecr.io/ingress-nginx/kube-webhook-certgen:v1.1.1"


In order to resolve this error, either:

1. Assign the AcrPull and Reader roles to the cluster identity:

CLUSTER_RESOURCE_ID=$(az aks show --name aks-use-spoke-dv --resource-group RGP-USE-rakesh-DV --query id --output tsv)
SP_OBJECT_ID=$(az resource show --id $CLUSTER_RESOURCE_ID --api-version 2022-11-01 --query identity.principalId --output tsv)
az role assignment create --assignee $SP_OBJECT_ID --role acrpull --scope /subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/RGP-USE-rakesh-DV/providers/Microsoft.ContainerRegistry/registries/acrwpaws2dv
az role assignment create --role "reader" --assignee-object-id "fd224fd3-1fe8-49e8-b5a1-c43ebd83fa45" --description "Role assignment Azure K8s to ACR" --scope "/subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/mywpaaksacr"

2. Attach the ACR to AKS:

az aks update -n myakscluster -g myresourcegroup --attach-acr "/subscriptions/feff46f9-dc97-49d2-8b37-1a3568022795/resourceGroups/myresourcegroup/providers/Microsoft.ContainerRegistry/registries/mywpaaksacr"
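To verify that the cluster can now pull from the registry, you can run the following check (a suggested step; resource names match the examples above):

az aks check-acr --resource-group myresourcegroup --name myakscluster --acr mywpaaksacr.azurecr.io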

Otherwise, on a successful run, you will get output like:-

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubectl create namespace chap2

helm show values ingress-nginx/ingress-nginx

C:\Users\kusha>kubectl get services -n ingress-basic
W1022 14:19:44.492983   18804 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
NAME                                               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.138.30   20.102.0.61   80:32312/TCP,443:32657/TCP   29m
nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.245.59   <none>        443/TCP                      29m


You will get an external public IP address, since we are creating a public load balancer inside the AKS cluster.

~~~~~~

To convert the external load balancer to an internal load balancer, create a file named internal-ingress.yaml with the following content:

controller:
  service:
    loadBalancerIP: 10.5.240.222
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: ingress-subnet


This is a configuration for a Kubernetes service that uses an Azure load balancer. The loadBalancerIP field specifies the IP address that will be assigned to the load balancer.

The annotations field contains additional configuration for the load balancer. The first annotation, service.beta.kubernetes.io/azure-load-balancer-internal: "true", specifies that the load balancer should be an internal load balancer, which means that it can only be accessed from within the same virtual network as the Kubernetes cluster.

The second annotation, service.beta.kubernetes.io/azure-load-balancer-internal-subnet: ingress-subnet, specifies the subnet that the load balancer should be assigned to. In this case, the load balancer will be assigned to the subnet named 'ingress-subnet' on the virtual network.

This configuration therefore creates a Kubernetes service that uses an Azure load balancer, is only accessible from within the virtual network, and is assigned to the specified subnet.

Then upgrade your nginx-ingress release:

helm upgrade -f internal-ingress.yaml nginx-ingress ingress-nginx/ingress-nginx --install -n ingress-basic

You will observe that your nginx-ingress controller service of type LoadBalancer changes from a public to a private IP address.

C:\Users\kusha>kubectl get services -n ingress-basic
W1022 15:28:03.195338   24860 azure.go:92] WARNING: the azure auth plugin is deprecated in v1.22+, unavailable in v1.26+; use https://github.com/Azure/kubelogin instead.
To learn more, consult https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
NAME                                               TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.138.30   10.5.240.222   80:32312/TCP,443:32657/TCP   98m
nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.245.59   <none>         443/TCP                      98m


~~~~~~~~~~~~~~~~~~~~~

For testing, create deployment, service, and ingress files.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---

curl -L http://10.224.0.42

-----

$ curl -L -k http://10.224.0.42/hello-world-two

---

kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic

---

apt-get update && apt-get install -y curl
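From inside this test pod you can then call the ingress IP, matching the curl examples above (a suggested check):

curl -L http://10.224.0.42
curl -L -k http://10.224.0.42/hello-world-two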

Create an init container with an example.

 Question: -

Create a YAML manifest for a pod named complex-pod. The main application container named app should use the image nginx and expose the container port 80. Modify the YAML manifest so that the pod defines an init container named setup that uses the image busybox. The init container runs the command wget -O- google.com


Answer: -

You can start by generating the YAML manifest in dry-run mode. The resulting manifest will set up the main application container:


  kubectl run complex-pod --image=nginx --port=80 --dry-run=client -o yaml >complex-pod.yaml

Then update the manifest accordingly: add the init container and change some of the default settings that have been generated. The finalized manifest could look as below.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: complex-pod
  name: complex-pod
spec:
  initContainers:
  - name: setup
    image: busybox
    command: ['sh', '-c', 'wget -O- google.com']
  containers:
  - image: nginx
    name: complex-pod
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
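To verify the init container ran, apply the manifest and inspect its log (suggested checks, assuming the manifest is saved as complex-pod.yaml):

kubectl apply -f complex-pod.yaml
kubectl logs complex-pod -c setup # shows the HTML fetched from google.com
kubectl get pod complex-pod # READY 1/1 once the init container completes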

Create a temporary pod that uses the busybox image to execute a wget command inside of the container.

Q:-

Create a temporary pod that uses the busybox image to execute a wget command inside of the container.

The Wget command should access the endpoint exposed by the nginx container.

You should see the HTML response body rendered in the terminal.

 Ans:- 

kubectl config get-contexts

kubectl config delete-context

Get a context:

az aks get-credentials --resource-group myresourcegroup --name myakscluster

C:\Users\kusha>kubectl config set-context myakscluster --namespace=chap1

Context "myakscluster" modified.


C:\Users\kusha>kubectl config get-contexts
CURRENT   NAME           CLUSTER        AUTHINFO                                   NAMESPACE
*         myakscluster   myakscluster   clusterUser_myresourcegroup_myakscluster   chap1

First, create a pod named nginx:

kubectl run nginx --image=nginx --port=80
kubectl get pod -o wide # get the IP, will be something like '10.5.240.18'

Create a temp busybox pod:

kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.5.240.18:80


Alternatively, you can also try a more advanced option:


Get the IP of the nginx pod:

NGINX_IP=$(kubectl get pod nginx -o jsonpath='{.status.podIP}')

Create a temp busybox pod:

kubectl run busybox --image=busybox --env="NGINX_IP=$NGINX_IP" --rm -it --restart=Never -- sh -c 'wget -O- $NGINX_IP:80'


Or just in one line:


kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubectl delete ns chap2

Unable to connect to the server: getting credentials: exec: executable kubelogin not found

 Issue:-

C:\Windows\System32>kubectl get all
Unable to connect to the server: getting credentials: exec: executable kubelogin not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
C:\Windows\System32>
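The usual fix is to install kubelogin. For example (suggested options; recent Azure CLI versions install both kubectl and kubelogin with az aks install-cli):

az aks install-cli

# or, with Chocolatey (package name is an assumption; confirm with choco search kubelogin):
choco install azure-kubelogin --yes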


Troubleshooting access issues with Azure AD

The steps described below are bypassing the normal Azure AD group authentication. Use them only in an emergency.

If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly.


To do these steps, you'll need to have access to the Azure Kubernetes Service Cluster Admin built-in role.

Azure CLI

az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin

Configure Azure CNI networking in Azure Kubernetes Service (AKS) & Create an ingress controller with a static public IP address in Azure Kubernetes Service (AKS)

By default, AKS clusters use kubenet, and a virtual network and subnet are created for you. With kubenet, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.

With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.

This article shows you how to use Azure CNI networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see Network concepts for Kubernetes and AKS.


An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.

This article shows you how to deploy the NGINX ingress controller in an Azure Kubernetes Service (AKS) cluster. The ingress controller is configured with a static public IP address. The cert-manager project is used to automatically generate and configure Let's Encrypt certificates. Finally, two applications are run in the AKS cluster, each of which is accessible over a single IP address.


Prerequisites:-

  • The cluster identity used by the AKS cluster must have at least Network Contributor permissions on the subnet within your virtual network (a sample role-assignment command follows this list). If you wish to define a custom role instead of using the built-in Network Contributor role, the following permissions are required:
    • Microsoft.Network/virtualNetworks/subnets/join/action
    • Microsoft.Network/virtualNetworks/subnets/read
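If you need to grant this permission yourself, a role assignment along these lines would do it (a sketch; <cluster-identity-object-id> is a placeholder you must substitute, and the network names match the script below):

SUBNET_ID=$(az network vnet subnet show -g myResourceGroup --vnet-name myVirtualNetwork -n nodesubnet --query id -o tsv)
az role assignment create --assignee <cluster-identity-object-id> --role "Network Contributor" --scope $SUBNET_ID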


# Update the extension to make sure you have the latest version installed
az extension update --name aks-preview
az feature register --namespace "Microsoft.ContainerService" --name "PodSubnetPreview"
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/PodSubnetPreview')].{Name:name,State:properties.state}"
az provider register --namespace Microsoft.ContainerService
~~~~~~~~~~~~~~~~~~~~ one-time setup within an Azure subscription ~~~~~~~~~~~~~~~~
resourceGroup="myResourceGroup"
vnet="myVirtualNetwork"
location="eastus"
clusterName="myAKSCluster"
subscription="XXXXX-dc97-49d2-XXXX-1XXXXXXX"
vnet="myVirtualNetwork"
# Create the resource group
az group create --name $resourceGroup --location $location
# Create a virtual network with two subnets
az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none

az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none

az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
# Create an AKS cluster
az aks create -n $clusterName -g $resourceGroup -l $location \
 --max-pods 250 \
 --node-count 2 \
 --network-plugin azure \
 --generate-ssh-keys    \
 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
 --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet

#Configure ACR integration for existing AKS clusters
MYACR=wpaContainerRegistry
resourceGroup="myResourceGroup"
# Run the following line to create an Azure Container Registry if you do not already have one
az acr create -n $MYACR -g $resourceGroup --sku standard
#Attach  ACR integration for existing AKS clusters
az aks update -n myAKSCluster -g myResourceGroup --attach-acr wpaContainerRegistry

Import the images used by the Helm chart into your ACR

REGISTRY_NAME=wpaContainerRegistry
SOURCE_REGISTRY=k8s.gcr.io
CONTROLLER_IMAGE=ingress-nginx/controller
CONTROLLER_TAG=v1.0.4
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
PATCH_TAG=v1.1.1
DEFAULTBACKEND_IMAGE=defaultbackend-amd64
DEFAULTBACKEND_TAG=1.5
CERT_MANAGER_REGISTRY=quay.io
CERT_MANAGER_TAG=v1.5.4
CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector

az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG

Next, create a public IP address with the static allocation method using the az network public-ip create command. The following example creates a public IP address named myAKSPublicIP in the AKS cluster resource group obtained in the previous step:
#Create a public IP Address
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv


#Create an ingress controller with a static public IP address in Azure Kubernetes Service (AKS)
ACR_URL="wpacontainerregistry.azurecr.io"
STATIC_IP="104.211.52.25"
DNS_LABEL="mywayorhighway"
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--version 4.0.13 \
--namespace ingress-basic --create-namespace \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \
--set controller.image.image=$CONTROLLER_IMAGE \
--set controller.image.tag=$CONTROLLER_TAG \
--set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
--set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set controller.admissionWebhooks.patch.image.digest="" \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \
--set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
--set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
--set defaultBackend.image.digest="" \
--set controller.service.loadBalancerIP=$STATIC_IP \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
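The hello-world-ingress manifest further below references cert-manager and a letsencrypt-staging ClusterIssuer, so cert-manager must be installed and the issuer created first. A sketch of those steps, reusing the cert-manager images imported above (the chart values mirror the controller install; verify them against the jetstack/cert-manager chart for your version):

# Add the Jetstack Helm repository and install cert-manager from the imported images
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace ingress-basic \
  --version $CERT_MANAGER_TAG \
  --set installCRDs=true \
  --set nodeSelector."kubernetes\.io/os"=linux \
  --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
  --set image.tag=$CERT_MANAGER_TAG \
  --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
  --set webhook.image.tag=$CERT_MANAGER_TAG \
  --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
  --set cainjector.image.tag=$CERT_MANAGER_TAG

Then create the staging issuer, for example in a file named cluster-issuer.yaml (the email address is a placeholder to replace):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@example.com # replace with your email address
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx

kubectl apply -f cluster-issuer.yaml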

~~~~~~~~~~~~~~~~~aks-helloworld-one.yaml~~~~~~~~~~~~~

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
~~~~~~~~~~~~~~~~~~~~~~~ End of the file aks-helloworld-one~~~~~~~~~~~~~~~~

$> kubectl apply -f aks-helloworld-one.yaml -n ingress-basic
~~~~~~~~~~~~~~~~~~~~~~~~~~begin  of the file aks-helloworld-two.yaml~~~~~~~~~~~
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End of the file aks-helloworld-two~~~~~~~~~~~~~~~~
$> kubectl apply -f aks-helloworld-two.yaml -n ingress-basic

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Start of the file aks-helloworld-three.yaml~~~~~~~~~~~~~~~~

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-three
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-three
  template:
    metadata:
      labels:
        app: aks-helloworld-three
    spec:
      containers:
      - name: aks-helloworld-three
        image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo for aspnet"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-three
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-three
~~~~~~~~~~~~~~~~~~~~end of the file aks-helloworld-three.yaml~~~~~~~~~~

$> kubectl apply -f aks-helloworld-three.yaml -n ingress-basic

~~~~~~~~~~~~~~~~~Start of the hello-world-ingress.yaml~~~~~~~~~~~~~~~~

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - mywayorhighway.eastus.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: mywayorhighway.eastus.cloudapp.azure.com
    http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /hello-world-three(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-three
            port:
              number: 80              
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80

~~~~~~ End of the hello-world-ingress~~~~~~~~~~~~~~

kubectl apply -f hello-world-ingress.yaml -n ingress-basic


Verify certificate object

Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see cert-manager certificates.

Cert-manager has likely automatically created a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the ingress-shim documentation.

To verify that the certificate was created successfully, use the kubectl describe certificate tls-secret --namespace ingress-basic command.

output:-


Owner References:
    API Version:           networking.k8s.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  hello-world-ingress
    UID:                   834c59a2-571a-4486-94fd-01b9a52ef132
  Resource Version:        129384
  UID:                     d363a50a-b23f-41f3-ab25-07b96de68598
Spec:
  Dns Names:
    mywayorhighway.eastus.cloudapp.azure.com
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       ClusterIssuer
    Name:       letsencrypt-staging
  Secret Name:  tls-secret
  Usages:
    digital signature
    key encipherment
Status:
  Conditions:
    Last Transition Time:  2022-02-06T16:34:52Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2022-05-07T15:34:50Z
  Not Before:              2022-02-06T15:34:51Z
  Renewal Time:            2022-04-07T15:34:50Z
  Revision:                1
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Issuing    69m   cert-manager  Issuing certificate as Secret does not exist
  Normal  Generated  69m   cert-manager  Stored new private key in temporary Secret resource "tls-secret-hqgnt"
  Normal  Requested  69m   cert-manager  Created new CertificateRequest resource "tls-secret-whkqg"
  Normal  Issuing    69m   cert-manager  The certificate has been successfully issued
udr@Azure:~$


URLs accessible:-

https://mywayorhighway.eastus.cloudapp.azure.com/hello-world-three

https://mywayorhighway.eastus.cloudapp.azure.com/hello-world-two

https://mywayorhighway.eastus.cloudapp.azure.com/hello-world-one

https://mywayorhighway.eastus.cloudapp.azure.com
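Since letsencrypt-staging issues untrusted test certificates, browsers will show a warning; a quick command-line check (suggested) is:

curl -kL https://mywayorhighway.eastus.cloudapp.azure.com/hello-world-one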