About Me

I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. I have over 13 years of experience in the IT industry, with expertise in data management, Azure Cloud, data-center migration, infrastructure architecture planning, virtualization, and automation. Contact me if you are looking for any sort of guidance on getting your infrastructure provisioning automated through Terraform. I write mainly to store my own experiences for future reference, but hopefully this blog can help others along the way. Thanks.

Generate an Azure Application Gateway self-signed certificate with a custom root CA



https://slproweb.com/products/Win32OpenSSL.html

Download the 32-bit installer, then follow this guide:
https://docs.microsoft.com/bs-latn-ba/azure/application-gateway/self-signed-certificates

At the section "Generate the certificate with the CSR and the key and sign it with the CA's root key",

 instead of

openssl x509 -req -in fabrikam.csr -CA public.crt -CAkey contoso.key -CAcreateserial -out fabrikam.crt -days 365 -sha256

use this:
openssl x509 -req -in fabrikam.csr -CA contoso.crt -CAkey contoso.key -CAcreateserial -out fabrikam.crt -days 365 -sha256

Then merge fabrikam.key and fabrikam.crt into fabrikam.pfx.

Reference:

https://www.ssl.com/how-to/create-a-pfx-p12-certificate-file-using-openssl/

Command:
openssl pkcs12 -export -out fabrikam.pfx -inkey fabrikam.key -in fabrikam.crt

and

openssl pkcs12 -export -out contoso.pfx -inkey contoso.key -in contoso.crt
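
You can inspect the resulting PFX files to confirm the key and certificate were bundled correctly (openssl will prompt for the export password):

openssl pkcs12 -info -in fabrikam.pfx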


Then continue with the rest of the guide:
https://docs.microsoft.com/bs-latn-ba/azure/application-gateway/self-signed-certificates
~~~~~ Root CA certificate rakeshca ~~~~~

openssl ecparam -out rakeshca.key -name prime256v1 -genkey

openssl req -new -sha256 -key rakeshca.key -out rakeshca.csr

openssl x509 -req -sha256 -days 365 -in rakeshca.csr -signkey rakeshca.key -out rakeshca.crt


~~~~~ Server certificate rakeshdevops.com, issued by rakeshca ~~~~~


openssl ecparam -out rakeshdevops.key -name prime256v1 -genkey

openssl req -new -sha256 -key rakeshdevops.key -out rakeshdevops.csr

openssl x509 -req -in rakeshdevops.csr -CA  rakeshca.crt -CAkey rakeshca.key -CAcreateserial -out rakeshdevops.crt -days 365 -sha256

openssl x509 -in rakeshdevops.crt -text -noout


Export:

openssl pkcs12 -export -out rakeshdevops.pfx -inkey rakeshdevops.key -in rakeshdevops.crt


~~~~~ Second server certificate punamdevops.com, issued by rakeshca ~~~~~


openssl ecparam -out punamdevops.key -name prime256v1 -genkey

openssl req -new -sha256 -key punamdevops.key -out punamdevops.csr

openssl x509 -req -in punamdevops.csr -CA  rakeshca.crt -CAkey rakeshca.key -CAcreateserial -out punamdevops.crt -days 365 -sha256

openssl x509 -in punamdevops.crt -text -noout


Export:

openssl pkcs12 -export -out punamdevops.pfx -inkey punamdevops.key -in punamdevops.crt




Export the root CA certificate to a PFX as well:

openssl pkcs12 -export -out rakeshca.pfx -inkey rakeshca.key -in rakeshca.crt


Finally, test the certificate that your server presents:

openssl s_client -connect localhost:443 -servername www.rakeshdevops.com -showcerts
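
You can also verify that each server certificate chains correctly to the root CA before uploading anything to the Application Gateway:

openssl verify -CAfile rakeshca.crt rakeshdevops.crt
openssl verify -CAfile rakeshca.crt punamdevops.crt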


How to Ensure Compliance with Azure Policy


Azure Policy service with a real-world example


Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. Azure Policy meets this need by evaluating your resources for non-compliance with assigned policies. For example, you can have a policy to allow only a certain SKU size of virtual machines in your environment. Once this policy is implemented, new and existing resources are evaluated for compliance. With the right type of policy, existing resources can be brought into compliance.

We will create real Azure resources, assign real auditable flags (in the form of Azure tags), and then create a policy to prove the audit state of the objects with Azure policy reporting.

Create Two Virtual Networks
1.    Create the first virtual network.
·         The name can be anything ("HubvNet1" in this example).
·         The primary address space should be 10.0.0.0/24.
·         The subnet address range should be 10.0.0.0/26.
2.    Create a second virtual network.
·         The name can be anything ("SpokeVnet1" in this example).
·         The primary address space should be 10.10.10.0/24.
·         The subnet address range should be 10.10.10.0/26.
Create a Tag for Each Virtual Network
1.    Add a tag to HubvNet1.
·         Name: Audit
·         Value: Yes
2.    Add a tag to SpokeVnet1.
·         Name: Audit
·         Value: No
Create a Policy
1.    Go to Policy > Compliance > Assign Policy.
2.    Narrow the scope to our resource group (for subscriptions and resource groups, scope is the only available option).
3.    On the BASICS tab, click the scope selector button.
4.    Search "Tag" in the available policy definitions list.
5.    Choose Require tag and its value.
6.    Set the Tag Name to Audit and the Tag Value to Yes.
7.    After 15–30 minutes, narrow the scope of the Compliance blade to the resource group; it should refresh and show the policy as 50% non-compliant, since only one of the two virtual networks carries Audit = Yes.
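
The same assignment can also be scripted with Azure PowerShell. A minimal sketch, assuming the AzureRM module and a resource group named PolicyDemoRG (substitute your own names):

# Scope the assignment to our resource group
$rg = Get-AzureRmResourceGroup -Name "PolicyDemoRG"

# Find the built-in "Require tag and its value" definition
$definition = Get-AzureRmPolicyDefinition | Where-Object { $_.Properties.displayName -eq "Require tag and its value" }

# Assign it at resource-group scope with our Audit = Yes parameters
New-AzureRmPolicyAssignment -Name "require-audit-tag" -Scope $rg.ResourceId -PolicyDefinition $definition -PolicyParameterObject @{ tagName = "Audit"; tagValue = "Yes" }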




KB2919355 Check Failed when installing SQL Server 2017 on Windows Server 2012 R2


On some Windows Server 2012 R2 machines, the setup validation check fails when you install SQL Server 2017 Developer/Enterprise edition.


Issue:





---------------------------
Rule Check Result
---------------------------
Rule "KB2919355 Installation" failed.

KB2919355 Check Failed. If you have installed KB2919355, please make sure you have restarted your machine. For more information, you can visit https://support.microsoft.com/kb/2919355/
---------------------------
OK 
---------------------------


Resolution:

To resolve this error, do the following:

First, download and install Windows8.1-KB2919442-x64 (KB2919442 is a prerequisite for KB2919355).

Then download and install Windows8.1-KB2919355-x64, and restart the machine once it completes.

Please note that the installation of Windows8.1-KB2919355-x64 will take some time.
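
You can confirm that both updates are present before re-running SQL Server setup (a quick check using the built-in Get-HotFix cmdlet):

Get-HotFix -Id KB2919442, KB2919355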


Thanks for reading.


Failed to save Transparent Data Encryption settings for SQL resource

Failed to save Transparent Data Encryption settings for SQL resource: myserver0102. Error message: The provided Key Vault URI 'https://mypersonalXXXXvault01.vault.azure.net/keys/XXXXXX/283d045477e04fdab5c0055be37c0eee' is not valid. Please ensure the key vault has been configured with soft-delete.


To solve this issue, go to Cloud Shell and execute the commands below, which enable soft-delete on the vault:




($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "mypersonalXXXXvault01").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true"

Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties


Once soft-delete is enabled, retry saving the TDE settings; the error will be resolved.
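
To confirm the change took effect, query the vault again; EnableSoftDelete should now return True (vault name as in the error above):

(Get-AzKeyVault -VaultName "mypersonalXXXXvault01").EnableSoftDelete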

Different Components of Application Gateway



Let's look at Application Gateway and its different capabilities.

Application Gateway offers layer 7 load-balancing capabilities for HTTP and HTTPS traffic. When you compare it with Azure Load Balancer, the load balancer offers layer 4 load balancing, whereas Application Gateway operates at layer 7. The load balancer can distribute many different types of traffic, whereas Application Gateway distributes only HTTP and HTTPS traffic. One other difference: an application gateway always resides within a virtual network, whereas with a load balancer you can choose whether it sits inside or outside a virtual network.


In terms of components of an application gateway:
1. Frontend IP configuration --> the IP addresses to which incoming traffic arrives.
2. Backend pool --> the pool of IP addresses to which traffic is destined.
3. Listeners --> listen for the traffic coming to a particular port: 80 for HTTP or 443 for HTTPS. Rules map listeners to backend pools, so incoming traffic is routed to a particular destination pool.
4. Health probe --> monitors the health of the backend pool machines.
5. HTTP settings --> define whether cookie-based session affinity is used, which port in the backend pool traffic is routed to, and similar options.
6. Web application firewall --> protects your web application from common web attacks.

So these are the components of application gateway.
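
To make the mapping concrete, here is a minimal Azure PowerShell sketch that builds one object per component and stitches them into a gateway. All names, the backend IPs, and the pre-created $subnet, $pip, $rgName, and $locName variables are illustrative assumptions:

# 1. Frontend IP configuration (plus the gateway's own subnet IP config)
$gwIpCfg = New-AzureRmApplicationGatewayIPConfiguration -Name "gwIpCfg" -Subnet $subnet
$feIp = New-AzureRmApplicationGatewayFrontendIPConfig -Name "feIp" -PublicIPAddress $pip
$fePort = New-AzureRmApplicationGatewayFrontendPort -Name "fePort80" -Port 80

# 2. Backend pool of destination IP addresses
$pool = New-AzureRmApplicationGatewayBackendAddressPool -Name "backendPool" -BackendIPAddresses 10.0.0.4, 10.0.0.5

# 5. HTTP settings: backend port and cookie-based session affinity
$httpSettings = New-AzureRmApplicationGatewayBackendHttpSettings -Name "httpSettings" -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 30

# 3. Listener on port 80, and a rule mapping it to the backend pool
$listener = New-AzureRmApplicationGatewayHttpListener -Name "listener80" -Protocol Http -FrontendIPConfiguration $feIp -FrontendPort $fePort
$rule = New-AzureRmApplicationGatewayRequestRoutingRule -Name "rule1" -RuleType Basic -HttpListener $listener -BackendAddressPool $pool -BackendHttpSettings $httpSettings

$sku = New-AzureRmApplicationGatewaySku -Name Standard_Small -Tier Standard -Capacity 2
New-AzureRmApplicationGateway -Name "demoAppGw" -ResourceGroupName $rgName -Location $locName -GatewayIPConfigurations $gwIpCfg -FrontendIPConfigurations $feIp -FrontendPorts $fePort -BackendAddressPools $pool -BackendHttpSettingsCollection $httpSettings -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku

(A health probe can be added the same way with New-AzureRmApplicationGatewayProbeConfig; if omitted, a default probe is used.)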

Let's go through some of the capabilities of Application Gateway.

Capabilities


  1. HTTP(S) load balancing -- load-balances HTTP or HTTPS traffic.
  2. Web application firewall -- protects your web application against common web attacks.
  3. Cookie-based session affinity -- routes all of a user's traffic to the same backend server for the duration of the session.
  4. SSL offload -- terminates SSL traffic at the application gateway level instead of at the backend.
  5. URL-based content routing -- routes traffic to different backend pools based on the URL path.
  6. Multi-site routing -- hosts multiple sites on a single public IP address; the gateway routes traffic to a particular backend pool based on the domain name.
  7. Health monitoring -- monitors the health of your backend virtual machines through a health probe.


So these are the different capabilities of application gateway.

Autoscaling public preview

In addition to the features described in this article, Application Gateway also offers a public preview of a new SKU (Standard_V2), which offers autoscaling and other critical performance enhancements.

Autoscaling - Application Gateway or WAF deployments under the autoscaling SKU can scale up or down based on changing traffic load patterns. Autoscaling also removes the requirement to choose a deployment size or instance count during provisioning.

Zone redundancy - An Application Gateway or WAF deployment can span multiple Availability Zones, removing the need to provision separate Application Gateway instances in each zone fronted by Traffic Manager.

Static VIP - The application gateway VIP now supports the static VIP type exclusively, ensuring that the VIP associated with the application gateway does not change even after a restart.

Faster deployment and update times compared to the generally available SKU.

5x better SSL offload performance compared to the generally available SKU.
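
As a sketch, the v2 autoscaling setup in Azure PowerShell replaces the fixed -Capacity with an autoscale configuration (assuming an AzureRM.Network version that ships the v2 preview cmdlets; the capacity value is illustrative):

$autoscaleConfig = New-AzureRmApplicationGatewayAutoscaleConfiguration -MinCapacity 2
$sku = New-AzureRmApplicationGatewaySku -Name Standard_v2 -Tier Standard_v2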

Demo
1.How to load balance HTTP traffic using Application Gateway.
https://docs.microsoft.com/en-us/azure/application-gateway/quick-create-portal
2.How to configure application gateway to achieve URL based content routing.
https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-create-url-route-portal
3.How to configure Application Gateway for multi-site routing
4. How to enable web application firewall on an Application Gateway and simulate an attack

to check whether your web application firewall is protecting your web application against common attacks such as cross-site scripting (XSS).

Scripts to configure Application gateway using Terraform <Coming soon>


Connect on Premise Network to Azure - Site to Site VPN Configuration using PowerShell

Login-AzureRmAccount


#create our base variables for our Resource Group
$rgName="RakAzureDC"
$locName="West Europe"
$saName="rakserverssa" #must be lower case
$vnetName="RakoNetAzure"

New-AzureRmResourceGroup -Name $rgName -Location $locName

 #Test-AzureName -Storage $saName

$saType="Standard_GRS"

New-AzureRmStorageAccount -Name $saName -ResourceGroupName $rgName -Type $saType -Location $locName

#Create Networking Components
#It's important to create one subnet named specifically GatewaySubnet. If you name it something else, our connection configuration will fail.
$Subnet=New-AzureRmVirtualNetworkSubnetConfig -Name Azure-Vnet-01 -AddressPrefix 10.10.10.0/27
$GatewaySubnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.10.10.32/29
New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $locName -AddressPrefix 10.10.10.0/24 -Subnet $Subnet,$GatewaySubnet -DnsServer 10.10.10.4,192.168.1.10

Get-AzureRmVirtualNetwork  -name $vnetName -ResourceGroupName $rgName | select subnets

$subnetIndex=0
$vnet=Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName

$nicName= "Internal"
$staticIP="10.10.10.4"

#add a public IP address via $pip so we can connect to it if we need to
$pip = New-AzureRmPublicIpAddress -Name $nicName -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $locName -SubnetId $vnet.Subnets[$subnetIndex].Id -PublicIpAddressId $pip.Id -PrivateIpAddress $staticIP





# don't know what VM sizes we have, so let's take a look
Get-AzureRmVMSize -Location $locName | Select Name

#name and size our Domain Controller
$vmName="AZURE-DC01"
$vmSize="Standard_A2"
$vm=New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize


$pubName="MicrosoftWindowsServer"
$offerName="WindowsServer"
$skuName="2012-R2-Datacenter"


$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vm=Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
$vm=Set-AzureRmVMSourceImage -VM $vm -PublisherName $pubName -Offer $offerName -Skus $skuName -Version "latest"
$vm=Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$diskName="OSDisk"
$storageAcc=Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name $saName
$osDiskUri=$storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"
$vm=Set-AzureRmVMOSDisk -VM $vm -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage
New-AzureRmVM -ResourceGroupName $rgName -Location $locName -VM $vm

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


#####################
#Provision Network




#add our local Network site
#Name Nickname for our on-premises network
#NewGatewayIPAddress is the IP address of your on-premises VPN
#AddressPrefix is your on-premises address space.


New-AzureRmLocalNetworkGateway -Name RakNetOnPremises -ResourceGroupName $rgName -Location $locName -GatewayIpAddress '122.167.33.81' -AddressPrefix '192.168.1.0/24'


#request a public IP address for the gateway

$gwpip= New-AzureRmPublicIpAddress -Name gwpip -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic

#create the gateway IP addressing configuration

$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$gwipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id

#create the gateway - may wait a while

New-AzureRmVirtualNetworkGateway -Name vnetgw1  -ResourceGroupName $rgName -Location $locName -IpConfigurations $gwipconfig -GatewayType Vpn -VpnType RouteBased

#https://azure.microsoft.com/en-us/documentation/articles/vpn-gateway-create-site-to-site-rm-powershell/#7-configure-your-vpn-device

#Get the public IP address for the next step of building our connection script for RRAS either via powershell or via the Portal

Get-AzureRmPublicIpAddress -Name gwpip -ResourceGroupName $rgName


#BUILD our RRAS Configuration

$gateway1 = Get-AzureRmVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName $rgName

$local = Get-AzureRmLocalNetworkGateway -Name RakNetOnPremises -ResourceGroupName $rgName

New-AzureRmVirtualNetworkGatewayConnection -Name RakoToAzureVPN -ResourceGroupName $rgName -Location $locName -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123'

Now you need to configure the RRAS server.

After configuring the RRAS server, try to connect.
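
You can watch the tunnel come up from the Azure side; once RRAS has dialed in, ConnectionStatus should change from Connecting to Connected:

Get-AzureRmVirtualNetworkGatewayConnection -Name RakoToAzureVPN -ResourceGroupName $rgName | Select-Object Name, ConnectionStatus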

Stackdriver Multiple-Choice Questions in GCP

1. Which of these is NOT a Stackdriver product?
A. Performance
B. Error Reporting
C. Trace
D. Debug

Ans: A
Explanation :-

 https://cloud.google.com/stackdriver/docs/

~~~~~~~~~~~
2. What platforms can Stackdriver natively monitor? Choose all that apply.
A. GCP
B. AWS
C. Azure
D. Openstack

Ans : A and B

Explanation :-
https://cloud.google.com/monitoring/docs/

~~~~~~~~~~~~~~
3.What IAM roles are necessary to view Admin Activity logs? Choose all that apply.
A. Logging/Private Logs Viewer
B. Project Owner
C. Project Viewer
D. Logging/Logs Viewer

Ans:- A and D

Explanation: https://cloud.google.com/iam/docs/roles-audit-logging


~~~~~~~~~~~~~
4. Logs can be exported to which services? Choose all that apply.
A. Pub/Sub
B. Cloud Storage
C. BigQuery
D. Cloud SQL

Ans: A, B, and C
https://cloud.google.com/logging/docs/export/
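
For example, a sink that exports to a Cloud Storage bucket can be created with gcloud; the sink name, bucket, and filter below are made-up values for illustration:

gcloud logging sinks create my-audit-sink storage.googleapis.com/my-audit-log-bucket --log-filter='resource.type="gce_instance"'
~~~~~~~~~~~~~~~~~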


5. What is the retention period, in days, for Data Access audit logs?
A. 7
B. 400
C. 30
D. 100

Ans : C

Explanation : https://cloud.google.com/logging/docs/audit/


~~~~~~~~~~~~~~~~~~~~~
6.Admin activity logs are retained for ___ days.
A. 7
B. 30
C. 100
D. 400

Ans : D

Explanation:- https://cloud.google.com/logging/docs/audit/

~~~~~~~~~~~~~~~~
7.If external auditors need to be able to access your admin activity logs once a year for compliance, what is the best method of preserving and sharing that log data?

A. Export logs to Cloud Storage bucket, and email a list of the logs once per year.
B. Create GCP accounts for the auditors and grant the Project Viewer role to view logs in Stackdriver Logging
C. Export logs to a Cloud Storage bucket for long-term retention and grant auditor accounts the Storage Object Viewer role to the bucket.
D. Share a long-term account with them so they can access the records.

Ans :- C

For details on the Storage Object Viewer and Project Viewer roles, see:
https://cloud.google.com/storage/docs/access-control/iam-roles


~~~~~~~~~~~~~~~~~

8. What Compute service is most tightly integrated with Stackdriver Trace, Debugger, and Error Reporting?
A. App Engine Standard
B. Compute Engine
C. App Engine Flexible
D. Kubernetes Engine

Ans: A



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
9.How many days do you have to recover logs past their retention period?
A. None, deleted logs past retention date are not recoverable
B. 5
C. 7
D. 30

Ans A
~~~~~~~~~~~~~~~~~~~~~~~


10.What is the preferred method of custom organization of resources in Stackdriver Monitoring?
A. Units
B. Groups
C. Instances
D. Events

Ans B
~~~~~~~~~~~~~~~~~

11. GCP Stackdriver can be used for GCP as well as AWS services.

1. True
2. False

Ans 1
~~~~~~~~~~
12. Stackdriver Monitoring can be used for alerting. What steps are involved in alerting? Choose all that apply.
1. Monitoring conditions
2. Notifications
3. Report incident
4. Aggregation



~~~~~~~~~~~~~~~~~~~~~~
13. What is the difference between custom metrics and built-in metrics?

Built-in metrics are collected automatically from GCP (and supported AWS) services, while custom metrics are metrics you define yourself and write to Stackdriver through the Monitoring API.




14. Stackdriver Monitoring can be used for custom metrics.
1. True
2. False


Ans 1.
~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~
16. The Stackdriver basic tier includes which of the following?

 1. GCP services
 2. GCP and AWS services
 3. Audit logs for 30 days
 4. Other logs for 7 days

 ~~~~~~~~~~~~~~~~~~
17. Please refer to the Dress4win case study to answer the following question. The company has custom monitoring/logging services, infrastructure, and an incident management mechanism. IT Operations is concerned about whether these needs will be met on Google Cloud Platform. What services can they use on Google Cloud Platform?

 1. Use AWS services for monitoring and logging
 2. Stackdriver Logging can be used with third-party tools for alerting
 3. Install existing tools for monitoring and logging
 4. Stackdriver Logging and Monitoring can be used for all GCP services
~~~~~~~~~~~~~~~~~~~~
18. You can do the following things using Stackdriver Logging, except (one answer):

1. Custom monitoring metrics
2. Log search
3. Log alerts
4. Custom logs
~~~~~~~~~~~~~~
19. You can monitor an HTTP load balancer using Stackdriver Monitoring.

1. True
2. False

Ans: 1
~~~~~~~~~~~~~~~~

20. The default log retention period for Stackdriver Logging is (select one):

1. 30 days
2. 1 year
3. 7 days
4. 90 days


~~~~~~~~~~~~~~~~~
21. Please refer to the JencoMart case study to answer the following question. JencoMart wants to monitor infrastructure using Stackdriver Monitoring, but some of their services are on Amazon AWS, which has CloudWatch. What is the best solution for monitoring everything with a single tool? (select one)

1. Use a custom solution; nothing from GCP or AWS can monitor both platforms
2. Stackdriver Monitoring and Logging support both AWS and GCP
3. AWS CloudWatch can monitor resources on AWS, GCP, and Azure
4. Use BigQuery to integrate AWS CloudWatch with Stackdriver
~~~~~~~~~~~~~~~~~~
22. Which is not a policy used by the autoscaler for scaling? (select one)

1. CPU utilization
2. Load-balancing serving capacity
3. Stackdriver Monitoring metrics
4. Memory utilization
5. Cloud Pub/Sub

Ans: 4

~~~~~~~~~~~~~
23. True or False: You cannot integrate Stackdriver Monitoring with Cloud SQL.
1. True
2. False

Ans:- 1
~~~~~~~~~~~~~

24. Which of the following can be used to stream logs from Stackdriver Logging?

1. Cloud SQL
2. Cloud pub/sub
3. Cloud Storage
4. Cloud BigQuery

Ans : 2

~~~~~~~~~~~~~~~~
25. Which of the following services cannot be used as a sink out of the box for Stackdriver Logging?

1. Cloud SQL
2. Cloud Pub/Sub
3. Cloud Storage
4. Cloud BigQuery

Ans: 1 (Cloud Pub/Sub, Cloud Storage, and BigQuery are the supported export sinks; Cloud SQL is not)


~~~~~~~~~~~~~~~~~~~~~~
26. Which of the following services can use Stackdriver Monitoring, Logging, Error Reporting, Trace, and Debugger?
1. Cloud Container Engine
2. Cloud App Engine
3. Cloud Functions
4. Cloud virtual machines
 ~~~~~~~~~~~~~~~~~
27. True/False: Stackdriver Trace is a free service, but by default it is only used for App Engine.
1. True
2. False

Ans: 2 (False)


~~~~~~~~~~~~~~
28. True/False: Stackdriver Debugger can be used for Cloud Storage.

1. True
2. False

Ans: 2 (False)

Explanation: https://cloud.google.com/debugger/docs/setup/
~~~~~~~~~~~~~~
29. Which of the following products can tell you why an App Engine application is taking so long to handle a request?

1. SD Logging
2. SD Monitoring
3. SD trace
4. SD Error Reporting
5. SD Debugger

Ans: 3

~~~~~~~~~~~~~~~~
30. Stackdriver Trace can be used with which of the following applications?

1. App Engine
2. HTTPS load balancer
3. Cloud CDN
4. Cloud VPN
5. Applications using the Stackdriver SDK

Ans: 1, 2 & 3
https://cloud.google.com/trace/docs/overview

An Overview of Google App Engine
1. Google App Engine is __.

A. A Software as a Service platform that allows you to install applications on the fly

B. The fastest way to get up and running on the Google Cloud, falling into the Platform as a Service (PaaS) category. It offers a global infrastructure that will scale load as well as scale up and down on demand as needed.

C. An Infrastructure as a Service (IaaS) that allows us to deploy and manage instances.

D. A LAMP server that allows you to run your own applications locally.

Ans: B

2. Google App Engine allows you to choose the geographic region where your application is deployed, based on the regions GAE currently has available. Some of the regions currently available are:

A. Sydney
B. Montreal
C. South Carolina
D. Mumbai
E. Tokyo

Ans: all of the listed regions are available; see https://cloud.google.com/appengine/docs/locations


3. All services in Google App Engine have no setup fees, except for which service?

A. NoSQL data storage
B. Load Balancers
C. Cloud Storage
D. None of the Above - because there is no setup charge for using Google App Engine services.

Ans: D

4. The two types of environments into which you can deploy apps within Google App Engine are?

A. Flexible Environment
B. Enhanced Environment
C. Standard Environment
D. Quick Environments

Ans: A and C

5. Not all of the supported languages are available in both the Standard and Flexible environments. Which of the following languages are only supported in the Flexible environment?

A. Go
B. Any Linux-compatible Ruby package
C. PHP
D. .NET Core
E. Any Linux-compatible Node.js package

Ans: B, D, and E
6. Google App Engine is able to closely integrate with the Cloud AI service because of what method?

A. 3rd-party tools
B. By creating a virtual machine that acts as a man-in-the-middle between the two services.
C. Google Cloud Storage Service
D. API hooks

Ans: D

7. Google App Engine allows you to integrate with these other services.

A. Cloud Storage
B. Cloud Datastore
C. BigQuery
D. Compute Engine

Ans: all except option D (Compute Engine); Cloud Storage, Datastore, and BigQuery all integrate with Google App Engine.

8. Google App Engine is a fully managed Platform as a Service (PaaS). Which of the following are among its benefits? (Check all that apply.)

A. Memory is automatically allocated.
B. Google App Engine allows you to manage your own virtual machine and operating system.
C. Instances automatically scale up and down.
D. You can focus on the code.

Ans: A, C, and D
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

9.What are some of the best possible use cases for Google App Engine?

A. Mobile Apps
B. Websites
C. Game Development
D. Line of Business Apps
E. Machine Learning

Ans: all except Machine Learning; Google App Engine use cases include mobile apps, websites, game development, and line-of-business apps.
~~~~~~~~~~~~~~~
10.You have deployed your application to the Tokyo region. You now want to change the region from Tokyo to a US-based region. Which of the following regions can you change to?

A. Any US-based Region.

B. Sydney

C. Mumbai

D. You cannot change regions once you deploy to any region.

Ans: D. Once you deploy an App Engine application to a region, you cannot change that region.

Thanks for reading.

Professional Google Cloud Architect Questions


1.Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?


A. Grant the security team access to the logs in each Project.
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery.
C. Configure Stackdriver Monitoring for all Projects with the default retention policies.
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.

Ans: D. (Why not B? For five-year retention for possible legal proceedings, exporting to Cloud Storage provides low-cost, long-term archival; BigQuery export is geared toward active analysis rather than long-term retention.)


2.You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do?
A. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.
B. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url
C. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.
D. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.



Ans: D.



3.The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis.
The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A. * Append metadata to file body.
* Compress individual files.
* Name files with a random prefix pattern.
* Save files to one bucket
B. * Batch every 10,000 events with a single manifest file for metadata.
* Compress event files and manifest file into a single archive file.
* Name files using serverName-EventSequence.
* Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. * Compress individual files.
* Name files with serverName-EventSequence.
* Save files to one bucket
* Set custom metadata headers for each object after saving.
D. * Append metadata to file body.
* Compress individual files.
* Name files with serverName-Timestamp.
* Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket

Ans: A.

https://cloud.google.com/storage/docs/request-rate


Avoid using sequential filenames such as timestamp-based filenames if you are uploading many files in parallel. Because files with sequential names are stored consecutively, they are likely to hit the same backend server, meaning that throughput will be constrained. In order to achieve optimal throughput, you can add the hash of the sequence number as part of the filename to make it non-sequential.
https://cloud.google.com/storage/docs/best-practices



4.Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once the additional user load is introduced. What should you do?
A. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources in both zones.
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones.
C. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones.
D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing users usage of the app, and deploy enough resources to handle 200% of expected load.



Ans: C.


5.You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.

Ans: C.



6.To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take? Choose 2 answers

A. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM.
B. Use the --no-auto-delete flag on all persistent disks and stop the VM.
C. Apply VM CPU utilization label and include it in the BigQuery billing export.
D. Use Google BigQuery billing export and labels to associate cost to groups.
E. Use the -auto-delete flag on all persistent disks and terminate the VM.
F. Store all state into local SSD, snapshot the persistent disks, and terminate the VM.


Ans: B and D. Stopping VMs whose persistent disks carry --no-auto-delete preserves state across the start/stop events, and the BigQuery billing export with labels provides cost visibility to the finance department.

Link:-
https://cloud.google.com/sdk/gcloud/reference/compute/instances/set-disk-auto-delete
https://cloud.google.com/billing/docs/how-to/export-data-bigquery

7.During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future. What should you do?
A. Create snapshots of your database more regularly.
B. Implement routinely scheduled failovers of your databases.
C. Choose larger instances for your database.
D. Use a different database.

Ans: B.

8.Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in. Which technology should they use for this?
A. Google Cloud Dataflow
B. Google Compute Engine with Google BigQuery
C. Google Container Engine with Bigtable
D. Google Cloud Dataproc

Ans: A.

9.A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
A. Meet with the cloud operations team and the engineer to discuss load balancer options.
B. Help the engineer to convert his websocket code to use HTTP streaming.
C. Review the encryption requirements for websocket connections with the security team.
D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.


Ans: A.


Explanation:-
HTTP(S) Load Balancing has native support for the WebSocket protocol. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability.
The load balancer does not need any additional configuration to proxy WebSocket connections.


10.Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer


Ans: D.

11.The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? Choose 3 answers.
A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable


Ans: A, D, and E.

12.Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? Choose 2 answers.
A. Compute Engine with containers
B. Google Container Engine with containers
C. Google App Engine Standard Environment
D. Compute Engine with custom instance types
E. Compute Engine with managed instance groups

Ans: B and C.


13. A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin.
What should you do?
A. Search for Create VM entry in the Stackdriver alerting console
B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry
C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list

Ans: C.