1. Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?
A. Grant the security team access to the logs in each Project.
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery.
C. Configure Stackdriver Monitoring for all Projects with the default retention policies.
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.*
Ans is D.
Why not B? BigQuery export works well for active analysis, but a five-year legal-retention archive is better served by Cloud Storage (on a Coldline or Archive storage class), since it is the lower-cost destination and the default Stackdriver retention periods are far shorter than five years. A rough sketch of the export setup follows.
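For reference, a minimal sketch of the export-for-retention pattern: create a low-cost bucket and a sink that routes the relevant entries to it. The bucket name, location, storage class, and filter below are illustrative assumptions, and exporting the monitoring time series themselves may additionally involve reading them out via the Monitoring API.

# Hypothetical bucket for long-term retention (Coldline chosen as an example class).
gsutil mb -c coldline -l us-central1 gs://example-metrics-archive/

# Route matching entries to the bucket for archival.
gcloud logging sinks create metrics-archive-sink \
    storage.googleapis.com/example-metrics-archive \
    --log-filter='resource.type="gce_instance"'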
2. You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do?
A. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.
B. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url
C. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.
D. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.*
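For reference, a minimal sketch of option D at instance-creation time. The instance name, zone, and script path are placeholders, shutdown.sh is assumed to contain the application's graceful-shutdown logic, and the same metadata key can be set from the CLI as well as the Console.

gcloud compute instances create example-preemptible-vm \
    --zone=us-central1-a \
    --preemptible \
    --metadata-from-file shutdown-script=shutdown.sh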
3. The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis.
The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A. * Append metadata to file body.*
* Compress individual files.
* Name files with a random prefix pattern.
* Save files to one bucket
B. * Batch every 10,000 events with a single manifest file for metadata.
* Compress event files and manifest file into a single archive file.
* Name files using serverName-EventSequence.
* Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. * Compress individual files.
* Name files with serverName-EventSequence.
* Save files to one bucket
* Set custom metadata headers for each object after saving.
D. * Append metadata to file body.
* Compress individual files.
* Name files with serverName-Timestamp.
* Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket
https://cloud.google.com/storage/docs/request-rate
Avoid using sequential filenames such as timestamp-based filenames if you are uploading many files in parallel. Because files with sequential names are stored consecutively, they are likely to hit the same backend server, meaning that throughput will be constrained. In order to achieve optimal throughput, you can add the hash of the sequence number as part of the filename to make it non-sequential.
https://cloud.google.com/storage/docs/best-practices
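A small sketch of the non-sequential naming advice quoted above; the bucket, server name, and hashing choice are illustrative.

# Prefix the object name with a short hash of the sequence number so parallel
# uploads spread across backend servers instead of hitting consecutive key ranges.
SEQ=1234567
PREFIX=$(printf '%s' "$SEQ" | md5sum | cut -c1-6)
gsutil cp event.bin "gs://example-events/${PREFIX}-server01-${SEQ}"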
4. Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers has access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once the additional user load is introduced. What should you do?
A. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones.
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones.
C. Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones.*
D. Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive the estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of expected load.
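The zone-outage step in option C could be simulated during the load replay with something like the sketch below. The zone and filter are placeholders, and this is destructive by design, so it belongs in a test project only.

# Terminate every instance in one zone to simulate a zonal failure.
gcloud compute instances list --filter="zone:us-central1-b" --format="value(name)" \
    | xargs -r gcloud compute instances delete --zone=us-central1-b --quiet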
5. You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?
A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.*
D. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
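For reference, a sketch of the kind of rule option C describes. The network, port, and target tag are placeholders, while 130.211.0.0/22 and 35.191.0.0/16 are the documented health-check source ranges.

gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend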
6. To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take? Choose 2 answers
A. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM.
B. Use the --no-auto-delete flag on all persistent disks and stop the VM.*
C. Apply VM CPU utilization label and include it in the BigQuery billing export.*
D. Use Google BigQuery billing export and labels to associate cost to groups.
E. Use the --auto-delete flag on all persistent disks and terminate the VM.
F. Store all state into local SSD, snapshot the persistent disks, and terminate the VM.
Links:
https://cloud.google.com/sdk/gcloud/reference/compute/instances/set-disk-auto-delete
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
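A short sketch combining the two ideas referenced by the links above: keep the persistent disk when its VM goes away, and label instances so the BigQuery billing export can attribute cost. The instance, disk, and label names are placeholders.

# Keep the disk even if the instance is deleted.
gcloud compute instances set-disk-auto-delete dev-vm-1 \
    --zone=us-central1-a --disk=dev-vm-1 --no-auto-delete

# Label the instance so its cost can be grouped in the billing export.
gcloud compute instances add-labels dev-vm-1 \
    --zone=us-central1-a --labels=team=payments,env=dev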
7. During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future. What should you do?
A. Create snapshots of your database more regularly.
B. Implement routinely scheduled failovers of your databases.*
C. Choose larger instances for your database.
D. Use a different database.
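If the database were, for example, a Cloud SQL instance with a failover replica, a routinely scheduled drill could be as simple as the sketch below (the instance name is a placeholder).

# Trigger a manual failover to confirm the replica is actually promoted under load.
gcloud sql instances failover example-primary-instance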
8. Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in. Which technology should they use for this?
A. Google Cloud Dataflow*
B. Google Compute Engine with Google BigQuery
C. Google Container Engine with Bigtable
D. Google Cloud Dataproc
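As one illustration of Dataflow handling a job without custom infrastructure, a prebuilt batch template can be launched straight from the CLI; the job name, region, and output bucket below are placeholders.

gcloud dataflow jobs run example-wordcount \
    --gcs-location=gs://dataflow-templates/latest/Word_Count \
    --region=us-central1 \
    --parameters=inputFile=gs://dataflow-samples/shakespeare/kinglear.txt,output=gs://example-bucket/wordcount/out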
9. A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
A. Meet with the cloud operations team and the engineer to discuss load balancer options.*
B. Help the engineer to convert his websocket code to use HTTP streaming.
C. Review the encryption requirements for websocket connections with the security team.
D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
HTTP(S) Load Balancing has native support for the WebSocket protocol. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability.
The load balancer does not need any additional configuration to proxy WebSocket connections.
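One load-balancer option worth discussing: because the HTTP sessions are not shared between web servers, enabling session affinity keeps each client (and its WebSocket connection) pinned to the same backend. A sketch, with a placeholder backend service name:

gcloud compute backend-services update example-web-backend \
    --global \
    --session-affinity=GENERATED_COOKIE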
10. Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer*
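A sketch of option D on the existing load balancer: path-based routing sends /v1 and /v2 to separate backend services behind the same IP, certificate, and DNS records. All names and the host below are placeholders.

gcloud compute url-maps add-path-matcher example-api-map \
    --path-matcher-name=api-versions \
    --default-service=api-v1-backend \
    --new-hosts=api.example.com \
    --path-rules=/v1/*=api-v1-backend,/v2/*=api-v2-backend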
11. The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? Choose 3 answers.
A. Port the application code to run on Google App Engine*
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure*
E. Deploy a continuous integration tool with automated testing in a staging environment*
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
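As one illustration of option D, provisioning could be driven from a declarative template with Deployment Manager; the deployment name and config file are placeholders, and the config itself is assumed to exist.

# config.yaml would declare the VMs, networks, and other resources for the environment.
gcloud deployment-manager deployments create j2ee-staging --config=config.yaml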
12. Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? Choose 2 answers.
A. Compute Engine with containers
B. Google Container Engine with containers*
C. Google App Engine Standard Environment*
D. Compute Engine with custom instance types
E. Compute Engine with managed instance groups
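For reference, a sketch of a Container Engine (GKE) cluster with node autoscaling enabled; the cluster name, zone, and node bounds are placeholders.

gcloud container clusters create example-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --enable-autoscaling --min-nodes=1 --max-nodes=10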
13. A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin.
What should you do?
A. Search for Create VM entry in the Stackdriver alerting console
B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry
C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry*
D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list
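A sketch of the kind of Logging query option C points at: filter the audit entries for insert operations on GCE networks to see who created the network and when. The filter details are illustrative.

gcloud logging read 'resource.type="gce_network" AND protoPayload.methodName:"insert"' \
    --limit=10 \
    --format=json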