
Free Professional Cloud DevOps Engineer Questions for Google Professional Cloud DevOps Engineer Exam as PDF & Practice Test Software

Page:    1 / 14   
Total 166 questions

Question 1

Your company runs applications in Google Kubernetes Engine (GKE). Several applications rely on ephemeral volumes. You noticed some applications were unstable due to the DiskPressure node condition on the worker nodes. You need to identify which Pods are causing the issue, but you do not have execute access to workloads and nodes. What should you do?



Question 2

You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices. How should you configure this pipeline with Binary Authorization?



Answer : B

The correct answer is B, Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity.

According to the Google Cloud documentation, Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on Google Kubernetes Engine (GKE) or Cloud Run. Binary Authorization uses attestations to certify that a specific image has completed a previous stage in the CI/CD pipeline, such as passing a load test. Attestations are signed by private keys that are associated with attestors, which are entities that verify the attestations. To follow Google-recommended practices, you should store your private keys in Cloud Key Management Service (Cloud KMS), which is a secure and scalable service for managing cryptographic keys. You should also use Workload Identity, a feature that allows Kubernetes service accounts to act as Google service accounts, to authenticate to Cloud KMS and sign attestations without having to manage or expose service account keys.

The other options are incorrect because they do not follow Google-recommended practices. Options A and D require human intervention to sign the attestations, which is neither scalable nor automated. Option C exposes the service account JSON key as a Kubernetes Secret, which is less secure than using Workload Identity.
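The attestation flow described above can be illustrated with a minimal, self-contained sketch. This is conceptual only: real Binary Authorization uses asymmetric PKIX keys held in Cloud KMS and the `gcloud` Binary Authorization tooling, whereas this sketch stands in an HMAC over the image digest for the signing step.

```python
import hashlib
import hmac

# Conceptual sketch only: real Binary Authorization signs with asymmetric
# keys in Cloud KMS; a symmetric HMAC stands in for the signing step here.
SIGNING_KEY = b"kms-key-material"  # hypothetical key material

def attest(image_digest: str, key: bytes = SIGNING_KEY) -> str:
    """Created by the load-test stage once an image passes the test."""
    return hmac.new(key, image_digest.encode(), hashlib.sha256).hexdigest()

def admit(image_digest: str, attestation: str, key: bytes = SIGNING_KEY) -> bool:
    """Deploy-time check: admit only images with a valid attestation."""
    expected = hmac.new(key, image_digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation)

digest = "sha256:" + hashlib.sha256(b"my-image").hexdigest()
signature = attest(digest)          # signed after the load test passes
assert admit(digest, signature)     # production admits the attested build
assert not admit(digest, "forged")  # unattested builds are rejected
```

The key property this models is that only the stage holding the signing key can produce a valid attestation, so the production cluster's admission check transitively proves the load test was passed.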


References: Binary Authorization overview; Attestations overview; Creating an attestor; Cloud Key Management Service documentation; Using Workload Identity with Binary Authorization.

Question 3

You recently noticed that one of your services has exceeded the error budget for the current rolling window period. Your company's product team is about to launch a new feature. You want to follow Site Reliability Engineering (SRE) practices.

What should you do?



Answer : A

The correct answer is A, Notify the team that their error budget is used up. Negotiate with the team for a launch freeze or tolerate a slightly worse user experience.

According to Site Reliability Engineering (SRE) practices, an error budget is the amount of unreliability that a service can tolerate without harming user satisfaction. An error budget is derived from the service-level objectives (SLOs), which are the measurable goals for service quality. When a service exceeds its error budget, it has violated its SLOs and may have negatively impacted users. In this case, the SRE team should notify the product team that the error budget is used up and negotiate with them for a launch freeze or a lower SLO. A launch freeze means that no new features are deployed until service reliability is restored. A lower SLO means that the product team accepts a slightly worse user experience in exchange for launching new features. Both options require a trade-off between reliability and innovation, and should be agreed upon by both teams.

The other options are incorrect because they do not follow SRE practices. Option B is incorrect because it violates the principle of error budget autonomy: each service should have its own error budget and SLOs, and should not borrow or reallocate them from other services. Option C is incorrect because it does not address the root cause of the error budget overspend, and may create unrealistic expectations for service reliability. Option D is incorrect because it does not prevent the possibility of introducing new errors or bugs with the feature launch, which may further degrade service quality and user satisfaction.
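As a concrete illustration of the arithmetic, the sketch below computes an error budget for an assumed 99.9% availability SLO over a 28-day rolling window; both numbers are hypothetical, not taken from the question.

```python
from datetime import timedelta

# Hypothetical figures: a 99.9% availability SLO over a 28-day rolling window.
slo = 0.999
window = timedelta(days=28)

# The error budget is the allowed unreliability: (1 - SLO) of the window.
error_budget = (1 - slo) * window.total_seconds()
print(round(error_budget / 60, 1))  # 40.3 minutes of allowed downtime

# If observed downtime exceeds the budget, the SRE response is to negotiate
# a launch freeze (or a lower SLO) rather than ship new features.
observed_downtime = timedelta(minutes=55).total_seconds()
print(observed_downtime > error_budget)  # True -> budget exhausted
```

Once the budget reads as exhausted, the decision in answer A follows mechanically: the reliability/innovation trade-off has already been spent for this window.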


References: Error Budgets; Service Level Objectives; Error Budget Policies; Error Budget Autonomy.

Question 4

You are monitoring a service that uses n2-standard-2 Compute Engine instances that serve large files. Users have reported that downloads are slow. Your Cloud Monitoring dashboard shows that your VMs are running at peak network throughput. You want to improve the network throughput performance. What should you do?



Answer : C

The correct answer is C, Change the machine type for your VMs to n2-standard-8.

According to the Google Cloud documentation, the network throughput of a Compute Engine VM depends on its machine type. The n2-standard-2 machine type has a maximum egress bandwidth of 4 Gbps, which can be a bottleneck when serving large files. Changing the machine type to n2-standard-8 raises the maximum egress bandwidth to 16 Gbps, which improves network throughput and reduces download times for users. On larger N2 machine types you can additionally enable per-VM Tier_1 networking performance, a feature that raises the bandwidth caps beyond the default settings.

The other options are incorrect because they do not improve the network throughput of your VMs. Option A is incorrect because Cloud NAT allows instances with only private IP addresses to access the internet, but it does not increase network bandwidth or speed. Option B is incorrect because adding additional network interfaces (NICs) or IP addresses per NIC does not increase ingress or egress bandwidth for a VM. Option D is incorrect because the Ops Agent can help you monitor and troubleshoot your VMs, but it does not affect network throughput.
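The bandwidth figures above follow a rule of thumb of roughly 2 Gbps of egress per vCPU for N2 machine types; actual caps vary by machine family and configuration, and the Compute Engine bandwidth tables are authoritative. A small sketch of the effect on download time, using a hypothetical 10 GiB file:

```python
# Rough egress-cap model from the explanation above: about 2 Gbps of egress
# per vCPU for N2 machine types (actual caps vary; check the Compute Engine
# bandwidth documentation for your machine family).
GBPS_PER_VCPU = 2

def egress_cap_gbps(vcpus: int) -> int:
    return GBPS_PER_VCPU * vcpus

def download_seconds(file_gib: float, egress_gbps: float) -> float:
    """Best-case time to push a file at the VM's egress cap."""
    bits = file_gib * 2**30 * 8        # GiB -> bits
    return bits / (egress_gbps * 1e9)  # Gbps -> bits per second

print(egress_cap_gbps(2))   # n2-standard-2 -> 4 (Gbps)
print(egress_cap_gbps(8))   # n2-standard-8 -> 16 (Gbps)
print(round(download_seconds(10, egress_cap_gbps(2)), 1))  # ~21.5 s
print(round(download_seconds(10, egress_cap_gbps(8)), 1))  # ~5.4 s
```

This is why resizing the VM, rather than adding NICs or NAT, is the lever that moves download times: the egress cap scales with vCPU count.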


References: Network bandwidth; Configure per VM Tier_1 networking performance; Cloud NAT overview; Installing the Ops Agent.

Question 5

You are analyzing Java applications in production. All applications have Cloud Profiler and Cloud Trace installed and configured by default. You want to determine which applications need performance tuning. What should you do?

Choose 2 answers



Answer : A, D

The correct answers are A and D.

Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the CPU resource allocation. A wall-clock time far greater than the CPU time indicates that the application spends most of its time waiting for the CPU rather than performing actual computation, so increasing the CPU resource allocation can improve its performance.

Examine the latency time, the wall-clock time, and the CPU time of the application. If the latency is slowly burning down the error budget, and the difference between wall-clock time and CPU time is minimal, mark the application for optimization. A minimal difference means the application is busy executing its own code rather than waiting on the CPU or on input/output operations; adding resources will not help in that case, so the code itself must be optimized to reduce latency.

Answer B is incorrect because increasing the memory resource allocation will not help if the application is CPU-bound or I/O-bound. Memory allocation affects how much data the application can store and access in memory, but it does not affect how fast the application can process that data.

Answer C is incorrect because increasing the local disk storage allocation will not help if the application is CPU-bound or I/O-bound. Disk storage affects how much data the application can store and access on disk, but it does not affect how fast the application can process that data.

Answer E is incorrect because examining the heap usage of the application will not help to determine if the application needs performance tuning. Heap usage affects how much memory the application allocates for dynamic objects, but it does not affect how fast the application can process those objects. Moreover, low heap usage does not necessarily mean that the application is inefficient or unoptimized.
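The wall-clock versus CPU time comparison behind answers A and D can be reproduced locally with Python's standard timers; Cloud Profiler reports the same distinction at scale. The 2x threshold and the toy workloads below are arbitrary choices for illustration only.

```python
import time

def profile_gap(workload) -> str:
    """Compare wall-clock time with CPU time, mirroring the reasoning above:
    a large gap means the process mostly waits (CPU starvation or I/O);
    a small gap means it is busy executing its own code."""
    wall_start, cpu_start = time.perf_counter(), time.process_time()
    workload()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    # 2x is an arbitrary illustrative threshold, not a Cloud Profiler value.
    return "mostly waiting" if wall > 2 * cpu else "busy computing"

def waiting_like():
    time.sleep(0.2)  # stands in for blocking waits; consumes no CPU time

def busy_like():
    sum(i * i for i in range(500_000))  # pure computation

print(profile_gap(waiting_like))  # mostly waiting
print(profile_gap(busy_like))     # busy computing
```

A "mostly waiting" result points toward resource allocation (answer A), while a "busy computing" result with latency still burning the error budget points toward code optimization (answer D).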

