Associate-Cloud-Engineer Practice Test Questions

113 Questions


You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version of the application. What should you do?


A. Run gcloud app restore.


B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.


C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.


D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.





C.
  On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.

Summary:
App Engine maintains all previously deployed versions of your application. To revert a buggy release, you don't need to redeploy the old code; you simply need to change which version is receiving all of the user traffic. This is done by modifying the traffic splitting configuration. In the GCP Console, you can navigate to the Versions page and immediately shift 100% of traffic to the stable, previous version, making it the active version and effectively reverting the deployment.

Correct Option:

C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
This is the fastest and standard method for rolling back a release on App Engine. All versions remain deployed.

By moving 100% of the traffic to the previous, stable version, you instantly make that version the active one serving all user requests.

The new, buggy version remains deployed but receives no traffic, allowing you to debug it without impacting users. This action is immediate and requires no new deployments.
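The same rollback can also be scripted with the gcloud CLI. A minimal sketch, assuming the App Engine service is named "default" and the previous stable version has the ID "v1" (both placeholders):

```shell
# Route 100% of traffic back to the previous, known-good version.
# "default" and "v1" are placeholders for your service and version IDs.
gcloud app services set-traffic default --splits v1=1
```

After this command, the buggy version stays deployed but receives no traffic, matching the behavior described above.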

Incorrect Option:

A. Run gcloud app restore.
There is no gcloud app restore command for this purpose. The gcloud app versions delete command can remove old versions, but a "restore" function as described does not exist in the gcloud CLI for traffic rerouting.

B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
There is no "Revert" button on the main App Engine dashboard. The specific interface for managing traffic split between versions is located under the "Versions" page, not the general application page.

D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.
This is completely incorrect and inefficient. You do not need to create a separate GCP project (a new "application") to revert a version. All versions are stored within the same project. Creating a new project and application would be a complex, time-consuming process and is not how version management and traffic splitting work in App Engine.

Reference:
Google Cloud Documentation: Splitting Traffic - https://cloud.google.com/appengine/docs/standard/python/allocating-costs#traffic_splitting

This documentation explains how to split traffic between versions. While the linked page discusses cost allocation, the mechanism is the same: you can send 100% of traffic to a specific version, which is the standard procedure for rolling back a deployment.

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy?


A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 – 90)


B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.


C. Use gsutil rewrite and set the Delete action to 275 days (365-90).


D. Use gsutil rewrite and set the Delete action to 365 days.





B.
  Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.

Summary:
Cloud Storage Object Lifecycle Management is the service designed to automatically perform actions on objects based on predefined conditions, such as their age. The age for each action is always calculated from the object's creation date. Therefore, to move an object to Coldline after 90 days and delete it after one year, you configure two independent rules: one SetStorageClass action with an age of 90 days, and one Delete action with an age of 365 days.

Correct Option:

B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.

Lifecycle Management:
This is the correct Google Cloud service for automating object transitions and deletions.

Age Calculation:
The Age condition in a lifecycle rule is always relative to the object's creation time. The actions are independent and do not chain based on previous actions.

Configuration:
A rule with SetStorageClass to COLDLINE and age of 90 will transition objects 90 days after they are created.

A separate rule with Delete and age of 365 will delete objects 365 days after they are created, regardless of their current storage class.
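As an illustration, the two rules above can be expressed in a lifecycle configuration file and applied with gsutil. The bucket name is a placeholder:

```shell
# lifecycle.json: transition objects to Coldline at 90 days, delete at 365 days
# (both ages are measured from each object's creation time)
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

# Apply the policy to the bucket ("my-videos-bucket" is a placeholder)
gsutil lifecycle set lifecycle.json gs://my-videos-bucket
```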

Incorrect Option:

A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 – 90)
This is incorrect because it misunderstands how the Age condition works. The delete action's age is not calculated from the date of the storage class change; it is calculated from the object's original creation date. Setting the delete action to 275 days would cause objects to be deleted 275 days after creation, which is long before the required one-year (365-day) retention period.

C. Use gsutil rewrite and set the Delete action to 275 days (365-90).
The gsutil rewrite command is used to change the storage class of existing objects immediately or to re-encrypt them. It is not used for scheduling future actions like deletion. It cannot be used to set up a lifecycle policy.

D. Use gsutil rewrite and set the Delete action to 365 days.
As with option C, the gsutil rewrite command is the wrong tool. It does not have a Delete action for scheduling. Lifecycle policies must be configured through the Console, gsutil lifecycle command, or the JSON API, not the rewrite command.

Reference:
Google Cloud Documentation: Object Lifecycle Management - https://cloud.google.com/storage/docs/lifecycle

This official documentation explains the concept and states: "Lifecycle conditions are based on the object's age." It provides examples of using the SetStorageClass and Delete actions with age-based conditions, confirming that the age for deletion is measured from the object's creation date.

Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do?


A. Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.


B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone.


C. In the Google Cloud Platform Console, go to the Resource Manager and move all projects to the root Organization.


D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.





D.
  In the Google Cloud Platform Console, create a new billing account and set up a payment method.

Summary:
The core issue is decentralized billing, not resource organization. The solution is to create a centralized, company-managed billing account within the Google Cloud Platform Console. Once this new billing account is created with a corporate payment method (like a company credit card or invoicing), you can then link each existing project to this new billing account. This consolidates all charges under a single payment method and provides the company with centralized cost management and control.

Correct Option:

D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.
This is the foundational and correct first step. You must create a new billing account that represents the company's central payment entity.

During creation, you will set up the official corporate payment method (e.g., credit card, bank account) that will be used for all charges.

After this account is created, you can then proceed to change the billing account associated with each existing project to this new, central one, effectively consolidating all payments.
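The re-linking step can be done per project from the CLI. A sketch, where the project ID and billing account ID are placeholders:

```shell
# Link an existing employee project to the new central billing account.
# "employee-project-1" and the billing account ID are placeholders.
gcloud beta billing projects link employee-project-1 \
    --billing-account=0X0X0X-0X0X0X-0X0X0X
```

Repeating this for each project completes the consolidation.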

Incorrect Option:

A. Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.
This is an incorrect and insecure process. Billing accounts are created and managed directly by users with the appropriate permissions within the GCP Console. You should never send sensitive bank details via email. The correct process is to create the account yourself in the console and then link the payment method securely through the provided interface.

B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone.
This is also an incorrect and insecure procedure. Google Support engineers do not create billing accounts for you over the phone, nor should you share credit card information in this manner. The entire process is designed to be self-service and secure within the GCP Console.

C. In the Google Platform Console, go to the Resource Manager and move all projects to the root Organization.
This action deals with resource hierarchy and access control (IAM), not billing. Moving projects within an organization node changes their inheritance for policies and permissions but does not change what payment method is used to pay for them. Billing is a separate association that must be changed on the Billing section for each project.

Reference:
Google Cloud Documentation: Creating, modifying, or closing your Cloud Billing account - https://cloud.google.com/billing/docs/how-to/manage-billing-account

This official documentation outlines the process for creating a new billing account, which is the necessary first step to centralize payments. It also covers how to change a project's billing association.

You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its maximum limit of five VMs or until CPU utilization of instances lowers to 80%. The initial delay for HTTP health checks against the instances is set to 30 seconds. The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling. What should you do?


A. Set the maximum number of instances to 1.


B. Decrease the maximum number of instances to 3.


C. Use a TCP health check instead of an HTTP health check.


D. Increase the initial delay of the HTTP health check to 200 seconds.





D.
  Increase the initial delay of the HTTP health check to 200 seconds.

Summary:
The problem is that the autoscaler adds too many instances because it makes scaling decisions based on CPU utilization before new VMs are ready to serve traffic. Since VMs take 3 minutes to become available, the autoscaler sees that CPU remains high (because the new VMs aren't helping yet) and continues to add more, causing over-provisioning. The solution is to align the health check's "initial delay" with the VM's true startup time. This prevents the autoscaler from considering a new VM's unhealthy period in its metrics, allowing time for the new VMs to become operational and reduce load before further scaling decisions are made.

Correct Option:

D. Increase the initial delay of the HTTP health check to 200 seconds.
The initial delay is the time the managed instance group (MIG) waits after a VM is created before it starts applying the health check. During this period, the VM is given time to finish booting without being marked unhealthy.

Crucially, the autoscaler often ignores VMs that are not yet healthy when calculating the average CPU utilization for the group.

By increasing the initial delay to ~200 seconds (slightly more than the 3-minute boot time), you ensure that new VMs are only considered by the autoscaler after they are fully booted and ready to accept user traffic. This gives them a chance to actually reduce the load, providing an accurate signal to the autoscaler and preventing premature creation of additional instances.
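A sketch of configuring this with the gcloud CLI, assuming a MIG named "web-mig" in zone us-central1-a and a health check named "web-check" (all names are placeholders):

```shell
# Create an HTTP health check for the application port
gcloud compute health-checks create http web-check --port 80

# Attach the health check to the MIG with a 200-second initial delay,
# slightly longer than the ~3-minute VM startup time
gcloud compute instance-groups managed update web-mig \
    --zone us-central1-a \
    --health-check web-check \
    --initial-delay 200
```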

Incorrect Option:

A. Set the maximum number of instances to 1.
This directly contradicts the requirement to autoscale. A maximum of 1 instance prevents scaling out entirely, which would cause performance issues or outages during traffic spikes.

B. Decrease the maximum number of instances to 3.
While this would physically cap the over-provisioning, it is an artificial limit that doesn't solve the root cause. If genuine traffic requires 4 or 5 instances, this cap would prevent the application from scaling to meet that need, hurting performance. The goal is to make scaling smarter, not just more restricted.

C. Use a TCP health check instead of an HTTP health check.
A TCP health check only verifies if a port is open, which will happen early in the boot process, long before the application is fully initialized and ready for users. This would make the problem worse, as VMs would be marked "healthy" and included in autoscaling calculations even sooner, while they are still unable to reduce the load.

Reference:
Google Cloud Documentation: Health check initial delay - https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#initial_delay

This documentation explains the purpose of the initial delay: "When an instance in a managed instance group is created... the autohealing health checks are suspended for the initial delay period." While this text is for autohealing, the same initial delay setting also affects how the autoscaler perceives new instances. Configuring this delay correctly is critical for accurate autoscaling behavior.

You need to update a deployment in Deployment Manager without any resource downtime in the deployment. Which command should you use?


A. gcloud deployment-manager deployments create --config


B. gcloud deployment-manager deployments update --config


C. gcloud deployment-manager resources create --config


D. gcloud deployment-manager resources update --config





B.
  gcloud deployment-manager deployments update --config



Summary:
To update an existing Deployment Manager deployment without causing downtime, you must use the update command. This command instructs Deployment Manager to compute the differences between the current deployment state and the new configuration provided. It then creates, updates, or deletes resources in a specific order to transition to the new state while minimizing disruption, often leveraging update policies you can define in your configuration (like rolling updates for instance groups).

Correct Option:

B. gcloud deployment-manager deployments update --config
The gcloud deployment-manager deployments update command is specifically designed for this purpose. It is the primary method for making changes to an existing deployment.

It performs a declarative update: you provide the desired end-state configuration, and Deployment Manager determines the set of actions needed to achieve that state from the current one.

When configured correctly with update policies (e.g., for a managed instance group), it can perform rolling updates or create new resources before deleting old ones, ensuring the application remains available throughout the process.
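As an illustration, the deployment and configuration file names below are placeholders:

```shell
# Apply the new desired-state configuration to the existing deployment.
# Deployment Manager diffs it against the current state and applies only
# the changes needed.
gcloud deployment-manager deployments update my-deployment \
    --config updated-config.yaml
```

Adding `--preview` first lets you inspect the planned changes before committing them.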

Incorrect Option:

A. gcloud deployment-manager deployments create --config
This command is used to create a new, separate deployment. If you run this with the name of an existing deployment, it will fail because the deployment name must be unique. It cannot be used to update an existing deployment.

C. gcloud deployment-manager resources create --config
This command is invalid. The resources subgroup in the gcloud deployment-manager command set is used for operations like list or describe to view existing resources, not for creating or updating deployments. Deployments are managed as a whole unit, not as individual resources.

D. gcloud deployment-manager resources update --config
This command is also invalid. There is no resources update command. Updates are always performed at the deployment level using deployments update, which allows Deployment Manager to manage the complex interdependencies between resources safely.

Reference:
Google Cloud Documentation: Updating a deployment - https://cloud.google.com/deployment-manager/docs/updating-deployments

This official documentation explicitly states: "To update an existing deployment, use the gcloud deployment-manager deployments update command." It details how the update process works and how to manage it to avoid downtime.

You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do?


A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.


B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.


C. Configure a single Stackdriver account, and link all projects to the same account.


D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.





C.
  Configure a single Stackdriver account, and link all projects to the same account.

Summary:
Stackdriver Monitoring (now Cloud Monitoring) is designed to monitor multiple projects from a single pane of glass. This is achieved by creating a single Stackdriver Workspace and then "linking" or "adding" other Google Cloud projects to it. A Workspace is tied to a single host project but can collect metrics, metadata, and alerting policies from all linked projects, consolidating them into unified dashboards and alerting channels without the need for complex networking or cross-project service account permissions.

Correct Option:

C. Configure a single Stackdriver account, and link all projects to the same account.
The correct terminology is to create a single Stackdriver Workspace in one host project. Once created, you can link additional Google Cloud projects to this workspace. This process authorizes the workspace to pull monitoring data from all the linked projects.

Once linked, you can create dashboards that display metrics from any of the projects, set up alerts based on conditions across projects, and view all your resources in a single, consolidated interface. This is the standard and supported method for multi-project monitoring.

Incorrect Option:

A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
Shared VPC is a networking construct for sharing VPC networks across projects. It is unrelated to monitoring and does not enable Stackdriver to collect metrics from other projects. Stackdriver links projects at the IAM and API level, not through network connectivity.

B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
This is an overly complex and non-standard approach. You do not need to create a separate Stackdriver account (workspace) for each project. Creating and managing multiple workspaces defeats the purpose of consolidation. Furthermore, cross-project service account permissions are not the mechanism for linking projects in Monitoring.

D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.
This misunderstands the function of Groups in Stackdriver. Groups are used to dynamically collect resources (like VMs, databases) based on criteria like labels, zones, or resource names within the projects that are already linked to the workspace. You cannot add a project itself to a resource group; you must first link the project to the workspace before its resources can be included in any group or dashboard.

Reference:
Google Cloud Documentation: Workspaces overview - https://cloud.google.com/monitoring/workspaces/docs

This official documentation explains the concept of a Workspace and states: "A Workspace is the central place where you view monitoring information... A Workspace can monitor one or more Google Cloud projects." It details the process of creating a workspace and adding projects to it.

You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices. What should you do?


A. Configure an HTTP(S) load balancer.


B. Configure an internal TCP load balancer.


C. Configure an external SSL proxy load balancer.


D. Configure an external TCP proxy load balancer





A.
  Configure an HTTP(S) load balancer.

Summary:
For a public web application served over HTTPS where you need to terminate the client SSL session (SSL Offloading), the Google-recommended practice is to use the global HTTP(S) Load Balancer. This load balancer is designed for this exact purpose: it terminates TLS (SSL) at the load balancer frontend and then forwards the traffic to your backend instance group over HTTP or HTTPS, providing a centralized point for SSL certificate management and advanced routing features.

Correct Option:

A. Configure an HTTP(S) load balancer.
The HTTP(S) Load Balancer is a global, layer 7 load balancer that is the primary choice for public web traffic.

It is designed to terminate SSL/TLS connections from clients. You install your SSL certificate on the load balancer, which handles the decryption.

This offloads the CPU-intensive SSL processing from your VMs to the load balancer, improving backend performance. It also simplifies certificate management and enables content-based routing, making it the recommended solution for this scenario.
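A partial sketch of the SSL-termination pieces, assuming an existing URL map named "web-map" and certificate files on disk (all names are placeholders; the backend service and forwarding rule are omitted):

```shell
# Upload the certificate the load balancer will use to terminate SSL
gcloud compute ssl-certificates create web-cert \
    --certificate=cert.pem --private-key=key.pem --global

# Front the existing URL map with an HTTPS proxy using that certificate
gcloud compute target-https-proxies create web-https-proxy \
    --url-map=web-map --ssl-certificates=web-cert
```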

Incorrect Option:

B. Configure an internal TCP load balancer.
This load balancer is for internal, private traffic within a VPC network (as per RFC 1918). It is not designed for or capable of serving public web traffic from the internet, which is a core requirement in this scenario.

C. Configure an external SSL proxy load balancer.
The SSL Proxy Load Balancer is a layer 4 load balancer that also terminates SSL. However, it is designed for non-HTTP SSL traffic, such as SSL for protocols like IMAP, SMTP, or custom TCP-based protocols that use SSL. For standard HTTPS web traffic, the HTTP(S) Load Balancer is the more feature-rich and recommended option.

D. Configure an external TCP proxy load balancer.
The TCP Proxy Load Balancer is also a layer 4 load balancer, but it is for non-SSL TCP traffic. It does not have the capability to terminate SSL/TLS sessions. If you used this, your backend VMs would have to handle the SSL termination themselves, which violates the requirement to have the load balancer terminate the session and is not the recommended practice.

Reference:
Google Cloud Documentation: HTTP(S) Load Balancing overview - https://cloud.google.com/load-balancing/docs/https

This official documentation describes the HTTP(S) Load Balancer and its capabilities, including SSL termination: "The load balancer terminates SSL sessions from your users... You can configure the load balancer to send traffic to your backends as HTTP or HTTPS." It is presented as the solution for public web serving.

Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance. What should you do?


A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.


B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.


C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this team.


D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.





C.
  Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this team.

Summary:
The requirements are: 1) Use existing Google accounts, 2) Grant only administrative (SSH) access, 3) Ensure operational efficiency (no manual key distribution), and 4) Have auditable access logs. The solution that satisfies all these is to leverage OS Login with IAM. OS Login ties SSH access directly to a user's Google identity and IAM permissions. By granting the compute.osAdminLogin role to a Google Group containing the team, you centrally manage access, and because users log in with their own identities, their access is automatically logged in Cloud Audit Logs.

Correct Option:

C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this team.

Efficiency & Central Management:
Adding users to a Google Group and granting the IAM role (roles/compute.osAdminLogin) to the group is a one-time, centralized action. There is no need to manually deploy keys to any instances.

Auditability:
Because users SSH into instances using their own Google credentials (their public key is tied to their Google account), Cloud Audit Logs record the specific user's identity (email address) for every SSH connection attempt, fulfilling the security team's requirement.

Security:
This follows the principle of least privilege by granting only the necessary OS-level admin role. The private keys remain solely with each user.
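A sketch of the setup with the gcloud CLI, where the project ID and group address are placeholders:

```shell
# Enable OS Login for all instances in the project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Grant SSH access with admin (sudo) rights to the team's Google group.
# "my-project" and the group address are placeholders.
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"
```

Each team member then adds their own public key to their Google account (for example with `gcloud compute os-login ssh-keys add`), so every connection is attributable to an individual identity.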

Incorrect Option:

A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
This is highly insecure and non-auditable. Sharing a single private key among the entire team means you cannot determine who accessed an instance. It also creates a massive security risk; if one team member's device is compromised, the key must be rotated for everyone.

B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.
This is operationally inefficient. It requires manually collecting keys and maintaining a configuration management system to deploy and update authorized_keys files on every instance. It also provides weak auditing, as it's difficult to reliably map an SSH key back to a specific individual after the fact.

D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.
This suffers from the same critical flaws as option A. It uses a shared private key, making it impossible to audit individual user access and creating a significant security vulnerability. Project-wide keys are a legacy feature and are discouraged in favor of OS Login.

Reference:
Google Cloud Documentation: OS Login - https://cloud.google.com/compute/docs/instances/managing-instance-access

This official documentation explains that OS Login "lets you use IAM to manage SSH access to your instances." It directly addresses the key benefits: integrating with Google identities for authentication and providing centralized access management through IAM, which inherently provides audit trails.

You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended
project. You want to find out why this happened and where the application deployed. What should you do?


A.

Check the app.yaml file for your application and check project settings.


B.

Check the web-application.xml file for your application and check project settings.


C.

Go to Deployment Manager and review settings for deployment of applications.


D.

Go to the Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.





D.
  

Go to the Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.

The app.yaml file does not contain project settings; gcloud app deploy targets the project set in the active gcloud configuration (or passed with the --project flag), so reviewing that configuration shows where and why the application deployed.
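A quick check, with a placeholder project ID for the explicit deployment:

```shell
# Show the active configuration, including the project gcloud deploys to
gcloud config list

# Deploy explicitly to the intended project ("intended-project" is a placeholder)
gcloud app deploy --project=intended-project
```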



You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and
files in Cloud Storage. You want to follow Google-recommended practices. Which IAM roles should you
grant your colleagues?


A.

Project Editor


B.

Storage Admin


C.

Storage Object Admin


D.

Storage Object Creator





B.
  

Storage Admin
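Storage Admin (roles/storage.admin) grants full control of buckets and objects without the broad project-wide access of Project Editor. A sketch of granting it, with placeholder project and user IDs:

```shell
# Delegate bucket and object management for the project.
# "my-project" and the user address are placeholders.
gcloud projects add-iam-policy-binding my-project \
    --member="user:colleague@example.com" \
    --role="roles/storage.admin"
```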



You have one project called proj-sa where you manage all your service accounts. You want to be able to use a
service account from this project to take snapshots of VMs running in another project called proj-vm. What
should you do?


A.

Download the private key from the service account, and add it to each VMs custom metadata.


B.

Download the private key from the service account, and add the private key to each VM’s SSH keys.


C.

Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.


D.

When creating the VMs, set the service account’s API scope for Compute Engine to read/write.





C.
  

Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.
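The cross-project grant can be sketched as follows; the service account name is a placeholder, while the project IDs come from the question:

```shell
# In proj-vm, allow the service account from proj-sa to manage disks
# and snapshots ("snapshots@..." is a placeholder account name)
gcloud projects add-iam-policy-binding proj-vm \
    --member="serviceAccount:snapshots@proj-sa.iam.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"
```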



You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps. What should you do?


A.

Use gcloud iam roles copy and specify the production project as the destination project.


B.

Use gcloud iam roles copy and specify your organization as the destination organization.


C.

In the Google Cloud Platform Console, use the ‘create role from role’ functionality.


D.

In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable
permissions.





A.
  

Use gcloud iam roles copy and specify the production project as the destination project.

Copying the custom roles directly into the production project with gcloud iam roles copy is the fewest-step path; copying to the organization would instead create organization-level roles rather than reproducing the roles in the new project.
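A sketch of the copy command, where the role and project IDs are placeholders:

```shell
# Copy a custom role from the development project into the production project.
# "appDeployer", "dev-project", and "prod-project" are placeholders.
gcloud iam roles copy \
    --source="projects/dev-project/roles/appDeployer" \
    --destination="appDeployer" \
    --dest-project="prod-project"
```

Run once per custom role; `--source-organization`/`--dest-organization` are the equivalent flags for organization-level roles.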



