Associate Cloud Engineer Practice Test Questions

113 Questions


Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which storage option should you use?


A.

Multi-Regional Storage


B.

Regional Storage


C.

Nearline Storage


D.

Coldline Storage





D.
  Coldline Storage



Summary:
The scenario describes storing application backup files specifically for disaster recovery (DR). The key characteristic of DR backups is that they are rarely accessed, only being retrieved in the event of an actual disaster or during infrequent recovery drills. Google's recommended practice for this type of data is to use the most cost-effective storage class that meets the recovery time objective. Coldline Storage is optimized for data accessed less than once a year, making it the most cost-effective and appropriate choice for long-term DR backups.

Correct Option:

D. Coldline Storage
Coldline Storage is designed for long-term data archiving and disaster recovery, with a minimum storage duration of 90 days.

It offers the lowest storage cost among the options, which aligns with the goal of cost-effectively storing data that is rarely used.

While it has a cost for data access, this is a suitable trade-off for DR backups, as the frequency of access is expected to be extremely low. This directly follows Google's best practice of matching the storage class to the data's access pattern.
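As a quick illustration, a bucket with Coldline as its default storage class can be created with gsutil; the bucket name, location, and file name below are placeholders, not values from the scenario:

    # Create a Coldline bucket for DR backups (name and location are examples)
    gsutil mb -c coldline -l us-central1 gs://example-dr-backups

    # Upload a backup file to it
    gsutil cp backup-2024-01-01.tar.gz gs://example-dr-backups/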

Incorrect Option:

A. Multi-Regional Storage
This storage class is designed for frequently accessed ("hot") data that needs to be distributed globally for high availability and low latency (e.g., website content, streaming video). It is the most expensive option here, making it overkill and cost-inefficient for backup files that sit idle.

B. Regional Storage
Regional Storage is for frequently accessed data within a single region. It is more expensive than Nearline or Coldline. Like Multi-Regional, it is not cost-optimized for the "write once, read rarely" pattern of DR backups.

C. Nearline Storage
Nearline Storage is a good choice for data accessed less than once a month (e.g., monthly backups for analysis). However, for pure disaster recovery purposes where access is expected to be even less frequent (less than once a year), Coldline Storage is the more cost-effective and recommended choice.

Reference:
Google Cloud Documentation: Storage classes - https://cloud.google.com/storage/docs/storage-classes

This official documentation provides the specific use cases for each storage class. It explicitly recommends Coldline for "Disaster recovery" and data stored for a long period of time, typically accessed no more than once a year. This matches the described scenario perfectly.

You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do?


A.

When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.


B.

Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.


C.

Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.


D.

Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.





A.
  When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.



Summary:
To ensure a VM uses a specific service account, the service account must be assigned to the VM instance during its creation. This is a fundamental configuration of the VM's identity. The Google Cloud Console provides a direct option to specify a service account in the "Identity and API access" section during the VM creation workflow. This is the recommended and most secure method, as it avoids handling private key files and allows the VM to automatically use the built-in metadata server for authentication.

Correct Option:

A. When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.
This is the correct and Google-recommended practice. During creation, you can select the desired service account from a dropdown menu.

The platform automatically handles the identity binding, and the VM's internal metadata server will provide credentials for this specific service account to all applications running on it.

This method is secure, as it avoids the risk of exposing a private JSON key, and is the most straightforward to implement and manage.
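The same assignment can also be made from the CLI at creation time. A minimal sketch, with the instance name, zone, service account email, and scope as placeholders:

    # Create the VM with the custom service account attached
    gcloud compute instances create sql-client-vm \
        --zone=us-central1-a \
        --service-account=cloudsql-client@my-project.iam.gserviceaccount.com \
        --scopes=https://www.googleapis.com/auth/cloud-platform

Applications on the VM then obtain tokens for this account automatically from the metadata server, with no key file involved.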

Incorrect Option:

B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
This is incorrect and insecure. Project metadata is applied to all VMs in the project, which would incorrectly assign this identity to every instance. Furthermore, storing a sensitive private key in plain text within metadata is a severe security anti-pattern and is not the mechanism for assigning a service account.

C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
This is also incorrect. While VM-specific metadata is better than project-wide, the mechanism for assigning an identity is not through a custom metadata key containing a private key. The service account is a fundamental property of the VM instance, not a piece of custom metadata. Handling raw private keys is unnecessary and increases security risks.

D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.
This is an invalid method. The ~/.gcloud/ path is not a location that the gcloud CLI or Google Cloud client libraries read credentials from (on Linux, the CLI keeps its configuration under ~/.config/gcloud). The standard way for applications on a Compute Engine instance to authenticate is automatically through the metadata server, not by manually placing key files. This approach would not work as intended.

Reference:
Google Cloud Documentation: Service accounts for instances - https://cloud.google.com/compute/docs/access/service-accounts#associating_a_service_account_to_an_instance

This official documentation states: "You can assign a service account to an instance when you create the instance. An instance can have only one service account assigned to it at a time." It directs users to the "Identity and API access" section in the Cloud Console to make this selection.

You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes. What should you do?


A.

Enable the Node Auto-Repair feature for your GKE cluster.


B.

Enable the Node Auto-Upgrades feature for your GKE cluster.


C.

Select the latest available cluster version for your GKE cluster.


D.

Select “Container-Optimized OS (cos)” as a node image for your GKE cluster.





B.
  Enable the Node Auto-Upgrades feature for your GKE cluster.



Summary:
To ensure a GKE cluster always runs a supported and stable version of Kubernetes, you must enable a process that automatically updates the cluster's control plane and nodes as new stable versions become available. While selecting a stable version at creation time is good, it is a one-time action. The "Node Auto-Upgrades" feature is the automated, Google-recommended solution that proactively keeps your cluster updated with minor version releases, which are vetted for stability and security, ensuring long-term support.

Correct Option:

B. Enable the Node Auto-Upgrades feature for your GKE cluster.
This feature automatically upgrades the node pools in your cluster to the latest stable minor version of Kubernetes. Google designates these versions as supported and stable.

It eliminates the manual effort and risk of falling behind on security patches and bug fixes. The GKE team manages the rollout, ensuring a smooth upgrade process.

This is a core Google best practice for maintaining cluster health, security, and supportability over time. The control plane is automatically upgraded by Google, and this feature handles the nodes.
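A minimal gcloud sketch of enabling the feature; the cluster, node pool, and zone names are placeholders:

    # Enable node auto-upgrade when creating the cluster
    gcloud container clusters create demo-cluster \
        --zone=us-central1-a \
        --enable-autoupgrade

    # Or turn it on for an existing node pool
    gcloud container node-pools update default-pool \
        --cluster=demo-cluster \
        --zone=us-central1-a \
        --enable-autoupgrade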

Incorrect Option:

A. Enable the Node Auto-Repair feature for your GKE cluster.
Node Auto-Repair is a crucial feature for node health, as it automatically recreates nodes that become unhealthy. However, it does not update the Kubernetes version running on the nodes. A node can be healthy but still be running an old, unsupported version of Kubernetes.

C. Select the latest available cluster version for your GKE cluster.
This is only a point-in-time solution. The "latest available" version at creation will eventually become outdated. This action does not provide any ongoing mechanism to receive new stable releases, so the cluster will not "always" run a supported version without manual intervention.

D. Select “Container-Optimized OS (cos)” as a node image for your GKE cluster.
Container-Optimized OS is a secure, Google-recommended base OS for GKE nodes. However, the node OS is separate from the Kubernetes version itself. Choosing COS does not automatically update the version of the Kubernetes components (kubelet, kube-proxy) running on the node.

Reference:
Google Cloud Documentation: Auto-upgrading nodes - https://cloud.google.com/kubernetes-engine/docs/concepts/auto-upgrades

This official documentation strongly recommends enabling node auto-upgrades "to ensure your cluster is always running a supported and stable version of Kubernetes." It explains that GKE automatically handles the process of selecting and applying the next suitable version.

You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do?


A.

Install an RDP client on your desktop. Verify that a firewall rule for port 3389 exists.


B.

Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.


C.

Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.






B.
  Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.



Summary:
To connect to a Windows Server instance on Compute Engine, you must use the Remote Desktop Protocol (RDP). The most straightforward process involves ensuring you have an RDP client locally, setting the login credentials for the instance via the Google Cloud Console (as you don't have a domain controller), and ensuring the necessary firewall rule for RDP (port 3389) is in place. The default network includes a rule allowing RDP, which simplifies the process.

Correct Option:

B. Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.
This is the standard and most direct method. The Google Cloud Console provides a built-in feature to reset a Windows password for a VM instance, which creates the user account and credentials needed for RDP login.

By default, the default VPC network includes a firewall rule named default-allow-rdp, which opens port 3389 for RDP traffic. This means you typically don't need to create a new rule, fulfilling the "fewest steps" requirement.

You then use any standard RDP client (like the built-in Windows Remote Desktop Connection) with the instance's external IP address and the credentials you just set to connect.
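A minimal sketch of setting the credentials from the CLI; the instance name, zone, and username are placeholders:

    # Create (or reset) a local Windows account and generate a password
    gcloud compute reset-windows-password sql-test-vm \
        --zone=us-central1-a \
        --user=admin-user

The command prints the username, a generated password, and the instance's external IP address, which you can enter directly into your RDP client.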

Incorrect Option:

A. Install an RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
This process is incomplete. While it covers the client and network access, it misses the most critical step: establishing the username and password needed to authenticate to the Windows instance. Without setting these credentials, you cannot log in.

C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.
This option contains a critical error. Port 22 is used for SSH, which is for Linux instances. The required port for RDP to a Windows instance is 3389. While the GCP Console does have an "RDP" button that launches a browser-based client, verifying the wrong port demonstrates a misunderstanding of the connection protocol.

Reference:
Google Cloud Documentation: Connecting to Windows instances - https://cloud.google.com/compute/docs/instances/connecting-to-windows

This official guide outlines the process, which involves using the Google Cloud Console to set a password and then connecting using an RDP client. It confirms that the default-allow-rdp firewall rule enables RDP access on port 3389.

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?


A.

Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.


B.

Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.


C.

Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).


D.

Create an instance group for the instances. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.





A.
  Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.



Summary:
The requirements are for 10 instances to be highly available during both crashes and system maintenance. This requires two specific features: "Automatic Restart" to handle crashes and "Live Migration" to handle host maintenance without downtime. An instance template is the correct way to define these settings uniformly for a group of instances, and placing them in a managed instance group ensures they are distributed across multiple hardware hosts for higher availability.

Correct Option:

A. Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.

Automatic Restart:
When set to On, the system will attempt to automatically restart an instance if it crashes, fulfilling the first requirement.

On-host Maintenance:
When set to Migrate VM instance, the system performs a live migration of the instance to another host during infrastructure maintenance, preventing downtime and ensuring high availability. This fulfills the second requirement.

Instance Group:
Deploying the instances via a managed instance group spreads them across multiple underlying hosts (and, with a regional MIG, across zones), making the application more resilient to host and zone failures.
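A minimal gcloud sketch of this setup; the template name, group name, machine type, and zone are placeholders:

    # Instance template with automatic restart and live migration enabled
    gcloud compute instance-templates create ha-template \
        --machine-type=n1-standard-1 \
        --maintenance-policy=MIGRATE \
        --restart-on-failure

    # Managed instance group of 10 VMs built from the template
    gcloud compute instance-groups managed create ha-group \
        --zone=us-central1-a \
        --template=ha-template \
        --size=10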

Incorrect Option:

B. Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances.
This configuration fails both requirements. Turning Automatic Restart off means crashed instances will not be recovered. Setting On-host maintenance to Terminate means instances will be shut down during maintenance, causing downtime instead of providing high availability.

C. Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).
Autohealing is a powerful feature of managed instance groups (MIGs) that recreates instances based on an application health check (e.g., HTTP). However, it does not address the requirement for availability during system maintenance. It also does not configure the fundamental instance settings for automatic restart and live migration, which are set at the instance level, not the MIG level.

D. Create an instance group for the instances. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.
This setting in a MIG determines whether the group should retry creating a VM if it fails initially. It is unrelated to the core requirements of automatically restarting crashed VMs or maintaining availability during host maintenance.

Reference:
Google Cloud Documentation: Set availability policies - https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options

This official documentation explains the Automatic restart and On-host maintenance (Live Migration) settings, confirming that these are the correct configurations for maintaining availability during crashes and host maintenance events.

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do?


A. Use granular logging statements within a Deployment Manager template authored in Python.


B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.


C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.


D.

Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.





D.
  Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.



Summary:
The requirement is to validate a complex Deployment Manager template for correct dependencies and syntax before making any actual changes to the project's resources. The fastest and most direct method for this is to use the --preview flag. This command performs a dry run, parsing the template and simulating the creation of resources without actually deploying them. It provides immediate feedback by showing the intended state and will surface errors related to dependencies, syntax, or API availability.

Correct Option:

D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.
The --preview flag is specifically designed for this purpose. It performs a validation and simulation of the deployment.

It checks for syntax errors, verifies that all referenced resources and APIs are available, and displays a plan of the actions it would take (create, update, delete).

This provides the most rapid feedback loop because it validates the template against the live project configuration without incurring the cost or time of actual resource creation, allowing you to identify and fix dependency issues immediately.
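A minimal sketch of the workflow; the deployment and config file names are placeholders:

    # Stage the changed template as a preview (no resources are modified)
    gcloud deployment-manager deployments update my-deployment \
        --config=config.yaml \
        --preview

    # Commit the previewed changes once the plan looks correct...
    gcloud deployment-manager deployments update my-deployment

    # ...or discard the preview
    gcloud deployment-manager deployments cancel-preview my-deployment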

Incorrect Option:

A. Use granular logging statements within a Deployment Manager template authored in Python.
While adding logging can be useful for debugging complex logic during an actual deployment, it does not provide a pre-validation step. You would still need to run a deployment (either a preview or a real one) to see these logs, making it a slower feedback method than a dedicated preview.

B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.
This is a reactive method for monitoring a deployment that has already been executed. It does not help you validate the template before committing the changes. By the time you see logs, the deployment is already in progress and may be failing.

C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.
While this would technically validate the template, it is the slowest and most resource-intensive option. It requires maintaining a separate project, incurs the full time and cost of a real deployment, and does not provide feedback as rapidly as a simple, local --preview dry run.

Reference:
Google Cloud Documentation: Creating a deployment preview - https://cloud.google.com/deployment-manager/docs/configuration/create-deployment-previews

This official documentation explains the purpose and use of the preview command: "Before you update a deployment, you can create a preview to test your changes. Previews show you what changes will be made to your deployment and what resources will be added, deleted, or modified." This aligns perfectly with the goal of confirming dependencies before committing.

You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What should you do?


A.

Rely on live migration to move the workload to a machine with more memory.


B.

Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.


C.

Stop the VM, change the machine type to n1-standard-8, and start the VM.


D.

Stop the VM, increase the memory to 8 GB, and start the VM.





C.
  Stop the VM, change the machine type to n1-standard-8, and start the VM.



Summary:
In Google Cloud, a VM's vCPU and memory are defined by its machine type. You cannot independently change just the memory or just the CPU. To change the memory from 4 GB to 8 GB, you must select a new machine type that provides the desired configuration. This requires stopping the VM, changing its machine type to one with 8 GB of memory (like n1-standard-2 which has 2 vCPUs and 8 GB RAM), and then restarting it.

Correct Option:

C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
Machine types in Compute Engine are pre-defined combinations of vCPUs and memory. The n1-standard-8 type provides 8 vCPUs and 30 GB of memory.

Correction:
The question states the VM currently has 2 vCPUs and needs 8 GB of memory. The correct machine type to achieve this is n1-standard-2 (2 vCPUs, 8 GB RAM), not n1-standard-8. However, the fundamental process described in option C—stopping, changing the machine type, and starting—is the correct and only way to change the VM's memory.

The process is: 1) Stop the VM, 2) Edit the VM's configuration to change the machine type to one that fits the requirement (e.g., n1-standard-2), 3) Start the VM.
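A minimal sketch of that process; the instance name and zone are placeholders, and n1-standard-2 matches the 2 vCPU / 8 GB requirement:

    # 1) Stop the instance
    gcloud compute instances stop my-vm --zone=us-central1-a

    # 2) Change the machine type while the instance is stopped
    gcloud compute instances set-machine-type my-vm \
        --zone=us-central1-a \
        --machine-type=n1-standard-2

    # 3) Start it again with the new configuration
    gcloud compute instances start my-vm --zone=us-central1-a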

Incorrect Option:

A. Rely on live migration to move the workload to a machine with more memory.
Live Migration (configured via "On-host maintenance" set to "Migrate VM instance") is for maintaining VM availability during Google's infrastructure maintenance. It does not allow you to upgrade the VM's machine type or memory; the VM is migrated to a new host with an identical configuration.

B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
VM metadata is used for passing configuration scripts or custom key-value pairs to the operating system. It has no effect on the underlying hardware resources like memory or CPU allocated by the Compute Engine hypervisor.

D. Stop the VM, increase the memory to 8 GB, and start the VM.
This is the incorrect part of the provided answer. In Google Cloud, you cannot independently "increase the memory." The memory is tied to the machine type. The user interface and API do not offer a standalone option to change only the memory. You must select a new machine type that provides the desired amount of memory and vCPUs.

Reference:
Google Cloud Documentation: Changing a machine type - https://cloud.google.com/compute/docs/instances/changing-machine-type-of-stopped-instance

This official documentation explicitly states the correct procedure: "To change the machine type of an instance, the instance must be in the TERMINATED state... After you change the machine type and start the instance, it will run with the new machine type." It confirms that memory is changed by selecting a new machine type.

You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?


A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.


B. Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.


C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.


D. Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.





B.
  Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.

Summary:
Under BigQuery's on-demand pricing model, you are charged based on the number of bytes read by your query, not the number of bytes returned or the number of rows processed. The most accurate way to estimate this cost before execution is to use the dry run feature. This feature validates the query's syntax and structure and returns an estimate of the data it would need to read, without actually running the query or incurring charges.

Correct Option:

B. Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.
A dry run is specifically designed for this purpose. Using the bq command with the --dry_run flag will process the query to determine how much data it would scan and return the number of bytes read, but it will not execute the query or charge you.

Once you have the byte estimate, you can multiply it by the current on-demand pricing (e.g., $6.25 per TiB) to calculate the approximate cost. This provides a precise and reliable cost estimate before running the actual query.
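A minimal sketch of the dry run; the project, dataset, table, and column names are placeholders:

    # Validate the query and report the bytes it would read, without running it
    bq query --use_legacy_sql=false --dry_run \
        'SELECT field_a, field_b FROM `my-project.my_dataset.my_table` WHERE field_c > 100'

bq responds with the estimated number of bytes the query would process, which you can plug into the Pricing Calculator.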

Incorrect Option:

A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
This is impractical and unnecessary. Switching to flat-rate pricing means purchasing slot commitments (even the shortest, flex slots, carry a 60-second minimum) and changes the billing model for the project's query workloads, not for a single query. It is not a cost-estimation tool and is far more complex than using a dry run.

C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
This is incorrect because BigQuery on-demand pricing is based on the amount of data read (scanned), not the amount of data returned (the result set). A query that reads 1 TB of data but returns only 1 KB will still cost the same as a query that returns 1 GB from that 1 TB scan. The dry_run command correctly reports bytes read, not bytes returned.

D. Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
This is an inaccurate and costly method. A COUNT(*) query itself will scan the entire table and incur a charge. Furthermore, cost is not based on the number of rows but on the volume of data in those rows. A table with a billion small rows costs less to scan than a table with a million very wide rows. This method provides no reliable way to convert row count to bytes read for pricing.

Reference:
Google Cloud Documentation: Controlling costs in BigQuery - https://cloud.google.com/bigquery/docs/best-practices-costs#estimate-query-costs-using-a-dry-run

This official documentation explicitly recommends: "To estimate the number of bytes read by a query, create a dry run of the query by using the --dry_run flag." It confirms this is the best practice for estimating query costs before execution.


You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group?


A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.


B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.


C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.


D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.





B.
  Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.

Summary:
The requirement is to have exactly one VM running at all times, with no scaling. A managed instance group (MIG) is the correct tool to ensure an application is running and to automatically recreate the VM if it fails. To guarantee that only a single instance exists, you must configure the MIG for a fixed size of 1 by turning autoscaling off. This ensures the group will actively maintain one, and only one, healthy instance.

Correct Option:

B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
Autoscaling to Off: This disables any automatic scaling based on metrics like CPU or load balancer capacity. The group will not create or delete instances based on demand.

Minimum and Maximum instances to 1: This configures the MIG to have a fixed size of one instance. The MIG's health checking and auto-healing features will ensure this single instance is automatically repaired or recreated if it crashes, becomes unresponsive, or is terminated, meeting the "running at all times" requirement.
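A minimal gcloud sketch; the group, template, and zone names are placeholders (with no autoscaler attached, the group simply maintains its target size):

    # Fixed-size MIG of exactly one instance; autohealing recreates it on failure
    gcloud compute instance-groups managed create single-app-group \
        --zone=us-central1-a \
        --template=app-template \
        --size=1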

Incorrect Option:

A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
While this also results in one instance, it is an inefficient use of autoscaling. Enabling an autoscaler to maintain a fixed size adds unnecessary overhead. The autoscaler would continuously check metrics only to conclude that no scaling action is needed. The correct practice for a fixed size is to turn autoscaling off.

C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
This configuration violates the core requirement of "only a single instance." If the defined scaling metric (e.g., CPU) is exceeded, the autoscaler will create a second instance, resulting in two VMs running in the project.

D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
This configuration is illogical and not standard. When autoscaling is off, the MIG maintains a fixed size, which is determined by the "Target size" or "Number of instances" setting. The "minimum" and "maximum" fields are relevant only when autoscaling is enabled. This configuration would be ambiguous and not reliably maintain a single instance.

Reference:
Google Cloud Documentation: Setting a fixed number of instances - https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances#fixed_size

This official documentation explains how to create a managed instance group with a fixed size, which is the recommended way to maintain a static number of instances (like one) and leverage auto-healing.

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?


A. Deploy the monitoring pod in a StatefulSet object.


B. Deploy the monitoring pod in a DaemonSet object.


C. Reference the monitoring pod in a Deployment object


D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time





B.
  Deploy the monitoring pod in a DaemonSet object.

Summary:
The requirement is to run a specific pod on every node in the cluster, including nodes that are automatically added by the cluster autoscaler. A DaemonSet is the standard Kubernetes object designed precisely for this use case. It ensures that a copy of a pod is running on all (or a subset of) nodes in the cluster. When the cluster autoscaler adds a new node, the DaemonSet controller automatically deploys the monitoring pod to it.

Correct Option:

B. Deploy the monitoring pod in a DaemonSet object.
A DaemonSet is a Kubernetes controller that guarantees a copy of a pod runs on every node in the cluster.

This is ideal for cluster-wide services like monitoring agents, log collectors, or storage daemons that need to run on every single node to function correctly.

When the cluster autoscaler adds a new node, the DaemonSet controller detects the new node and immediately schedules the monitoring pod onto it, ensuring complete coverage without any manual intervention.
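A minimal sketch of such a DaemonSet; the object names and agent image are placeholders for the third-party monitoring solution:

    # monitoring-agent-ds.yaml - one monitoring pod per node
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: monitoring-agent
    spec:
      selector:
        matchLabels:
          app: monitoring-agent
      template:
        metadata:
          labels:
            app: monitoring-agent
        spec:
          containers:
          - name: agent
            image: example.com/monitoring-agent:latest

Apply it with:

    kubectl apply -f monitoring-agent-ds.yaml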

Incorrect Option:

A. Deploy the monitoring pod in a StatefulSet object.
A StatefulSet is used for stateful applications that require stable, unique network identifiers and stable, persistent storage (e.g., databases like MySQL or Kafka). It is not concerned with ensuring a pod runs on every node and does not automatically scale to new nodes added by the autoscaler.

C. Reference the monitoring pod in a Deployment object.
A Deployment is used for managing a set of stateless, replicated pods. You specify the number of replicas, and the scheduler decides where to place them. There is no guarantee that a pod will be placed on every node, and it will not automatically scale to new nodes unless you manually increase the replica count.

D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.
While a cluster initializer (or a cluster startup script) could run a pod on the initial set of nodes, it is a one-time execution. It would not automatically deploy the monitoring pod to new nodes that are later added by the cluster autoscaler, leading to incomplete monitoring coverage as the cluster scales.

Reference:
Kubernetes Documentation: DaemonSet - https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

This official Kubernetes documentation states: "A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected." This is the definitive feature that solves the problem.

You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?


A. Create a health check on port 443 and use that when creating the Managed Instance Group.


B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.


C. In the Instance Template, add the label ‘health-check’.


D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server





A.
  Create a health check on port 443 and use that when creating the Managed Instance Group.

Summary:
A managed instance group (MIG) can automatically recreate unhealthy VMs through a feature called autohealing. For this to work, the MIG must be configured with a health check that can determine the application's health. For an HTTPS web application, you need to create a health check that probes port 443 (the standard port for HTTPS). The MIG will then use the results of this health check to automatically repair (recreate) instances that are deemed unhealthy.

Correct Option:

A. Create a health check on port 443 and use that when creating the Managed Instance Group.
Autohealing is triggered by a health check that you define and attach to the MIG. This health check periodically probes your application.

Creating a health check that targets port 443 (HTTPS) allows it to correctly assess the status of your web application.

When the MIG detects that an instance has failed this health check for a consecutive number of times, it will automatically delete the unhealthy VM and create a new one to maintain the desired capacity, fulfilling the requirement to recreate unhealthy VMs.
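A minimal gcloud sketch; the health check name, group name, zone, and initial delay are placeholders:

    # HTTPS health check that probes port 443
    gcloud compute health-checks create https web-hc \
        --port=443 \
        --request-path=/

    # Attach it to the MIG to enable autohealing
    gcloud compute instance-groups managed update web-group \
        --zone=us-central1-a \
        --health-check=web-hc \
        --initial-delay=300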

Incorrect Option:

B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
This configuration provides high availability by distributing VMs across multiple zones, which protects against zonal failures. However, it does not automatically detect and repair a VM that is running but has a failed application. A VM in a failed state can exist in any zone unless a health check is used to trigger autohealing.

C. In the Instance Template, add the label ‘health-check’.
Labels are metadata used for filtering and grouping resources. They are purely organizational and have no functional impact. Adding a label called 'health-check' does not configure any actual monitoring or healing behavior for the MIG.

D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.
A startup script runs only once when a VM is booting to perform initial setup. It cannot be used for ongoing health monitoring. Furthermore, the metadata server is for retrieving instance information and cannot process or act upon application health "heartbeats." The mechanism for health reporting is the defined health check that probes the application from the outside.

Reference:
Google Cloud Documentation: Health checks and autohealing - https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs

This official documentation states: "You can configure a managed instance group to automatically recreate an instance in the group when the instance is deemed unhealthy... To use autohealing, you must configure a health check for the instance group." It confirms that a health check is the mandatory component for this functionality.

