SOA-C02 Practice Test Questions

486 Questions


Topic 1: Mix Questions

A company hosts a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 to route traffic. The company also has a static website that is configured in an Amazon S3 bucket. A SysOps administrator must use the static website as a backup to the web application. The failover to the static website must be fully automated. Which combination of actions will meet these requirements? (Choose two.)


A. Create a primary failover routing policy record. Configure the value to be the ALB.


B. Create an AWS Lambda function to switch from the primary website to the secondary website when the health check fails.


C. Create a primary failover routing policy record. Configure the value to be the ALB. Associate the record with a Route 53 health check.


D. Create a secondary failover routing policy record. Configure the value to be the static website. Associate the record with a Route 53 health check.


E. Create a secondary failover routing policy record. Configure the value to be the static website.





C.
  Create a primary failover routing policy record. Configure the value to be the ALB. Associate the record with a Route 53 health check.

E.
  Create a secondary failover routing policy record. Configure the value to be the static website.

Explanation: To use the static website as a backup to the web application and ensure automated failover, the SysOps administrator should set up failover routing policies in Amazon Route 53.
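The record pair described above (a primary failover record for the ALB tied to a health check, and a secondary failover record for the static website) can be sketched as a Route 53 ChangeResourceRecordSets payload. This is a minimal sketch: the domain, health check ID, ALB DNS name, and hosted zone IDs below are example placeholders, not values from the question.

```python
import json

HEALTH_CHECK_ID = "abcd1234-example"  # placeholder Route 53 health check ID

def failover_change_batch():
    """Build a ChangeResourceRecordSets change batch with a PRIMARY record
    (the ALB, associated with a health check) and a SECONDARY record
    (the S3 static website). Route 53 serves the secondary record only
    when the primary's health check fails."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-alb",
                    "Failover": "PRIMARY",
                    "HealthCheckId": HEALTH_CHECK_ID,
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # example ALB hosted zone
                        "DNSName": "my-alb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-s3",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z3AQBSTGFYJSTF",  # example S3 website hosted zone
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    }

print(json.dumps(failover_change_batch(), indent=2))
```

With both records in place, failover is fully automated: no Lambda function is needed, because Route 53 itself switches to the secondary record when the health check fails.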

A company's SysOps administrator attempts to restore an Amazon Elastic Block Store (Amazon EBS) snapshot. However, the snapshot is missing because another system administrator accidentally deleted the snapshot. The company needs the ability to recover snapshots for a specified period of time after snapshots are deleted. Which solution will provide this functionality?


A. Turn on deletion protection on individual EBS snapshots that need to be kept.


B. Create an IAM policy that denies the deletion of EBS snapshots by using a condition statement for the snapshot age. Apply the policy to all users.


C. Create a Recycle Bin retention rule for EBS snapshots for the desired retention period.


D. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function to copy EBS snapshots to Amazon S3 Glacier.





C.
  Create a Recycle Bin retention rule for EBS snapshots for the desired retention period.
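The Recycle Bin retention rule from the answer maps directly onto the `rbin` CreateRule request. A minimal sketch, assuming a 14-day recovery window (the retention period and description are illustrative, not from the question):

```python
import json

def snapshot_retention_rule(days: int):
    """Build a Recycle Bin CreateRule request body that retains deleted
    EBS snapshots for `days` days, during which they can be restored."""
    return {
        "ResourceType": "EBS_SNAPSHOT",
        "RetentionPeriod": {
            "RetentionPeriodValue": days,
            "RetentionPeriodUnit": "DAYS",
        },
        "Description": f"Recover deleted EBS snapshots within {days} days",
    }

print(json.dumps(snapshot_retention_rule(14), indent=2))
```

Once the rule exists, any snapshot deleted in the Region moves to the Recycle Bin instead of being removed immediately, which is exactly the recovery window the company needs.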

An environment consists of 100 Amazon EC2 Windows instances. The Amazon CloudWatch agent is deployed and running on all EC2 instances with a baseline configuration file to capture log files. There is a new requirement to capture the DHCP log files that exist on 50 of the instances. What is the MOST operationally efficient way to meet this new requirement?


A. Create an additional CloudWatch agent configuration file to capture the DHCP logs. Use AWS Systems Manager Run Command to restart the CloudWatch agent on each EC2 instance with the append-config option to apply the additional configuration file.


B. Log in to each EC2 instance with administrator rights. Create a PowerShell script to push the needed baseline log files and DHCP log files to CloudWatch.


C. Run the CloudWatch agent configuration file wizard on each EC2 instance. Verify that the baseline log files are included, and add the DHCP log files during the wizard creation process.


D. Run the CloudWatch agent configuration file wizard on each EC2 instance and select the advanced detail level. This will capture the operating system log files.





A.
  Create an additional CloudWatch agent configuration file to capture the DHCP logs. Use AWS Systems Manager Run Command to restart the CloudWatch agent on each EC2 instance with the append-config option to apply the additional configuration file.

Explanation: The most operationally efficient way to capture DHCP log files on 50 of the 100 EC2 instances is to create an additional CloudWatch agent configuration file and use AWS Systems Manager Run Command to update the CloudWatch agent configuration.
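The approach can be sketched as two pieces: a small additional agent configuration that collects only the DHCP logs, and the Run Command parameters for the AmazonCloudWatch-ManageAgent document that appends it and restarts the agent. The Windows DHCP log path, log group name, and SSM parameter name below are assumptions for illustration:

```python
import json

# Additional CloudWatch agent config capturing only DHCP server logs.
# The file path is an assumed location for Windows DHCP audit logs.
dhcp_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "C:\\Windows\\System32\\dhcp\\DhcpSrvLog-*.log",
                        "log_group_name": "dhcp-logs",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}

# Run Command parameters for the AmazonCloudWatch-ManageAgent document,
# targeting the 50 instances (e.g. by tag). "configure (append)" merges the
# extra config with the existing baseline instead of replacing it.
run_command_parameters = {
    "action": ["configure (append)"],
    "mode": ["ec2"],
    "optionalConfigurationSource": ["ssm"],
    "optionalConfigurationLocation": ["dhcp-agent-config"],  # assumed SSM parameter name
    "optionalRestart": ["yes"],
}

print(json.dumps(dhcp_config, indent=2))
```

Because the extra configuration is appended rather than merged by hand, the baseline configuration on all 100 instances stays untouched.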

A SysOps administrator manages the caching of an Amazon CloudFront distribution that serves pages of a website. The SysOps administrator needs to configure the distribution so that the TTL of individual pages can vary. The TTL of the individual pages must remain within the maximum TTL and the minimum TTL that are set for the distribution. Which solution will meet these requirements?


A. Create an AWS Lambda function that calls the CreateInvalidation API operation when a change in cache time is necessary.


B. Add a Cache-Control: max-age directive to the object at the origin when content is being returned to CloudFront.


C. Add a no-cache header through a Lambda@Edge function in response to the Viewer response.


D. Add an Expires header through a CloudFront function in response to the Viewer response.





B.
  Add a Cache-Control: max-age directive to the object at the origin when content is being returned to CloudFront.

Explanation: To allow the TTL (Time to Live) of individual pages to vary while adhering to the maximum and minimum TTL settings configured for the Amazon CloudFront distribution, setting cache behaviors directly at the origin is most effective:

  • Use Cache-Control Headers: By configuring the Cache-Control: max-age directive in the HTTP headers of the objects served from the origin, you can specify how long an object should be cached by CloudFront before it is considered stale.
  • Integration with CloudFront: When CloudFront receives a request for an object, it checks the cache-control header to determine the TTL for that specific object. This allows individual objects to have their own TTL settings, as long as they are within the globally set minimum and maximum TTL values for the distribution.
  • Operational Efficiency: This method does not require any additional AWS services or modifications to the distribution settings. It leverages HTTP standard practices, ensuring compatibility and ease of management.
Implementing the TTL management through cache-control headers at the origin provides precise control over caching behavior, aligning with varying content freshness requirements without complex configurations.
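The clamping behavior described above can be made concrete with a small simplified model: CloudFront honors the origin's `Cache-Control: max-age`, but never caches for less than the distribution's minimum TTL or more than its maximum TTL, and falls back to the default TTL when no header is present.

```python
def effective_ttl(origin_max_age, minimum_ttl, maximum_ttl, default_ttl):
    """Simplified model of the TTL CloudFront applies to a cached object."""
    if origin_max_age is None:
        return default_ttl  # no Cache-Control header: use the default TTL
    # Honor the origin's max-age, clamped to the distribution's min/max TTL.
    return max(minimum_ttl, min(origin_max_age, maximum_ttl))

# Distribution with min TTL 60 s, max TTL 86400 s, default TTL 3600 s:
print(effective_ttl(300, 60, 86400, 3600))     # page with max-age=300 → 300
print(effective_ttl(10, 60, 86400, 3600))      # max-age below the minimum → 60
print(effective_ttl(999999, 60, 86400, 3600))  # max-age above the maximum → 86400
```

This is why option B works: each page's origin response controls its own TTL, while the distribution's settings still bound the result.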

A SysOps administrator has created a VPC that contains a public subnet and a private subnet. Amazon EC2 instances that were launched in the private subnet cannot access the internet. The default network ACL is active on all subnets in the VPC, and all security groups allow all outbound traffic. Which solution will provide the EC2 instances in the private subnet with access to the internet?


A. Create a NAT gateway in the public subnet. Create a route from the private subnet to the NAT gateway.


B. Create a NAT gateway in the public subnet. Create a route from the public subnet to the NAT gateway.


C. Create a NAT gateway in the private subnet. Create a route from the public subnet to the NAT gateway.


D. Create a NAT gateway in the private subnet. Create a route from the private subnet to the NAT gateway.





A.
  Create a NAT gateway in the public subnet. Create a route from the private subnet to the NAT gateway.

Explanation: The NAT gateway resides in the public subnet, and traffic should be routed from the private subnet to the NAT gateway: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
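The two pieces of option A can be sketched as the request parameters you would pass to the EC2 API (CreateNatGateway and CreateRoute). All resource IDs below are placeholders for illustration:

```python
# The NAT gateway itself is created in the PUBLIC subnet, with an Elastic IP.
nat_gateway = {
    "SubnetId": "subnet-public-1234",     # must be the public subnet
    "AllocationId": "eipalloc-0abc1234",  # Elastic IP allocation for the gateway
}

# The route is added to the PRIVATE subnet's route table, sending all
# internet-bound traffic to the NAT gateway.
private_route = {
    "RouteTableId": "rtb-private-5678",   # route table associated with the private subnet
    "DestinationCidrBlock": "0.0.0.0/0",  # default route: all internet-bound traffic
    "NatGatewayId": "nat-0abcd1234",
}

print(nat_gateway["SubnetId"], "->", private_route["NatGatewayId"])
```

The asymmetry is the whole point: the gateway lives in the public subnet (so it can reach the internet gateway), but the route belongs to the private subnet (so the instances' traffic is sent through it).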

A company is using Amazon Elastic Container Service (Amazon ECS) to run a containerized application on Amazon EC2 instances. A SysOps administrator needs to monitor only traffic flows between the ECS tasks. Which combination of steps should the SysOps administrator take to meet this requirement? (Select TWO.)


A. Configure Amazon CloudWatch Logs on the elastic network interface of each task.


B. Configure VPC Flow Logs on the elastic network interface of each task.


C. Specify the awsvpc network mode in the task definition.


D. Specify the bridge network mode in the task definition.


E. Specify the host network mode in the task definition.





B.
  Configure VPC Flow Logs on the elastic network interface of each task.

C.
  Specify the awsvpc network mode in the task definition.
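The reason awsvpc mode is required is that it gives each task its own elastic network interface, which is what VPC Flow Logs can then be attached to. A minimal sketch of such a task definition (family, image, and ports are example values):

```python
import json

# Task definition using awsvpc network mode: each running task gets its own
# elastic network interface (ENI), so per-task VPC Flow Logs become possible.
task_definition = {
    "family": "web-task",
    "networkMode": "awsvpc",  # one ENI per task; bridge/host modes share the host's ENI
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
    "requiresCompatibilities": ["EC2"],
}

print(json.dumps(task_definition, indent=2))
```

With bridge or host mode, tasks share the instance's network interface, so flow logs could not isolate traffic between individual tasks.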

A company deployed a new web application on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group. Users report that they are frequently being prompted to log in. What should a SysOps administrator do to resolve this issue?


A. Configure an Amazon CloudFront distribution with the ALB as the origin.


B. Enable sticky sessions (session affinity) for the target group of EC2 instances.


C. Redeploy the EC2 instances in a spread placement group.


D. Replace the ALB with a Network Load Balancer.





B.
  Enable sticky sessions (session affinity) for the target group of EC2 instances.

Explanation:
To resolve the issue of users being frequently prompted to log in, which typically indicates that session persistence is not configured:

  • Sticky Sessions: Enable sticky sessions (session affinity) on the ALB's target group. This configuration makes sure that all requests from a single user during a session are directed to the same EC2 instance, rather than being load balanced to different instances which might not share session data.
  • Configuration: This is done in the ALB settings under the target group attributes. Sticky sessions use a user-specific cookie generated by the ALB to route requests to the designated instance.
  • Session Cookie: The ALB handles the session cookie automatically, but you may adjust settings like the duration that the session cookie remains valid.
Enabling sticky sessions ensures that user sessions are maintained with the same server, reducing the instances of repeated logins and improving the user experience.
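The target group attributes involved can be sketched as the list you would pass to the elbv2 ModifyTargetGroupAttributes call. The one-day cookie duration is an example value, not a requirement from the question:

```python
# Target group attributes enabling ALB sticky sessions with the
# load-balancer-generated cookie. Duration (86400 s = 1 day) is an example.
stickiness_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
]

for attr in stickiness_attributes:
    print(f'{attr["Key"]} = {attr["Value"]}')
```

Once enabled, the ALB issues its cookie on the first response and routes subsequent requests carrying that cookie to the same target, which keeps the user's session on one instance.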

A company is running Amazon RDS for PostgreSQL Multi-AZ DB clusters. The company uses an AWS CloudFormation template to create the databases individually with a default size of 100 GB. The company creates the databases every Monday and deletes the databases every Friday. Occasionally, the databases run low on disk space and initiate an Amazon CloudWatch alarm. A SysOps administrator must prevent the databases from running low on disk space in the future. Which solution will meet these requirements with the FEWEST changes to the application?


A. Modify the CloudFormation template to use Amazon Aurora PostgreSQL as the DB engine.


B. Modify the CloudFormation template to use Amazon DynamoDB as the database. Activate storage auto scaling during creation of the tables.


C. Modify the CloudFormation template to activate storage auto scaling on the existing DB instances.


D. Create a CloudWatch alarm to monitor DB instance storage space. Configure the alarm to invoke the VACUUM command.





C.
  Modify the CloudFormation template to activate storage auto scaling on the existing DB instances.

Explanation:
To prevent Amazon RDS for PostgreSQL Multi-AZ DB instances from running low on disk space, enabling storage auto-scaling is the most straightforward solution. This feature automatically adjusts the storage capacity of the DB instance when it approaches its limit, thus preventing the database from running out of space and triggering CloudWatch alarms.
Option C is the least intrusive and most effective solution as it only requires a modification to the existing CloudFormation template to enable auto-scaling on storage. For reference, see AWS documentation on managing RDS storage automatically Managing RDS Storage Automatically.
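In CloudFormation terms, the change amounts to adding the `MaxAllocatedStorage` property to the `AWS::RDS::DBInstance` resource; RDS then grows storage automatically up to that ceiling. A minimal sketch of the resource (instance class and the 500 GiB ceiling are example values):

```python
import json

# Sketch of the CloudFormation resource after the change: setting
# MaxAllocatedStorage enables RDS storage auto scaling up to that ceiling.
db_instance = {
    "Type": "AWS::RDS::DBInstance",
    "Properties": {
        "Engine": "postgres",
        "DBInstanceClass": "db.m5.large",  # example instance class
        "AllocatedStorage": "100",          # initial size in GiB (from the question)
        "MaxAllocatedStorage": 500,         # assumed ceiling for auto scaling
        "MultiAZ": True,
    },
}

print(json.dumps(db_instance, indent=2))
```

Because the databases are recreated from the template every Monday, a single template change covers every future database with no application changes.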

The SysOps administrator needs to create a key policy that grants data engineers least privilege access to decrypt and read data from an S3 bucket encrypted with KMS.


A. "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:Encrypt", "kms:DescribeKey"


B. "kms:ListAliases", "kms:GetKeyPolicy", "kms:Describe*", "kms:Decrypt"


C. "kms:ListAliases", "kms:DescribeKey", "kms:Decrypt"


D. "kms:Update*", "kms:TagResource", "kms:Revoke*", "kms:Put*", "kms:List*", "kms:Get*", "kms:Enable*", "kms:Disable*", "kms:Describe*", "kms:Delete*", "kms:Create*", "kms:CancelKeyDeletion"





C.
  "kms:ListAliases", "kms:DescribeKey", "kms:Decrypt"

Explanation: The least privilege required for reading encrypted data involves kms:Decrypt to decrypt, kms:DescribeKey to understand key properties, and kms:ListAliases if needed to identify the key alias.
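Placed in a key policy statement, the three actions from option C look like the sketch below. The account ID and role name are placeholders, not values from the question:

```python
import json

# Key policy statement granting only the read-side KMS actions to the
# data engineers' role. Principal ARN is a placeholder.
read_only_statement = {
    "Sid": "AllowDataEngineersToDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataEngineers"},
    "Action": ["kms:ListAliases", "kms:DescribeKey", "kms:Decrypt"],
    "Resource": "*",  # in a key policy, "*" refers to the key the policy is attached to
}

print(json.dumps(read_only_statement, indent=2))
```

Note what is deliberately absent: no `kms:Encrypt`, no `kms:GenerateDataKey*`, and no key-management actions, which is what makes this the least-privilege choice for read-only access.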

The company requires a disaster recovery solution for an Aurora PostgreSQL database with a 20-second RPO.


A. Reconfigure the database to be an Aurora global database. Set the RPO to 20 seconds.


B. Reconfigure the database to be an Aurora Serverless v2 database with an Aurora Replica in a separate Availability Zone. Set the replica lag to 20 seconds.


C. Modify the database to use a Multi-AZ cluster that has two readable standby instances in separate Availability Zones. Add an Aurora Replica in a separate Availability Zone. Set the replica lag to 20 seconds.





A.
  Reconfigure the database to be an Aurora global database. Set the RPO to 20 seconds.

Explanation: Aurora Global Databases are designed for cross-Region disaster recovery with very low RPO, meeting the 20-second requirement. Setting up Aurora as a global database with the correct configuration ensures low-latency replication and rapid failover, making it ideal for compliance with strict disaster recovery requirements.
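For Aurora PostgreSQL global databases, the managed RPO is configured through the `rds.global_db_rpo` DB cluster parameter (its minimum value is 20 seconds). A sketch of the ModifyDBClusterParameterGroup request; the parameter group name is an example:

```python
# Setting the managed RPO on an Aurora PostgreSQL global database via the
# rds.global_db_rpo cluster parameter. Parameter group name is an example.
rpo_parameter = {
    "DBClusterParameterGroupName": "aurora-pg-global",
    "Parameters": [
        {
            "ParameterName": "rds.global_db_rpo",
            "ParameterValue": "20",      # seconds; 20 is the minimum allowed
            "ApplyMethod": "immediate",
        }
    ],
}

print(rpo_parameter["Parameters"][0]["ParameterName"],
      "=", rpo_parameter["Parameters"][0]["ParameterValue"])
```

With this parameter set, Aurora blocks commits on the primary Region if replication lag would exceed the RPO, guaranteeing the 20-second recovery point objective.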

A company needs to upload gigabytes of files every day. The company needs to achieve higher throughput and upload speeds to Amazon S3. Which action should a SysOps administrator take to meet this requirement?


A. Create an Amazon CloudFront distribution with the GET HTTP method allowed and the S3 bucket as an origin.


B. Create an Amazon ElastiCache cluster and enable caching for the S3 bucket.


C. Set up AWS Global Accelerator and configure it with the S3 bucket.


D. Enable S3 Transfer Acceleration and use the acceleration endpoint when uploading files.





D.
  Enable S3 Transfer Acceleration and use the acceleration endpoint when uploading files.

Explanation: Amazon S3 Transfer Acceleration can provide fast and secure transfers over long distances between your client and Amazon S3. Transfer Acceleration uses Amazon CloudFront's globally distributed edge locations.
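The two steps can be sketched without live AWS calls: the PutBucketAccelerateConfiguration request body that enables the feature, and the accelerate endpoint that clients then upload through. The bucket name is an example:

```python
# Request body for PutBucketAccelerateConfiguration (bucket name is an example).
accelerate_request = {
    "Bucket": "my-uploads-bucket",
    "AccelerateConfiguration": {"Status": "Enabled"},
}

def accelerate_endpoint(bucket: str) -> str:
    """Endpoint clients upload through once Transfer Acceleration is enabled;
    requests enter AWS at the nearest CloudFront edge location."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("my-uploads-bucket"))
```

With boto3, the same routing can be selected per client by passing `Config(s3={"use_accelerate_endpoint": True})` instead of building the URL by hand.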

To manage Auto Scaling group instances that have OS vulnerabilities, the SysOps administrator needs an automated patching solution.


A. Use AWS Systems Manager Patch Manager to patch the instances during a scheduled maintenance window. In the AWS-RunPatchBaseline document, ensure that the RebootOption parameter is set to RebootIfNeeded.


B. Use EC2 Image Builder pipelines on a schedule to create new Amazon Machine Images (AMIs) and new launch templates that reference the new AMIs. Use the instance refresh feature for EC2 Auto Scaling to replace instances.


C. Use AWS Config to scan for operating system vulnerabilities and to patch instances when the instance status changes to NON_COMPLIANT. Send an Amazon Simple Notification Service (Amazon SNS) notification to an operations team to reboot the instances during off-peak hours.


D. In the Auto Scaling launch template, provide an Amazon Machine Image (AMI) ID for an AWS-provided base image. Update the user data with a shell script to download and install patches.





A.
  Use AWS Systems Manager Patch Manager to patch the instances during a scheduled maintenance window. In the AWS-RunPatchBaseline document, ensure that the RebootOption parameter is set to RebootIfNeeded.

Explanation: Using AWS Systems Manager Patch Manager with a maintenance window is a best practice for automating OS patch management across instances in an Auto Scaling group.

  • Patch Manager: Allows for scheduled patching according to maintenance windows, ensuring minimal impact on application uptime.
  • RebootOption parameter: Setting this to RebootIfNeeded ensures patches are applied fully when a reboot is required.
  • AWS-RunPatchBaseline: This document automates the patching process and can be customized based on compliance requirements.
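The bullets above can be sketched as the Run Command parameters registered as a maintenance window task. Targeting instances by a PatchGroup tag (and the tag value) is an assumption for illustration:

```python
import json

# Run Command parameters for AWS-RunPatchBaseline, as registered in a
# Systems Manager maintenance window task. Tag-based targeting is an example.
patch_task = {
    "DocumentName": "AWS-RunPatchBaseline",
    "Targets": [{"Key": "tag:PatchGroup", "Values": ["web-asg"]}],
    "Parameters": {
        "Operation": ["Install"],             # Scan would only report compliance
        "RebootOption": ["RebootIfNeeded"],   # reboot only when a patch requires it
    },
}

print(json.dumps(patch_task, indent=2))
```

Because the task runs inside a maintenance window, patching and any required reboots happen on the schedule the team controls, with no manual intervention per instance.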

