Topic 2: Exam Pool B
A company is running an application in the AWS Cloud. The core business logic is running
on a set of Amazon EC2 instances in an Auto Scaling group. An Application Load Balancer
(ALB) distributes traffic to the EC2 instances. Amazon Route 53 record api.example.com is
pointing to the ALB.
The company's development team makes major updates to the business logic. The
company has a rule that when changes are deployed, only 10% of customers can receive
the new logic during a testing window. A customer must use the same version of the
business logic during the testing window.
How should the company deploy the updates to meet these requirements?
A. Create a second ALB, and deploy the new logic to a set of EC2 instances in a new Auto Scaling group. Configure the ALB to distribute traffic to the EC2 instances. Update the Route 53 record to use weighted routing, and point the record to both of the ALBs.
B. Create a second target group that is referenced by the ALB. Deploy the new logic to EC2 instances in this new target group. Update the ALB listener rule to use weighted target groups. Configure ALB target group stickiness.
C. Create a new launch configuration for the Auto Scaling group. Specify the launch configuration to use the AutoScalingRollingUpdate policy, and set the MaxBatchSize option to 10. Replace the launch configuration on the Auto Scaling group. Deploy the changes.
D. Create a second Auto Scaling group that is referenced by the ALB. Deploy the new logic on a set of EC2 instances in this new Auto Scaling group. Change the ALB routing algorithm to least outstanding requests (LOR). Configure ALB session stickiness.
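Option B describes ALB weighted target groups with target group stickiness. A minimal boto3 sketch of that listener configuration, assuming placeholder listener and target group ARNs and an illustrative 90/10 split:

```python
# Sketch: weighted forwarding across two target groups with target group
# stickiness, as described in option B. ARNs, weights, and the stickiness
# duration are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/..."    # placeholder
CURRENT_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/current/..."   # placeholder
NEW_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/new-logic/..."     # placeholder

elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                # 90% of new sessions stay on the current version, 10% get the new logic
                "TargetGroups": [
                    {"TargetGroupArn": CURRENT_TG_ARN, "Weight": 90},
                    {"TargetGroupArn": NEW_TG_ARN, "Weight": 10},
                ],
                # Stickiness pins each client to the target group it was first
                # routed to, so a customer keeps one version during the testing window
                "TargetGroupStickinessConfig": {
                    "Enabled": True,
                    "DurationSeconds": 86400,
                },
            },
        }
    ],
)
```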
A company's public API runs as tasks on Amazon Elastic Container Service (Amazon
ECS). The tasks run on AWS Fargate behind an Application Load Balancer (ALB) and are
configured with Service Auto Scaling for the tasks based on CPU utilization. This service
has been running well for several months.
Recently, API performance slowed down and made the application unusable. The company
discovered that a significant number of SQL injection attacks had occurred against the API
and that the API service had scaled to its maximum amount.
A solutions architect needs to implement a solution that prevents SQL injection attacks
from reaching the ECS API service. The solution must allow legitimate traffic through and
must maximize operational efficiency.
Which solution meets these requirements?
A. Create a new AWS WAF web ACL to monitor the HTTP requests and HTTPS requests that are forwarded to the ALB in front of the ECS tasks.
B. Create a new AWS WAF Bot Control implementation. Add a rule in the AWS WAF Bot Control managed rule group to monitor traffic and allow only legitimate traffic to the ALB in front of the ECS tasks.
C. Create a new AWS WAF web ACL. Add a new rule that blocks requests that match the SQL database rule group. Set the web ACL to allow all other traffic that does not match those rules. Attach the web ACL to the ALB in front of the ECS tasks.
D. Create a new AWS WAF web ACL. Create a new empty IP set in AWS WAF. Add a new rule to the web ACL to block requests that originate from IP addresses in the new IP set. Create an AWS Lambda function that scrapes the API logs for IP addresses that send SQL injection attacks, and add those IP addresses to the IP set. Attach the web ACL to the ALB in front of the ECS tasks.
Explanation:
The company should create a new AWS WAF web ACL. The company should add a new
rule that blocks requests that match the SQL database rule group. The company should set
the web ACL to allow all other traffic that does not match those rules. The company should
attach the web ACL to the ALB in front of the ECS tasks. This solution will meet the
requirements because AWS WAF is a web application firewall that lets you monitor and
control web requests that are forwarded to your web applications. You can use AWS WAF
to define customizable web security rules that control which traffic can access your web
applications and which traffic should be blocked1. By creating a new AWS WAF web ACL,
the company can create a collection of rules that define the conditions for allowing or
blocking web requests. By adding a new rule that blocks requests that match the SQL
database rule group, the company can prevent SQL injection attacks from reaching the
ECS API service. The SQL database rule group is a managed rule group provided by AWS
that contains rules to protect against common SQL injection attack patterns2. By setting the
web ACL to allow all other traffic that does not match those rules, the company can ensure
that legitimate traffic can access the API service. By attaching the web ACL to the ALB in
front of the ECS tasks, the company can apply the web security rules to all requests that
are forwarded by the load balancer.
The other options are not correct because:
Creating a new AWS WAF Bot Control implementation would not prevent SQL
injection attacks from reaching the ECS API service. AWS WAF Bot Control is a
feature that gives you visibility and control over common and pervasive bot traffic
that can consume excess resources, skew metrics, cause downtime, or perform
other undesired activities. However, it does not protect against SQL injection
attacks, which are malicious attempts to execute unauthorized SQL statements
against your database3.
Creating a new AWS WAF web ACL to monitor the HTTP requests and HTTPS
requests that are forwarded to the ALB in front of the ECS tasks would not prevent
SQL injection attacks from reaching the ECS API service. A web ACL that only
monitors requests (for example, by using the Count rule action) evaluates how
your rules would perform without actually blocking anything. However, this
configuration does not provide any protection against attacks, as it only logs
and counts requests that match your rules4.
Creating a new AWS WAF web ACL and creating a new empty IP set in AWS
WAF would not prevent SQL injection attacks from reaching the ECS API service.
An IP set is a feature that enables you to specify a list of IP addresses or CIDR
blocks that you want to allow or block based on their source IP address. However,
this approach would not be effective or efficient against SQL injection attacks, as it
would require constantly updating the IP set with new IP addresses of attackers,
and it would not block attackers who use proxies or VPNs.
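A minimal boto3 sketch of the chosen approach, assuming a regional web ACL, the AWS managed SQL database rule group (AWSManagedRulesSQLiRuleGroup), and a placeholder ALB ARN:

```python
# Sketch of option C: block requests matched by the AWS managed SQL database
# rule group, allow everything else, and associate the web ACL with the ALB.
import boto3

wafv2 = boto3.client("wafv2")

ALB_ARN = "arn:aws:elasticloadbalancing:...:loadbalancer/app/api/..."  # placeholder

web_acl = wafv2.create_web_acl(
    Name="api-sqli-protection",
    Scope="REGIONAL",                      # ALBs require regional web ACLs
    DefaultAction={"Allow": {}},           # allow traffic that matches no rule
    Rules=[
        {
            "Name": "block-sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleGroup",
                }
            },
            # Use the rule group's own (blocking) rule actions rather than counting
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "block-sqli",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-sqli-protection",
    },
)

# Attach the web ACL to the ALB that fronts the ECS tasks
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn=ALB_ARN,
)
```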
A company is storing sensitive data in an Amazon S3 bucket. The company must log all
activities for objects in the S3 bucket and must keep the logs for 5 years. The company's
security team also must receive an email notification every time there is an attempt to
delete data in the S3 bucket.
Which combination of steps will meet these requirements MOST cost-effectively? (Select
THREE.)
A. Configure AWS CloudTrail to log S3 data events.
B. Configure S3 server access logging for the S3 bucket.
C. Configure Amazon S3 to send object deletion events to Amazon Simple Email Service (Amazon SES).
D. Configure Amazon S3 to send object deletion events to an Amazon EventBridge event bus that publishes to an Amazon Simple Notification Service (Amazon SNS) topic.
E. Configure Amazon S3 to send the logs to Amazon Timestream with data storage tiering.
F. Configure a new S3 bucket to store the logs with an S3 Lifecycle policy.
Explanation: Configuring AWS CloudTrail to log S3 data events will enable logging all activities for objects in the S3 bucket1. Data events are object-level API operations such as GetObject, DeleteObject, and PutObject1. Configuring Amazon S3 to send object deletion events to an Amazon EventBridge event bus that publishes to an Amazon Simple Notification Service (Amazon SNS) topic will enable sending email notifications every time there is an attempt to delete data in the S3 bucket2. EventBridge can route events from S3 to SNS, which can send emails to subscribers2. Configuring a new S3 bucket to store the logs with an S3 Lifecycle policy will enable keeping the logs for 5 years in a cost-effective way3. A lifecycle policy can transition the logs to a cheaper storage class such as Glacier or delete them after a specified period of time3.
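A hedged boto3 sketch of the moving parts behind options A, D, and F, assuming placeholder bucket names, trail name, and SNS topic ARN:

```python
# Sketch: log S3 data events with CloudTrail (option A), route S3 object-deletion
# events to an SNS topic through EventBridge (option D), and expire the log
# bucket's objects after roughly 5 years (option F). All names are placeholders.
import json
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")
cloudtrail = boto3.client("cloudtrail")

DATA_BUCKET = "sensitive-data-bucket"        # placeholder
LOG_BUCKET = "s3-activity-logs-bucket"       # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:s3-delete-alerts"  # placeholder

# Option A: add S3 data-event logging to an existing trail (placeholder name)
cloudtrail.put_event_selectors(
    TrailName="management-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": [f"arn:aws:s3:::{DATA_BUCKET}/"]}
            ],
        }
    ],
)

# Option D: have the data bucket emit events to EventBridge, then route
# "Object Deleted" events to the SNS topic that emails the security team
s3.put_bucket_notification_configuration(
    Bucket=DATA_BUCKET,
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)
events.put_rule(
    Name="s3-object-deleted",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Deleted"],
        "detail": {"bucket": {"name": [DATA_BUCKET]}},
    }),
)
events.put_targets(
    Rule="s3-object-deleted",
    Targets=[{"Id": "notify-security", "Arn": SNS_TOPIC_ARN}],
)

# Option F: lifecycle policy on the log bucket - move to Glacier, delete at 5 years
s3.put_bucket_lifecycle_configuration(
    Bucket=LOG_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-5-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},
            }
        ]
    },
)
```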
A company has a few AWS accounts for development and wants to move its production
application to AWS. The company needs to enforce Amazon Elastic Block Store (Amazon
EBS) encryption at rest for current production accounts and future production accounts only.
The company needs a solution that includes built-in blueprints and guardrails.
Which combination of steps will meet these requirements? (Choose three.)
A. Use AWS CloudFormation StackSets to deploy AWS Config rules on production accounts.
B. Create a new AWS Control Tower landing zone in an existing developer account. Create OUs for accounts. Add production and development accounts to production and development OUs, respectively.
C. Create a new AWS Control Tower landing zone in the company’s management account. Add production and development accounts to production and development OUs, respectively.
D. Invite existing accounts to join the organization in AWS Organizations. Create SCPs to ensure compliance.
E. Create a guardrail from the management account to detect EBS encryption.
F. Create a guardrail for the production OU to detect EBS encryption.
A company wants to containerize a multi-tier web application and move the application
from an on-premises data center to AWS. The application includes web, application, and
database tiers. The company needs to make the application fault tolerant and scalable.
Some frequently accessed data must always be available across application servers.
Frontend web servers need session persistence and must scale to meet increases in
traffic.
Which solution will meet these requirements with the LEAST ongoing operational
overhead?
A. Run the application on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Elastic File System (Amazon EFS) for data that is frequently accessed between the web and application tiers. Store the frontend web server session data in Amazon Simple Queue Service (Amazon SQS).
B. Run the application on Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use Amazon ElastiCache for Redis to cache frontend web server session data. Use Amazon Elastic Block Store (Amazon EBS) with Multi-Attach on EC2 instances that are distributed across multiple Availability Zones.
C. Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Use ReplicaSets to run the web servers and applications. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system across all EKS pods to store frontend web server session data.
D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure Amazon EKS to use managed node groups. Run the web servers and application as Kubernetes deployments in the EKS cluster. Store the frontend web server session data in an Amazon DynamoDB table. Create an Amazon Elastic File System (Amazon EFS) volume that all applications will mount at the time of deployment.
Explanation: Deploying the application on Amazon EKS with managed node groups simplifies the operational overhead of managing the Kubernetes cluster. Running the web servers and application as Kubernetes deployments ensures that the desired number of pods are always running and can scale up or down as needed. Storing the frontend web server session data in an Amazon DynamoDB table provides a fast, scalable, and durable storage option that can be accessed across multiple Availability Zones. Creating an Amazon EFS volume that all applications will mount at the time of deployment allows the application to share data that is frequently accessed between the web and application tiers.
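A minimal sketch of the session-store piece of option D, assuming a hypothetical DynamoDB table named frontend-sessions with a session_id partition key and a TTL attribute:

```python
# Sketch: web pods write session state to a DynamoDB table keyed by session ID,
# with a TTL attribute so stale sessions expire automatically. The table name,
# key name, and TTL attribute are assumptions.
import time
import boto3

table = boto3.resource("dynamodb").Table("frontend-sessions")  # placeholder table


def save_session(session_id: str, attributes: dict, ttl_seconds: int = 3600) -> None:
    table.put_item(
        Item={
            "session_id": session_id,                       # partition key (assumed)
            "attributes": attributes,                       # session data (strings/numbers)
            "expires_at": int(time.time()) + ttl_seconds,   # TTL attribute (assumed)
        }
    )


def load_session(session_id: str):
    response = table.get_item(Key={"session_id": session_id})
    return response.get("Item")
```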
A company needs to build a disaster recovery (DR) solution for its ecommerce website.
The web application is hosted on a fleet of t3.large Amazon EC2 instances and uses an
Amazon RDS for MySQL DB instance. The EC2 instances are in an Auto Scaling group
that extends across multiple Availability Zones.
In the event of a disaster, the web application must fail over to the secondary environment
with an RPO of 30 seconds and an RTO of 10 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Recover the EC2 instances from the latest EC2 backup. Use an Amazon Route 53 geolocation routing policy to automatically fail over to the DR Region in the event of a disaster.
B. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the EC2 instances at the minimum capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster. Increase the desired capacity of the Auto Scaling group.
C. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Manually restore the backed-up data on new instances. Use an Amazon Route 53 simple routing policy to automatically fail over to the DR Region in the event of a disaster.
D. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create an Amazon Aurora global database. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the Auto Scaling group of EC2 instances at full capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster.
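Several options rely on an Amazon Route 53 failover routing policy. A hedged boto3 sketch of primary and secondary failover records, assuming placeholder hosted zone and health check IDs, record name, and ALB DNS names:

```python
# Sketch: a primary failover record with a health check plus a secondary record
# pointing at the DR Region. All identifiers and DNS names are placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"                     # placeholder
PRIMARY_HEALTH_CHECK_ID = "hc-primary-example"   # placeholder

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": PRIMARY_HEALTH_CHECK_ID,
                    "ResourceRecords": [
                        {"Value": "primary-alb.us-east-1.elb.amazonaws.com"}
                    ],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "ResourceRecords": [
                        {"Value": "dr-alb.us-west-2.elb.amazonaws.com"}
                    ],
                },
            },
        ]
    },
)
```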
A company runs an application on AWS. The company curates data from several different
sources. The company uses proprietary algorithms to perform data transformations and
aggregations. After the company performs ETL processes, the company stores the results
in Amazon Redshift tables. The company sells this data to other companies. The company
downloads the data as files from the Amazon Redshift tables and transmits the files to
several data customers by using FTP. The number of data customers has grown
significantly. Management of the data customers has become difficult.
The company will use AWS Data Exchange to create a data product that the company can
use to share data with customers. The company wants to confirm the identities of the
customers before the company shares data. The customers also need access to the most
recent data when the company publishes the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Data Exchange for APIs to share data with customers. Configure subscription verification. In the AWS account of the company that produces the data, create an Amazon API Gateway Data API service integration with Amazon Redshift. Require the data customers to subscribe to the data product.
B. In the AWS account of the company that produces the data, create an AWS Data Exchange datashare by connecting AWS Data Exchange to the Redshift cluster. Configure subscription verification. Require the data customers to subscribe to the data product.
C. Download the data from the Amazon Redshift tables to an Amazon S3 bucket periodically. Use AWS Data Exchange for S3 to share data with customers. Configure subscription verification. Require the data customers to subscribe to the data product.
D. Publish the Amazon Redshift data as Open Data on AWS Data Exchange. Require the customers to subscribe to the data product in AWS Data Exchange. In the AWS account of the company that produces the data, attach IAM resource-based policies to the Amazon Redshift tables to allow access only to verified AWS accounts.
Explanation:
The company should download the data from the Amazon Redshift tables to an Amazon
S3 bucket periodically and use AWS Data Exchange for S3 to share data with customers.
The company should configure subscription verification and require the data customers to
subscribe to the data product. This solution will meet the requirements with the least
operational overhead because AWS Data Exchange for S3 is a feature that enables data
subscribers to access third-party data files directly from data providers’ Amazon S3
buckets. Subscribers can easily use these files for their data analysis with AWS services
without needing to create or manage data copies. Data providers can easily set up AWS
Data Exchange for S3 on top of their existing S3 buckets to share direct access to an entire
S3 bucket or specific prefixes and S3 objects. AWS Data Exchange automatically manages
subscriptions, entitlements, billing, and payment1.
The other options are not correct because:
Using AWS Data Exchange for APIs to share data with customers would not work
because AWS Data Exchange for APIs is a feature that enables data subscribers
to access third-party APIs directly from data providers’ AWS accounts. Subscribers
can easily use these APIs for their data analysis with AWS services without
needing to manage API keys or tokens. Data providers can easily set up AWS
Data Exchange for APIs on top of their existing API Gateway resources to share
direct access to an entire API or specific routes and stages2. However, this feature
is not suitable for sharing data from Amazon Redshift tables, which are not
exposed as APIs.
Creating an Amazon API Gateway Data API service integration with Amazon
Redshift would not work because the Data API is a feature that enables you to
query your Amazon Redshift cluster using HTTP requests, without needing a
persistent connection or a SQL client3. It is useful for building applications that
interact with Amazon Redshift, but not for sharing data files with customers.
Creating an AWS Data Exchange datashare by connecting AWS Data Exchange
to the Redshift cluster would not work because AWS Data Exchange does not
support datashares for Amazon Redshift clusters. A datashare is a feature that
enables you to share live and secure access to your Amazon Redshift data across
your accounts or with third parties without copying or moving the underlying data4.
It is useful for sharing query results and views with other users, but not for sharing
data files with customers.
Publishing the Amazon Redshift data as Open Data on AWS Data Exchange
would not work because Open Data on AWS Data Exchange is a feature that
enables you to find and use free and public datasets from AWS customers and
partners. It is useful for accessing open and free data, but not for confirming the
identities of the customers or charging them for the data.
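A minimal sketch of the periodic export step in the chosen option, assuming placeholder cluster, database, IAM role, and bucket names, and using the Amazon Redshift Data API to UNLOAD the tables to the S3 prefix that backs the data product:

```python
# Sketch: unload the curated Redshift tables to the S3 prefix shared through
# AWS Data Exchange for S3. Cluster, database, user, role, table, and bucket
# names are placeholders.
import boto3

redshift_data = boto3.client("redshift-data")

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",                   # placeholder
    Database="sales",                                        # placeholder
    DbUser="export_user",                                    # placeholder
    Sql=(
        "UNLOAD ('SELECT * FROM curated_sales') "
        "TO 's3://data-product-bucket/latest/' "             # shared prefix (placeholder)
        "IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftUnloadRole' "
        "FORMAT AS PARQUET ALLOWOVERWRITE;"
    ),
)
```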
A company has VPC flow logs enabled for its NAT gateway. The company is seeing Action
= ACCEPT for inbound traffic that comes from public IP address
198.51.100.2 destined for a private Amazon EC2 instance.
A solutions architect must determine whether the traffic represents unsolicited inbound
connections from the internet. The first two octets of the VPC CIDR block are 203.0.
Which set of steps should the solutions architect take to meet these requirements?
A. Open the AWS CloudTrail console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 203.0" and the source address set as "like 198.51.100.2". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.
B. Open the Amazon CloudWatch console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 203.0" and the source address set as "like 198.51.100.2". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.
C. Open the AWS CloudTrail console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 198.51.100.2" and the source address set as "like 203.0". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.
D. Open the Amazon CloudWatch console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 198.51.100.2" and the source address set as "like 203.0". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.
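The CloudWatch-based options describe a CloudWatch Logs Insights query over the VPC flow log group that filters on source and destination addresses and sums the bytes transferred. A hedged boto3 sketch of that pattern, assuming a placeholder log group name and using the filter combination that checks whether the private instance initiated traffic to 198.51.100.2 (if it did, the inbound ACCEPT records would be return traffic rather than unsolicited connections):

```python
# Sketch: run a CloudWatch Logs Insights query against the VPC flow log group
# and sum bytes by source and destination address. The log group name and
# time window are placeholders.
import time
import boto3

logs = boto3.client("logs")

query = logs.start_query(
    logGroupName="/vpc/flow-logs",           # placeholder log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        'filter dstAddr like "198.51.100.2" and srcAddr like "203.0" '
        "| stats sum(bytes) as bytesTransferred by srcAddr, dstAddr"
    ),
)

# Poll until the query finishes, then print the aggregated rows
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(results["results"])
```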
A company is running a compute workload by using Amazon EC2 Spot Instances that are
in an Auto Scaling group. The launch template uses two placement groups and a single
instance type.
Recently, a monitoring system reported Auto Scaling instance launch failures that
correlated with longer wait times for system users. The company needs to improve the
overall reliability of the workload.
Which solution will meet this requirement?
A. Replace the launch template with a launch configuration to use an Auto Scaling group that uses attribute-based instance type selection.
B. Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
C. Update the launch template and the Auto Scaling group to increase the number of placement groups.
D. Update the launch template to use a larger instance type.
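Option B's attribute-based instance type selection can be expressed with InstanceRequirements; a hedged boto3 sketch follows, shown here as overrides on the Auto Scaling group's mixed instances policy rather than inside the launch template itself. The group name, launch template name, and vCPU and memory bounds are placeholders:

```python
# Sketch: widen the Spot capacity pool by letting the Auto Scaling group pick
# any instance type that meets the stated attributes instead of a single type.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="spot-compute-asg",              # placeholder
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-compute-lt",   # placeholder
                "Version": "$Latest",
            },
            "Overrides": [
                {
                    # Any instance type meeting these attributes is eligible,
                    # which reduces launch failures when one pool lacks capacity
                    "InstanceRequirements": {
                        "VCpuCount": {"Min": 4, "Max": 16},
                        "MemoryMiB": {"Min": 8192},
                    }
                }
            ],
        },
        "InstancesDistribution": {
            "SpotAllocationStrategy": "price-capacity-optimized",
        },
    },
)
```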
A company runs a customer service center that accepts calls and automatically sends all
customers a managed, interactive, two-way experience survey by text message.
The applications that support the customer service center run on machines that the company hosts in an on-premises data center. The hardware that the company uses is old,
and the company is experiencing downtime with the system. The company wants to
migrate the system to AWS to improve reliability.
Which solution will meet these requirements with the LEAST ongoing operational
overhead?
A. Use Amazon Connect to replace the old call center hardware. Use Amazon Pinpoint to send text message surveys to customers.
B. Use Amazon Connect to replace the old call center hardware. Use Amazon Simple Notification Service (Amazon SNS) to send text message surveys to customers.
C. Migrate the call center software to Amazon EC2 instances that are in an Auto Scaling group. Use the EC2 instances to send text message surveys to customers.
D. Use Amazon Pinpoint to replace the old call center hardware and to send text message surveys to customers.
Explanation: Amazon Connect is a cloud-based contact center service that allows you to set up a virtual call center for your business. It provides an easy-to-use interface for managing customer interactions through voice and chat. Amazon Connect integrates with other AWS services, such as Amazon S3 and Amazon Kinesis, to help you collect, store, and analyze customer data for insights into customer behavior and trends. On the other hand, Amazon Pinpoint is a marketing automation and analytics service that allows you to engage with your customers across different channels, such as email, SMS, push notifications, and voice. It helps you create personalized campaigns based on user behavior and enables you to track user engagement and retention. While both services allow you to communicate with your customers, they serve different purposes. Amazon Connect is focused on customer support and service, while Amazon Pinpoint is focused on marketing and engagement.
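A minimal sketch of sending the text message survey invitation through Amazon Pinpoint, assuming a placeholder Pinpoint project ID and phone number (two-way SMS also requires a dedicated number and keyword configuration in Pinpoint, which is omitted here):

```python
# Sketch: send a transactional SMS survey invitation with Amazon Pinpoint.
# The application ID, destination number, and message body are placeholders.
import boto3

pinpoint = boto3.client("pinpoint")

pinpoint.send_messages(
    ApplicationId="exampleAppId1234567890",     # placeholder Pinpoint project ID
    MessageRequest={
        "Addresses": {"+12065550100": {"ChannelType": "SMS"}},  # placeholder number
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "How was your call today? Reply with a rating from 1 to 5.",
                "MessageType": "TRANSACTIONAL",
            }
        },
    },
)
```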
A company is running a two-tier web-based application in an on-premises data center. The
application layer consists of a single server running a stateful application. The application
connects to a PostgreSQL database running on a separate server. The application’s user
base is expected to grow significantly, so the company is migrating the application and
database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto
Scaling, and Elastic Load Balancing.
Which solution will provide a consistent user experience that will allow the application and
database tiers to scale?
A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing and sticky sessions enabled.
D. Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
Explanation: Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances.
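A hedged sketch of enabling Aurora Auto Scaling for Aurora Replicas through Application Auto Scaling, assuming a placeholder cluster name, illustrative capacity bounds, and a target-tracking policy on average reader CPU:

```python
# Sketch: register the Aurora cluster's replica count as a scalable target and
# attach a target-tracking policy on reader CPU utilization.
import boto3

appscaling = boto3.client("application-autoscaling")

RESOURCE_ID = "cluster:app-aurora-cluster"    # placeholder Aurora cluster

appscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

appscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```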
A solutions architect wants to cost-optimize and appropriately size Amazon EC2 instances
in a single AWS account. The solutions architect wants to ensure that the instances are
optimized based on CPU, memory, and network metrics.
Which combination of steps should the solutions architect take to meet these
requirements? (Choose two.)
A. Purchase AWS Business Support or AWS Enterprise Support for the account.
B. Turn on AWS Trusted Advisor and review any “Low Utilization Amazon EC2 Instances” recommendations.
C. Install the Amazon CloudWatch agent and configure memory metric collection on the EC2 instances.
D. Configure AWS Compute Optimizer in the AWS account to receive findings and optimization recommendations.
E. Create an EC2 Instance Savings Plan for the AWS Regions, instance families, and operating systems of interest.
Explanation:
AWS Trusted Advisor is a service that provides real-time guidance to help users provision
their resources following AWS best practices1. One of the Trusted Advisor checks is “Low Utilization Amazon EC2 Instances”, which identifies EC2 instances that appear to be
underutilized based on CPU, network I/O, and disk I/O metrics1. This check can help users
optimize the cost and size of their EC2 instances by recommending smaller or more
appropriate instance types.
AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics
of AWS resources and generates optimization recommendations to reduce the cost and
improve the performance of workloads2. Compute Optimizer supports four types of AWS
resources: EC2 instances, EBS volumes, ECS services on AWS Fargate, and Lambda
functions2. For EC2 instances, Compute Optimizer evaluates the vCPUs, memory,
storage, and other specifications, as well as the CPU utilization, network in and out, disk
read and write, and other utilization metrics of currently running instances3. It then
recommends optimal instance types based on price-performance trade-offs.
Option A is incorrect because purchasing AWS Business Support or AWS Enterprise
Support for the account will not directly help with cost-optimization and sizing of EC2
instances. However, these support plans do provide access to more Trusted Advisor
checks than the basic support plan1.
Option C is incorrect because installing the Amazon CloudWatch agent and configuring
memory metric collection on the EC2 instances will not provide any optimization
recommendations by itself. However, memory metrics can be used by Compute Optimizer
to enhance its recommendations if enabled3.
Option E is incorrect because creating an EC2 Instance Savings Plan for the AWS
Regions, instance families, and operating systems of interest will not help with cost-optimization
and sizing of EC2 instances. Savings Plans are a flexible pricing model that
offer lower prices on Amazon EC2 usage in exchange for a commitment to a consistent
amount of usage for a 1- or 3-year term4. Savings Plans do not affect the configuration or
utilization of EC2 instances.
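A minimal boto3 sketch of retrieving Compute Optimizer findings for EC2 instances (option D), assuming the account is already opted in to Compute Optimizer:

```python
# Sketch: list each instance's finding and the top recommended instance type.
import boto3

optimizer = boto3.client("compute-optimizer")

response = optimizer.get_ec2_instance_recommendations()
for rec in response["instanceRecommendations"]:
    top_option = rec["recommendationOptions"][0]
    print(
        rec["instanceArn"],
        rec["finding"],                 # e.g. over-provisioned or under-provisioned
        "->",
        top_option["instanceType"],     # recommended instance type
    )
```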