Topic 2: Exam Pool B
A company is running a web application in a VPC. The web application runs on a group of
Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is using
AWS WAF.
External customers need to connect to the web application. The company must provide
IP addresses to all external customers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Replace the ALB with a Network Load Balancer (NLB). Assign an Elastic IP address to the NLB.
B. Allocate an Elastic IP address. Assign the Elastic IP address to the ALB. Provide the Elastic IP address to the customer.
C. Create an AWS Global Accelerator standard accelerator. Specify the ALB as the accelerator's endpoint. Provide the accelerator's IP addresses to the customer.
D. Configure an Amazon CloudFront distribution. Set the ALB as the origin. Ping the distribution's DNS name to determine the distribution's public IP address. Provide the IP address to the customer.
A large company runs workloads in VPCs that are deployed across hundreds of AWS
accounts. Each VPC consists of public subnets and private subnets that span
multiple Availability Zones. NAT gateways are deployed in the public subnets and allow
outbound connectivity to the internet from the private subnets.
A solutions architect is working on a hub-and-spoke design. All private subnets in the
spoke VPCs must route traffic to the internet through an egress VPC. The solutions
architect already has deployed a NAT gateway in an egress VPC in a central AWS account.
Which set of additional steps should the solutions architect take to meet these
requirements?
A. Create peering connections between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.
B. Create a transit gateway, and share it with the existing AWS accounts. Attach existing VPCs to the transit gateway. Configure the required routing to allow access to the internet.
C. Create a transit gateway in every account. Attach the NAT gateway to the transit gateways. Configure the required routing to allow access to the internet.
D. Create an AWS PrivateLink connection between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.
A company has a website that runs on Amazon EC2 instances behind an Application Load
Balancer (ALB). The instances are in an Auto Scaling group. The ALB is associated with an
AWS WAF web ACL.
The website often encounters attacks in the application layer. The attacks produce sudden
and significant increases in traffic on the application server. The access logs show that
each attack originates from different IP addresses. A solutions architect needs to
implement a solution to mitigate these attacks.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon CloudWatch alarm that monitors server access. Set a threshold based on access by IP address. Configure an alarm action that adds the IP address to the web ACL’s deny list.
B. Deploy AWS Shield Advanced in addition to AWS WAF. Add the ALB as a protected resource.
C. Create an Amazon CloudWatch alarm that monitors user IP addresses. Set a threshold based on access by IP address. Configure the alarm to invoke an AWS Lambda function to add a deny rule in the application server’s subnet route table for any IP addresses that activate the alarm.
D. Inspect access logs to find a pattern of IP addresses that launched the attacks. Use an Amazon Route 53 geolocation routing policy to deny traffic from the countries that host those IP addresses.
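Options A and C both hinge on per-IP thresholds, which this scenario defeats: every attack originates from a different IP address, so no single address ever crosses the limit. A toy counter (plain Python, names illustrative) makes the gap concrete:

```python
from collections import Counter

def blocked_ips(request_ips, per_ip_threshold):
    """Per-IP rate limiting: block any address whose request count
    exceeds the threshold. A distributed attack that sends each
    request from a fresh IP never trips a per-IP threshold."""
    counts = Counter(request_ips)
    return {ip for ip, n in counts.items() if n > per_ip_threshold}

# 100 requests from 100 distinct addresses: nothing is blocked.
attack = [f"203.0.113.{i}" for i in range(100)]
```

This is why the managed, always-on protection of AWS Shield Advanced (option B) fits better than any alarm-driven per-IP deny list.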
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon
DynamoDB, and AWS Lambda. The application currently does not use
API keys to authorize requests. The API model is as follows:
GET /posts/[postid] to get post details
GET /users/[userid] to get user details
GET /comments/[commentid] to get comment details
The company has noticed users are actively discussing topics in the comments section,
and the company wants to increase user engagement by making the comments appear
in real time.
Which design should be used to reduce comment latency and improve user experience?
A. Use edge-optimized API with Amazon CloudFront to cache API responses.
B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds.
C. Use AWS AppSync and leverage WebSockets to deliver comments.
D. Change the concurrency limit of the Lambda functions to lower the API response time.
A company has developed a hybrid solution between its data center and AWS. The
company uses Amazon VPC and Amazon EC2 instances that send application logs to
Amazon CloudWatch. The EC2 instances read data from multiple relational databases that
are hosted on premises.
The company wants to monitor which EC2 instances are connected to the databases in
near-real time. The company already has a monitoring solution that uses Splunk on
premises. A solutions architect needs to determine how to send networking traffic to
Splunk.
How should the solutions architect meet these requirements?
A. Enable VPC flow logs, and send them to CloudWatch. Create an AWS Lambda
function to periodically export the CloudWatch logs to an Amazon S3 bucket by using the
pre-defined export function. Generate ACCESS_KEY and SECRET_KEY AWS credentials.
Configure Splunk to pull the logs from the S3 bucket by using those credentials.
B. Create an Amazon Kinesis Data Firehose delivery stream with Splunk as the destination. Configure a pre-processing AWS Lambda function with a Kinesis Data Firehose stream processor that extracts individual log events from records sent by CloudWatch Logs subscription filters. Enable VPC flow logs, and send them to CloudWatch. Create a CloudWatch Logs subscription that sends log events to the Kinesis Data Firehose delivery stream.
C. Ask the company to log every request that is made to the databases along with the EC2 instance IP address. Export the CloudWatch logs to an Amazon S3 bucket. Use Amazon Athena to query the logs grouped by database name. Export Athena results to another S3 bucket. Invoke an AWS Lambda function to automatically send any new file that is put in the S3 bucket to Splunk.
D. Send the CloudWatch logs to an Amazon Kinesis data stream with Amazon Kinesis Data Analytics for SQL Applications. Configure a 1-minute sliding window to collect the events. Create a SQL query that uses the anomaly detection template to monitor any networking traffic anomalies in near-real time. Send the result to an Amazon Kinesis Data Firehose delivery stream with Splunk as the destination.
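For option B, the shape of the pre-processing Lambda function is well known: a CloudWatch Logs subscription filter delivers each record as base64-encoded, gzip-compressed JSON with a logEvents array inside. A minimal sketch of the extraction step (the function name is illustrative):

```python
import base64
import gzip
import json

def extract_log_events(firehose_record_data):
    """Decode one Firehose record produced by a CloudWatch Logs
    subscription filter and return its individual log events.

    CloudWatch Logs delivers records as base64-encoded, gzip-compressed
    JSON with a top-level "logEvents" array."""
    payload = gzip.decompress(base64.b64decode(firehose_record_data))
    doc = json.loads(payload)
    # Control messages (messageType == "CONTROL_MESSAGE") carry no log data.
    if doc.get("messageType") != "DATA_MESSAGE":
        return []
    return doc["logEvents"]
```

In the real processor, each extracted event would be re-emitted as its own Firehose record so that Splunk indexes one event per line.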
A company is deploying a new web-based application and needs a storage solution for the
Linux application servers. The company wants to create a single location for updates to
application data for all instances. The active dataset will be up to 100 GB in size. A
solutions architect has determined that peak operations will occur for 3 hours daily and will
require a total of 225 MiBps of read throughput.
The solutions architect must design a Multi-AZ solution that makes a copy of the data
available in another AWS Region for disaster recovery (DR). The DR copy has an RPO of
less than 1 hour.
Which solution will meet these requirements?
A. Deploy a new Amazon Elastic File System (Amazon EFS) Multi-AZ file system.
Configure the file system for 75 MiBps of provisioned throughput. Implement
replication to a file system in the DR Region.
B. Deploy a new Amazon FSx for Lustre file system. Configure Bursting Throughput mode for the file system. Use AWS Backup to back up the file system to the DR Region.
C. Deploy a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume with 225 MiBps of throughput. Enable Multi-Attach for the EBS volume. Use AWS Elastic Disaster Recovery to replicate the EBS volume to the DR Region.
D. Deploy an Amazon FSx for OpenZFS file system in both the production Region and the DR Region. Create an AWS DataSync scheduled task to replicate the data from the production file system to the DR file system every 10 minutes.
Explanation: The company should deploy a new Amazon Elastic File System (Amazon
EFS) Multi-AZ file system. The company should configure the file system for 75 MiBps of
provisioned throughput. The company should implement replication to a file system in the
DR Region. This solution will meet the requirements because Amazon EFS is a serverless,
fully elastic file storage service that lets you share file data without provisioning or managing storage capacity and performance. Amazon EFS is built to scale on demand to
petabytes without disrupting applications, growing and shrinking automatically as you add
and remove files. By deploying a new Amazon EFS Multi-AZ file system, the company
can create a single location for updates to application data for all instances. A Multi-AZ file
system replicates data across multiple Availability Zones (AZs) within a Region, providing
high availability and durability. By configuring the file system for 75 MiBps of provisioned
throughput, the company can ensure that it meets the peak operations requirement of 225
MiBps of read throughput. Provisioned throughput is a feature that enables you to specify a
level of throughput that the file system can drive independent of the file system’s size or
burst credit balance. By implementing replication to a file system in the DR Region, the
company can make a copy of the data available in another AWS Region for disaster
recovery. Replication is a feature that enables you to replicate data from one EFS file
system to another EFS file system across AWS Regions. The replication process has an
RPO of less than 1 hour.
The other options are not correct because:
Deploying a new Amazon FSx for Lustre file system would not be the best fit for
this workload. Amazon FSx for Lustre is a fully managed service that provides
cost-effective, high-performance storage for compute workloads, but it has no
Bursting Throughput mode; that is an Amazon EFS concept. Using AWS Backup to
back up the file system to the DR Region would not provide continuous
replication of data. AWS Backup is a service that enables you to centralize and
automate data protection across AWS services, but it creates scheduled,
point-in-time backups, so it cannot reliably meet an RPO of less than 1 hour.
Deploying a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon
EBS) volume would not provide a single location for updates to application data
for all instances. Amazon EBS is a service that provides persistent block storage
volumes for use with Amazon EC2 instances, but Multi-Attach is supported only
on Provisioned IOPS (io1 and io2) volumes, not on gp3 volumes, and even then
only for instances in the same Availability Zone, so the design provides neither
a valid configuration nor Multi-AZ resilience. AWS Elastic Disaster Recovery
(AWS DRS) replicates entire source servers for recovery in another Region; it is
a disaster recovery orchestration service, not a way to keep a shared, active
copy of a single volume's data available to the application.
Deploying an Amazon FSx for OpenZFS file system in both the production Region
and the DR Region would not be as simple or cost-effective as using Amazon
EFS. Amazon FSx for OpenZFS is a fully managed service that provides
high-performance storage with strong data consistency and advanced data
management features for Linux workloads, but it requires more configuration and
management than Amazon EFS, which is serverless and fully elastic. AWS
DataSync is a service that transfers data between on-premises storage and AWS
services, or between AWS services, but it performs scheduled copies rather than
continuous replication, and scheduled DataSync tasks cannot run more frequently
than once per hour, so replicating every 10 minutes as described is not possible.
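The 75 MiBps figure in option A is not arbitrary: Amazon EFS meters read requests at one-third the rate of other operations, so a read-heavy workload can drive roughly three times the provisioned throughput level. A quick check of the arithmetic:

```python
def max_read_throughput_mibps(provisioned_mibps):
    """EFS meters reads at one-third the rate of write and metadata
    operations, so a read-only workload can drive about 3x the
    provisioned throughput level."""
    READ_METERING_FACTOR = 3
    return provisioned_mibps * READ_METERING_FACTOR

# 75 MiBps of provisioned throughput covers the 225 MiBps read peak.
peak_read = max_read_throughput_mibps(75)
```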
A company runs an intranet application on premises. The company wants to configure a
cloud backup of the application. The company has selected AWS Elastic Disaster
Recovery for this solution.
The company requires that replication traffic does not travel through the public internet. The
application also must not be accessible from the internet. The company does not want this
solution to consume all available network bandwidth because other applications require
bandwidth.
Which combination of steps will meet these requirements? (Select THREE.)
A. Create a VPC that has at least two private subnets, two NAT gateways, and a virtual private gateway.
B. Create a VPC that has at least two public subnets, a virtual private gateway, and an internet gateway.
C. Create an AWS Site-to-Site VPN connection between the on-premises network and the target AWS network.
D. Create an AWS Direct Connect connection and a Direct Connect gateway between the on-premises network and the target AWS network.
E. During configuration of the replication servers, select the option to use private IP addresses for data replication.
F. During configuration of the launch settings for the target servers, select the option to ensure that the Recovery instance's private IP address matches the source server's private IP address.
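The bandwidth requirement maps to the replication agent's network throttling setting in AWS Elastic Disaster Recovery. Conceptually such a throttle behaves like a token bucket; the sketch below is an illustrative model of that idea, not DRS code:

```python
import time

class TokenBucket:
    """Toy bandwidth throttle of the kind a replication agent applies:
    transmit only when enough byte 'tokens' have accrued, so replication
    traffic cannot consume all available network bandwidth."""

    def __init__(self, rate_bytes_per_sec, capacity):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_send(self, nbytes):
        # Accrue tokens for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```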
A solutions architect is planning to migrate critical Microsoft SQL Server databases to
AWS. Because the databases are legacy systems, the solutions architect will move the
databases to a modern data architecture. The solutions architect must migrate the
databases with near-zero downtime.
Which solution will meet these requirements?
A. Use AWS Application Migration Service and the AWS Schema Conversion Tool (AWS SCT). Perform an in-place upgrade before the migration. Export the migrated data to Amazon Aurora Serverless after cutover. Repoint the applications to Amazon Aurora.
B. Use AWS Database Migration Service (AWS DMS) to Rehost the database. Set Amazon S3 as a target. Set up change data capture (CDC) replication. When the source and destination are fully synchronized, load the data from Amazon S3 into an Amazon RDS for Microsoft SQL Server DB Instance.
C. Use native database high availability tools. Connect the source system to an Amazon RDS for Microsoft SQL Server DB instance. Configure replication accordingly. When data replication is finished, transition the workload to an Amazon RDS for Microsoft SQL Server DB instance.
D. Use AWS Application Migration Service. Rehost the database server on Amazon EC2. When data replication is finished, detach the database and move the database to an Amazon RDS for Microsoft SQL Server DB instance. Reattach the database and then cut over all networking.
Explanation: AWS DMS can migrate data from a source database to a target database in AWS, using change data capture (CDC) to replicate ongoing changes and keep the databases in sync. Setting Amazon S3 as a target allows storing the migrated data in a durable and cost-effective storage service. When the source and destination are fully synchronized, the data can be loaded from Amazon S3 into an Amazon RDS for Microsoft SQL Server DB instance, which is a managed database service that simplifies database administration tasks.
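The full-load-plus-CDC flow that the explanation describes can be pictured with a toy model (plain Python, no AWS DMS calls): load a snapshot, then replay the changes captured while the load ran until source and target converge.

```python
def apply_cdc(target, changes):
    """Replay captured changes (op, key, value) onto a full-load snapshot.

    Toy model of DMS full load + change data capture: 'insert' and
    'update' upsert a row, 'delete' removes it. After replay the target
    matches the source, which is the cutover condition."""
    for op, key, value in changes:
        if op == "delete":
            target.pop(key, None)
        else:  # insert or update
            target[key] = value
    return target
```

Once replay keeps the target identical to the live source, the application can cut over with near-zero downtime.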
A company uses AWS Organizations to manage more than 1,000 AWS accounts. The
company has created a new developer organization. There are 540 developer member
accounts that must be moved to the new developer organization. All accounts are set up
with all the required information so that each account can be operated as a standalone
account.
Which combination of steps should a solutions architect take to move all of the developer
accounts to the new developer organization? (Select THREE.)
A. Call the MoveAccount operation in the Organizations API from the old organization's management account to migrate the developer accounts to the new developer organization.
B. From the management account, remove each developer account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API.
C. From each developer account, remove the account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API.
D. Sign in to the new developer organization's management account and create a placeholder member account that acts as a target for the developer account migration.
E. Call the InviteAccountToOrganization operation in the Organizations API from the new developer organization's management account to send invitations to the developer accounts.
F. Have each developer sign in to their account and confirm to join the new developer organization.
A solutions architect is designing a solution to process events. The solution must have the
ability to scale in and out based on the number of events that the solution receives. If a
processing error occurs, the event must move into a separate queue for review.
Which solution will meet these requirements?
A. Send event details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure an AWS Lambda function as a subscriber to the SNS topic to process the events. Add an on-failure destination to the function. Set an Amazon Simple Queue Service (Amazon SQS) queue as the target.
B. Publish events to an Amazon Simple Queue Service (Amazon SQS) queue. Create an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to scale in and out based on the ApproximateAgeOfOldestMessage metric of the queue. Configure the application to write failed messages to a dead-letter queue.
C. Write events to an Amazon DynamoDB table. Configure a DynamoDB stream for the table. Configure the stream to invoke an AWS Lambda function. Configure the Lambda function to process the events.
D. Publish events to an Amazon EventBridge event bus. Create and run an application on an Amazon EC2 instance with an Auto Scaling group that is behind an Application Load Balancer (ALB). Set the ALB as the event bus target. Configure the event bus to retry events. Write messages to a dead-letter queue if the application cannot process the messages.
Explanation:
Amazon Simple Notification Service (Amazon SNS) is a fully managed pub/sub messaging service that enables users to send messages to multiple subscribers. Users can send
event details to an Amazon SNS topic and configure an AWS Lambda function as a
subscriber to the SNS topic to process the events. Lambda is a serverless compute service
that runs code in response to events and automatically manages the underlying compute
resources. Users can add an on-failure destination to the function and set an Amazon
Simple Queue Service (Amazon SQS) queue as the target. Amazon SQS is a fully
managed message queuing service that enables users to decouple and scale
microservices, distributed systems, and serverless applications. This way, if a processing
error occurs, the event will move into the separate queue for review.
Option B is incorrect because publishing events to an Amazon SQS queue and
processing them with an Amazon EC2 Auto Scaling group carries more operational
overhead than the serverless design. Amazon EC2 is a web service that provides
secure, resizable compute capacity in the cloud, and Auto Scaling can scale EC2
capacity up or down automatically according to conditions users define, including
queue metrics. However, the team must manage instances and scaling policies, and
scaling on the ApproximateAgeOfOldestMessage metric reacts to backlog age rather
than directly to the number of incoming events, so this design does not take
advantage of the serverless scaling of Lambda and SNS.
Option C is incorrect because writing events to an Amazon DynamoDB table and
configuring a DynamoDB stream for the table introduces a database into a pure
event-processing path. Amazon DynamoDB is a fully managed key-value and
document database that delivers single-digit millisecond performance at any
scale, and DynamoDB Streams captures data modification events in DynamoDB
tables. The stream can invoke a Lambda function, but storing every event in a
table adds cost and complexity, and the option as described includes no route
for failed events into a separate queue for review.
Option D is incorrect because an Application Load Balancer (ALB) is not a
supported target for an Amazon EventBridge event bus. Amazon EventBridge is a
serverless event bus service that makes it easy to connect applications with
data from a variety of sources, and an ALB distributes incoming application
traffic across multiple targets such as EC2 instances, containers, IP addresses,
and Lambda functions. EventBridge can retry events and send them to a
dead-letter queue, but routing them through an ALB to an EC2-hosted application
adds operational overhead compared with the serverless SNS and Lambda design.
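The on-failure routing in option A can be sketched without any AWS services: run a handler per event and divert anything that raises into a separate review queue (names are illustrative):

```python
from collections import deque

def process_events(events, handler, failure_queue):
    """Mimic Lambda's on-failure destination: events whose handler
    raises are moved to a separate queue for review instead of
    being lost."""
    processed = []
    for event in events:
        try:
            processed.append(handler(event))
        except Exception:
            failure_queue.append(event)
    return processed
```

In the real design, Lambda's on-failure destination performs this diversion automatically, delivering failed invocation records to the configured SQS queue.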
A company needs to optimize the cost of an AWS environment that contains multiple
accounts in an organization in AWS Organizations. The company conducted cost optimization activities 3 years ago and purchased Amazon EC2 Standard Reserved
Instances that recently expired.
The company needs EC2 instances for 3 more years. Additionally, the company has
deployed a new serverless workload.
Which strategy will provide the company with the MOST cost savings?
A. Purchase the same Reserved Instances for an additional 3-year term with All Upfront payment. Purchase a 3-year Compute Savings Plan with All Upfront payment in the management account to cover any additional compute costs.
B. Purchase a 1-year Compute Savings Plan with No Upfront payment in each member account. Use the Savings Plans recommendations in the AWS Cost Management console to choose the Compute Savings Plan.
C. Purchase a 3-year EC2 Instance Savings Plan with No Upfront payment in the management account to cover EC2 costs in each AWS Region. Purchase a 3-year Compute Savings Plan with No Upfront payment in the management account to cover any additional compute costs.
D. Purchase a 3-year EC2 Instance Savings Plan with All Upfront payment in each member account. Use the Savings Plans recommendations in the AWS Cost Management console to choose the EC2 Instance Savings Plan.
Explanation:
The company should purchase the same Reserved Instances for an additional 3-year term
with All Upfront payment. The company should purchase a 3-year Compute Savings Plan
with All Upfront payment in the management account to cover any additional compute
costs. This solution will provide the company with the most cost savings because Reserved
Instances and Savings Plans are both pricing models that offer significant discounts
compared to On-Demand pricing. Reserved Instances are commitments to use a specific
instance type and size in a single Region for a one- or three-year term. You can choose
between three payment options: No Upfront, Partial Upfront, or All Upfront. The more you
pay upfront, the greater the discount. Savings Plans are flexible pricing models that offer
low prices on EC2 instances, Fargate, and Lambda usage, in exchange for a commitment
to a consistent amount of usage (measured in $/hour) for a one- or three-year term. You
can choose between two types of Savings Plans: Compute Savings Plans and EC2
Instance Savings Plans. Compute Savings Plans apply to any EC2 instance regardless of
Region, instance family, operating system, or tenancy, including those that are part of
EMR, ECS, or EKS clusters, or launched by Fargate or Lambda. EC2 Instance Savings
Plans apply to a specific instance family within a Region and provide the most savings. By
purchasing the same Reserved Instances for an additional 3-year term with All Upfront payment, the company can lock in the lowest possible price for its EC2 instances that run
continuously for 3 years. By purchasing a 3-year Compute Savings Plan with All Upfront
payment in the management account, the company can benefit from additional discounts
on any other compute usage across its member accounts.
The other options are not correct because:
Purchasing a 1-year Compute Savings Plan with No Upfront payment in each
member account would not provide as much cost savings as purchasing a 3-year
Compute Savings Plan with All Upfront payment in the management account. A 1-
year term offers lower discounts than a 3-year term, and a No Upfront payment
option offers lower discounts than an All Upfront payment option. Also, purchasing
a Savings Plan in each member account would not allow the company to share the
benefits of unused Savings Plan discounts across its organization.
Purchasing a 3-year EC2 Instance Savings Plan with No Upfront payment in the
management account to cover EC2 costs in each AWS Region would not provide
as much cost savings as purchasing Reserved Instances for an additional 3-year
term with All Upfront payment. An EC2 Instance Savings Plan offers lower
discounts than Reserved Instances for the same instance family and Region. Also,
a No Upfront payment option offers lower discounts than an All Upfront payment
option.
Purchasing a 3-year EC2 Instance Savings Plan with All Upfront payment in each
member account would not provide as much flexibility or cost savings as
purchasing a 3-year Compute Savings Plan with All Upfront payment in the
management account. An EC2 Instance Savings Plan applies only to a specific
instance family within a Region and does not cover Fargate or Lambda usage.
Also, purchasing a Savings Plan in each member account would not allow the
company to share the benefits of unused Savings Plan discounts across its
organization.
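The trade-off the explanation describes is mostly arithmetic. The discount percentages below are placeholders, not published AWS pricing, but they show why a 3-year All Upfront commitment outruns renewed 1-year No Upfront terms:

```python
def effective_cost(on_demand_monthly, months, discount):
    """Total compute cost over a term at a given commitment discount.
    The discount rates used below are illustrative placeholders,
    not published AWS pricing."""
    return on_demand_monthly * months * (1 - discount)

# Hypothetical $1,000/month on-demand baseline over 3 years:
three_year_all_upfront = effective_cost(1000, 36, 0.50)  # deeper discount
one_year_no_upfront_x3 = effective_cost(1000, 36, 0.25)  # shallower discount
```

The longer term and larger upfront payment both deepen the discount, and committing in the management account lets unused Savings Plan benefit flow to other member accounts.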
A software-as-a-service (SaaS) provider exposes APIs through an Application Load
Balancer (ALB). The ALB connects to an Amazon Elastic Kubernetes Service (Amazon
EKS) cluster that is deployed in the us-east-1 Region. The exposed APIs contain usage of a
few non-standard REST methods: LINK, UNLINK, LOCK, and UNLOCK.
Users outside the United States are reporting long and inconsistent response times for
these APIs. A solutions architect needs to resolve this problem with a solution that
minimizes operational overhead.
Which solution meets these requirements?
A. Add an Amazon CloudFront distribution. Configure the ALB as the origin.
B. Add an Amazon API Gateway edge-optimized API endpoint to expose the APIs.
Configure the ALB as the target.
C. Add an accelerator in AWS Global Accelerator. Configure the ALB as the origin.
D. Deploy the APIs to two additional AWS Regions: eu-west-1 and ap-southeast-2. Add latency-based routing records in Amazon Route 53.
Explanation: Adding an accelerator in AWS Global Accelerator improves the performance of the APIs for local and global users. AWS Global Accelerator is a service that uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies. Configuring the ALB as the accelerator's endpoint connects the accelerator to the ALB that exposes the APIs. Because Global Accelerator operates at the network layer and passes TCP traffic through unmodified, it supports non-standard REST methods such as LINK, UNLINK, LOCK, and UNLOCK, which Amazon CloudFront and Amazon API Gateway do not.
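The benefit the explanation describes, entering the AWS network at the nearest edge location and riding the backbone to the origin instead of crossing the public internet end to end, can be modeled in a few lines. All latency numbers below are hypothetical:

```python
def accelerated_latency_ms(client_to_edges_ms, edge_to_origin_backbone_ms):
    """Toy model of Global Accelerator's benefit: the client reaches the
    nearest edge location over the public internet, then traffic rides
    the AWS backbone to the origin Region."""
    return min(client_to_edges_ms.values()) + edge_to_origin_backbone_ms

# Hypothetical figures for a client in Sydney reaching us-east-1:
edges = {"sydney": 15, "singapore": 95, "frankfurt": 300}
via_accelerator = accelerated_latency_ms(edges, 180)  # 15 + 180 = 195
via_public_internet = 260  # direct path, longer and more variable
```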