Topic 2: Exam Pool B
A company has set up its entire infrastructure on AWS. The company uses Amazon EC2
instances to host its ecommerce website and uses Amazon S3 to store static data. Three
engineers at the company handle the cloud administration and development through one
AWS account. Occasionally, an engineer alters an EC2 security group configuration of
another engineer and causes noncompliance issues in the environment.
A solutions architect must set up a system that tracks changes that the engineers make.
The system must send alerts when the engineers make noncompliant changes to the
security settings for the EC2 instances.
What is the FASTEST way for the solutions architect to meet these requirements?
A. Set up AWS Organizations for the company. Apply SCPs to govern and track noncompliant security group changes that are made to the AWS account.
B. Enable AWS CloudTrail to capture the changes to EC2 security groups. Enable Amazon CloudWatch rules to provide alerts when noncompliant security settings are detected.
C. Enable SCPs on the AWS account to provide alerts when noncompliant security group changes are made to the environment.
D. Enable AWS Config on the EC2 security groups to track any noncompliant changes. Send the changes as alerts through an Amazon Simple Notification Service (Amazon SNS) topic.
A company uses a Grafana data visualization solution that runs on a single Amazon EC2
instance to monitor the health of the company's AWS workloads. The company has invested time and effort to create dashboards that the company wants to preserve. The
dashboards need to be highly available and cannot be down for longer than 10 minutes.
The company needs to minimize ongoing maintenance.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate to Amazon CloudWatch dashboards. Recreate the dashboards to match the existing Grafana dashboards. Use automatic dashboards where possible.
B. Create an Amazon Managed Grafana workspace. Configure a new Amazon CloudWatch data source. Export dashboards from the existing Grafana instance. Import the dashboards into the new workspace.
C. Create an AMI that has Grafana pre-installed. Store the existing dashboards in Amazon Elastic File System (Amazon EFS). Create an Auto Scaling group that uses the new AMI. Set the Auto Scaling group's minimum, desired, and maximum number of instances to one. Create an Application Load Balancer that serves at least two Availability Zones.
D. Configure AWS Backup to back up the EC2 instance that runs Grafana once each hour. Restore the EC2 instance from the most recent snapshot in an alternate Availability Zone when required.
Explanation: Creating an AMI with Grafana pre-installed and storing the existing dashboards in Amazon Elastic File System (Amazon EFS) preserves the dashboards and lets a replacement instance launch quickly. An Auto Scaling group that uses the new AMI, with the minimum, desired, and maximum number of instances set to one, automatically replaces a failed instance, and an Application Load Balancer that serves at least two Availability Zones keeps the dashboards reachable, which provides high availability and minimizes downtime.
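As a rough illustration of the self-healing setup this explanation describes, the following boto3 sketch creates a single-instance Auto Scaling group from a hypothetical launch template built on the Grafana AMI and registers it with an ALB target group. The launch template name, target group ARN, and subnet IDs are placeholders, not values from the question; mounting the EFS file system would be handled in the AMI or user data.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Single-instance Auto Scaling group: a failed Grafana instance is replaced
# automatically, and the EFS-backed dashboards survive the replacement.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="grafana-asg",
    LaunchTemplate={
        "LaunchTemplateName": "grafana-ami-template",  # hypothetical template that uses the Grafana AMI
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two Availability Zones behind the ALB
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/grafana/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)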
A company has an application in the AWS Cloud. The application runs on a fleet of 20
Amazon EC2 instances. The EC2 instances are persistent and store data on multiple
attached Amazon Elastic Block Store (Amazon EBS) volumes.
The company must maintain backups in a separate AWS Region. The company must be
able to recover the EC2 instances and their configuration within 1 business day, with loss of
no more than 1 day's worth of data. The company has limited staff and needs a backup
solution that optimizes operational efficiency and cost. The company already has created
an AWS CloudFormation template that can deploy the required network configuration in a secondary Region.
Which solution will meet these requirements?
A. Create a second CloudFormation template that can recreate the EC2 instances in the secondary Region. Run daily multivolume snapshots by using AWS Systems Manager Automation runbooks. Copy the snapshots to the secondary Region. In the event of a failure, launch the CloudFormation templates, restore the EBS volumes from snapshots, and transfer usage to the secondary Region.
B. Use Amazon Data Lifecycle Manager (Amazon DLM) to create daily multivolume snapshots of the EBS volumes. In the event of a failure, launch the CloudFormation template and use Amazon DLM to restore the EBS volumes and transfer usage to the secondary Region.
C. Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.
D. Deploy EC2 instances of the same size and configuration to the secondary Region. Configure AWS DataSync daily to copy data from the primary Region to the secondary Region. In the event of a failure, launch the CloudFormation template and transfer usage to the secondary Region.
Explanation: Using AWS Backup to create a scheduled daily backup plan for the EC2 instances takes snapshots of the EC2 instances and their attached EBS volumes. Configuring the backup task to copy the backups to a vault in the secondary Region maintains backups in a separate Region. In the event of a failure, launching the CloudFormation template deploys the network configuration in the secondary Region, restoring the instance volumes and configurations from the backup vault recovers the EC2 instances and their data, and transferring usage to the secondary Region resumes operations.
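To make the mechanics of option C concrete, here is a minimal boto3 sketch of a daily backup plan whose rule copies each recovery point to a vault in a second Region, with the instances selected by tag. The vault names, ARNs, IAM role, and tag are hypothetical placeholders rather than values from the question.

import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily backup rule that also copies each recovery point to a vault
# in the secondary (disaster recovery) Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ec2-dr-plan",
        "Rules": [
            {
                "RuleName": "daily-with-cross-region-copy",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # once per day
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)

# Assign the EC2 fleet to the plan, for example by tag.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "ec2-fleet",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "Backup", "ConditionValue": "daily"}
        ],
    },
)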
A company ingests and processes streaming market data. The data rate is constant. A
nightly process that calculates aggregate statistics is run, and each execution takes about
4 hours to complete. The statistical analysis is not mission critical to the business, and
previous data points are picked up on the next execution if a particular run fails.
The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year
reservations running full time to ingest and store the streaming data in attached Amazon
EBS volumes. On-Demand EC2 instances are launched each night to perform the nightly
processing, accessing the stored data from NFS shares on the ingestion servers, and
terminating the nightly processing servers when complete. The Reserved Instance
reservations are expiring, and the company needs to determine whether to purchase new
reservations or implement a new design.
Which is the most cost-effective design?
A. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use a scheduled script to launch a fleet of EC2 On-Demand Instances each night to perform the batch processing of the S3 data. Configure the script to terminate the instances when the processing is complete.
B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.
C. Update the ingestion process to use a fleet of EC2 Reserved Instances with 3-year reservations behind a Network Load Balancer. Use AWS Batch with Spot Instances to perform nightly processing with a maximum Spot price that is 50% of the On-Demand price.
D. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift. Use Amazon EventBridge to schedule an AWS Lambda function to run nightly to query Amazon Redshift to generate the daily statistics.
Explanation: Updating the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3 removes the need for always-on EC2 instances and EBS volumes for data storage. Using AWS Batch with Spot Instances for the nightly processing leverages the cost savings of Spot Instances, which can be up to 90% cheaper than On-Demand Instances, and AWS Batch handles the scheduling and scaling of the processing jobs. Because the statistical analysis is not mission critical and missed data points are picked up on the next run, occasional Spot interruptions are acceptable, and capping the maximum Spot price at 50% of the On-Demand price keeps the processing cost-effective.
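A minimal boto3 sketch of the kind of Spot-based AWS Batch compute environment that option B describes is shown below; the subnets, security group, and IAM role ARNs are hypothetical placeholders.

import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Managed compute environment that runs the nightly jobs on Spot capacity,
# bidding at most 50% of the On-Demand price.
batch.create_compute_environment(
    computeEnvironmentName="nightly-stats-spot",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "SPOT",
        "bidPercentage": 50,
        "minvCpus": 0,
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],
        "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::111122223333:instance-profile/ecsInstanceRole",
        "spotIamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
    },
    serviceRole="arn:aws:iam::111122223333:role/service-role/AWSBatchServiceRole",
)

A job queue and job definition attached to this compute environment would then run the 4-hour aggregation job each night.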
A company is migrating a document processing workload to AWS. The company has
updated many applications to natively use the Amazon S3 API to store, retrieve, and
modify documents that a processing server generates at a rate of approximately 5
documents every second. After the document processing is finished, customers can
download the documents directly from Amazon S3.
During the migration, the company discovered that it could not immediately update the
processing server that generates many documents to support the S3 API. The server runs
on Linux and requires fast local access to the files that the server generates and modifies.
When the server finishes processing, the files must be available to the public for download
within 30 minutes.
Which solution will meet these requirements with the LEAST amount of effort?
A. Migrate the application to an AWS Lambda function. Use the AWS SDK for Java to generate, modify, and access the files that the company stores directly in Amazon S3.
B. Set up an Amazon S3 File Gateway and configure a file share that is linked to the document store. Mount the file share on an Amazon EC2 instance by using NFS. When changes occur in Amazon S3, initiate a RefreshCache API call to update the S3 File Gateway.
C. Configure Amazon FSx for Lustre with an import and export policy. Link the new file system to an S3 bucket. Install the Lustre client and mount the document store to an Amazon EC2 instance by using NFS.
D. Configure AWS DataSync to connect to an Amazon EC2 instance. Configure a task to synchronize the generated files to and from Amazon S3.
Explanation:
The company should configure Amazon FSx for Lustre with an import and export policy.
The company should link the new file system to an S3 bucket. The company should install
the Lustre client and mount the document store to an Amazon EC2 instance by using
NFS. This solution will meet the requirements with the least amount of effort because
Amazon FSx for Lustre is a fully managed service that provides a high-performance file
system optimized for fast processing of workloads such as machine learning, high
performance computing, video processing, financial modeling, and electronic design
automation. Amazon FSx for Lustre can be linked to an S3 bucket and can import data
from and export data to the bucket. The import and export policy can be configured to
automatically import new or changed objects from S3 and export new or changed files to
S3. This ensures that the files are available to the public for download within 30
minutes. Linux clients mount the file system by using the Lustre client.
The other options are not correct because:
Migrating the application to an AWS Lambda function would require a lot of effort
and may not be feasible for the existing server that generates many documents.
Lambda functions have limitations on execution time, memory, disk space, and network bandwidth.
Setting up an Amazon S3 File Gateway would not work because S3 File Gateway
does not support write-back caching, which means that files written to the file
share are uploaded to S3 immediately and are not available locally until they are
downloaded again. This would not provide fast local access to the files that the
server generates and modifies.
Configuring AWS DataSync to connect to an Amazon EC2 instance would not
meet the requirement of making the files available to the public for download within
30 minutes. DataSync is a service that transfers data between on-premises
storage systems and AWS storage services over the internet or AWS Direct
Connect. DataSync tasks can be scheduled to run at specific times or intervals,
but they are not triggered by file changes.
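As an illustration of how the S3-linked file system in option C might be set up, the following boto3 sketch creates an FSx for Lustre file system and links it to the document bucket with a data repository association whose automatic import and export policies push new and changed files in both directions. The bucket name, subnet, and sizing are hypothetical placeholders, and in practice the file system must reach the AVAILABLE state before the association can be created.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Persistent FSx for Lustre file system; capacity and subnet are placeholders.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,
    SubnetIds=["subnet-aaaa1111"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 125,
    },
)

# Link the file system to the document bucket so that new or changed objects
# are imported automatically and new or changed files are exported back to S3,
# where customers can download them.
fsx.create_data_repository_association(
    FileSystemId=fs["FileSystem"]["FileSystemId"],
    FileSystemPath="/documents",
    DataRepositoryPath="s3://example-document-store",  # hypothetical bucket
    S3={
        "AutoImportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
        "AutoExportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
    },
)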
A solutions architect at a large company needs to set up network security for outbound
traffic to the internet from all AWS accounts within an organization in AWS Organizations.
The organization has more than 100 AWS accounts, and the accounts route to each other
by using a centralized AWS Transit Gateway. Each account has both an internet gateway
and a NAT gateway for outbound traffic to the internet. The company deploys resources
only into a single AWS Region.
The company needs the ability to add centrally managed rule-based filtering on all
outbound traffic to the internet for all AWS accounts in the organization. The peak load of
outbound traffic will not exceed 25 Gbps in each Availability Zone.
Which solution meets these requirements?
A. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group.
B. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default routes to point to the Network Firewall endpoints.
C. Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network Firewall firewalls in each account.
D. In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.
A company is migrating its development and production workloads to a new organization in
AWS Organizations. The company has created a separate member account for
development and a separate member account for production. Consolidated billing is linked
to the management account. In the management account, a solutions architect needs to
create an IAM user that can stop or terminate resources in both member accounts.
Which solution will meet this requirement?
A. Create an IAM user and a cross-account role in the management account. Configure the cross-account role with least privilege access to the member accounts.
B. Create an IAM user in each member account. In the management account, create a cross-account role that has least privilege access. Grant the IAM users access to the cross-account role by using a trust policy.
C. Create an IAM user in the management account. In the member accounts, create an IAM group that has least privilege access. Add the IAM user from the management account to each IAM group in the member accounts.
D. Create an IAM user in the management account. In the member accounts, create cross-account roles that have least privilege access. Grant the IAM user access to the roles by using a trust policy.
Explanation: The cross-account roles should be created in the destination (member) accounts, and each role's trust policy must name the management account as a trusted entity so that the IAM user in the management account can assume the roles.
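A minimal boto3 sketch of what option D describes, run in each member account, might look like the following; the account ID, role name, and policy contents are hypothetical placeholders. The IAM user in the management account would then call sts:AssumeRole on this role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only principals in the management account (111122223333) can assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="StopTerminateEC2Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least-privilege permissions: allow only stopping and terminating instances.
iam.put_role_policy(
    RoleName="StopTerminateEC2Role",
    PolicyName="StopTerminateEC2",
    PolicyDocument=json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
                    "Resource": "*",
                }
            ],
        }
    ),
)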
A manufacturing company is building an inspection solution for its factory. The company
has IP cameras at the end of each assembly line. The company has used Amazon
SageMaker to train a machine learning (ML) model to identify common defects from still
images.
The company wants to provide local feedback to factory workers when a defect is detected.
The company must be able to provide this feedback even if the factory’s internet
connectivity is down. The company has a local Linux server that hosts an API that provides
local feedback to the workers.
How should the company deploy the ML model to meet these requirements?
A. Set up an Amazon Kinesis video stream from each IP camera to AWS. Use Amazon EC2 instances to take still images of the streams. Upload the images to an Amazon S3 bucket. Deploy a SageMaker endpoint with the ML model. Invoke an AWS Lambda function to call the inference endpoint when new images are uploaded. Configure the Lambda function to call the local API when a defect is detected.
B. Deploy AWS IoT Greengrass on the local server. Deploy the ML model to the Greengrass server. Create a Greengrass component to take still images from the cameras and run inference. Configure the component to call the local API when a defect is detected.
C. Order an AWS Snowball device. Deploy a SageMaker endpoint with the ML model and an Amazon EC2 instance on the Snowball device. Take still images from the cameras. Run inference from the EC2 instance. Configure the instance to call the local API when a defect is detected.
D. Deploy Amazon Monitron devices on each IP camera. Deploy an Amazon Monitron Gateway on premises. Deploy the ML model to the Amazon Monitron devices. Use Amazon Monitron health state alarms to call the local API from an AWS Lambda function when a defect is detected.
A company has multiple business units that each have separate accounts on AWS. Each
business unit manages its own network with several VPCs that have CIDR ranges that
overlap. The company’s marketing team has created a new internal application and wants
to make the application accessible to all the other business units. The solution must use
private IP addresses only.
Which solution will meet these requirements with the LEAST operational overhead?
A. Instruct each business unit to add a unique secondary CIDR range to the business unit's VPC. Peer the VPCs and use a private NAT gateway in the secondary range to route traffic to the marketing team.
B. Create an Amazon EC2 instance to serve as a virtual appliance in the marketing account's VPC. Create an AWS Site-to-Site VPN connection between the marketing team and each business unit's VPC. Perform NAT where necessary.
C. Create an AWS PrivateLink endpoint service to share the marketing application. Grant permission to specific AWS accounts to connect to the service. Create interface VPC endpoints in other accounts to access the application by using private IP addresses.
D. Create a Network Load Balancer (NLB) in front of the marketing application in a private subnet. Create an API Gateway API. Use the Amazon API Gateway private integration to connect the API to the NLB. Activate IAM authorization for the API. Grant access to the accounts of the other business units.
Explanation: With AWS PrivateLink, the marketing team can create an endpoint service to
share their internal application with other accounts securely using private IP addresses.
They can grant permission to specific AWS accounts to connect to the service and create
interface VPC endpoints in the other accounts to access the application by using private IP
addresses. This option does not require any changes to the network of the other business
units, and it does not require peering or NATing. This solution is both scalable and secure.
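To sketch what option C involves, the following boto3 calls create the endpoint service in the marketing account (an endpoint service is fronted by a Network Load Balancer), allow another business unit's account to use it, and create an interface endpoint in that consumer account. The ARNs, account IDs, VPC, subnet, and security group IDs are hypothetical placeholders, and in practice the two halves run with credentials for the respective accounts.

import boto3

# Marketing (service provider) account: expose the application's NLB as an endpoint service.
provider_ec2 = boto3.client("ec2", region_name="us-east-1")
service = provider_ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/marketing-app/abc123"
    ],
    AcceptanceRequired=False,
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Grant a specific business unit account permission to connect to the service.
provider_ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)

# Consumer (business unit) account: create an interface endpoint to reach the application
# over private IP addresses, even though the VPC CIDR ranges overlap.
consumer_ec2 = boto3.client("ec2", region_name="us-east-1")
consumer_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service_name,
    SubnetIds=["subnet-cccc3333"],
    SecurityGroupIds=["sg-0fedcba9876543210"],
)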
A solutions architect needs to define a reference architecture for a solution for three-tier
applications with web, application, and NoSQL data layers. The reference architecture
must meet the following requirements:
• High availability within an AWS Region
• Able to fail over in 1 minute to another AWS Region for disaster recovery
• Provide the most efficient solution while minimizing the impact on the user experience
Which combination of steps will meet these requirements? (Select THREE.)
A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 1 hour.
B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
C. Use a global table within Amazon DynamoDB so data can be accessed in the two selected Regions.
D. Back up data from an Amazon DynamoDB table in the primary Region every 60 minutes and then write the data to Amazon S3. Use S3 Cross-Region replication to copy the data from the primary Region to the disaster recovery Region. Have a script import the data into DynamoDB in a disaster recovery scenario.
E. Implement a hot standby model using Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
F. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
Explanation: The requirements can be achieved by using an Amazon DynamoDB database with a global table. DynamoDB is a NoSQL database so it fits the requirements. A global table also allows both reads and writes to occur in both Regions. For the web and application tiers Auto Scaling groups should be configured. Due to the 1-minute RTO these must be configured in an active/passive state. The best pricing model to lower price but ensure resources are available when needed is to use a combination of zonal reserved instances and on-demand instances. To failover between the Regions, a Route 53 failover routing policy can be configured with a TTL configured on the record of 30 seconds. This will mean clients must resolve against Route 53 every 30 seconds to get the latest record. In a failover scenario the clients would be redirected to the secondary site if the primary site is unhealthy.
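A minimal boto3 sketch of the failover routing described above is shown here; the hosted zone ID, domain name, load balancer DNS names, and health check ID are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

# Primary and secondary failover records with a 30-second TTL, so clients
# re-resolve quickly and are redirected when the primary Region is unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 30,
                    "ResourceRecords": [{"Value": "alb-primary.us-east-1.elb.amazonaws.com"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 30,
                    "ResourceRecords": [{"Value": "alb-dr.us-west-2.elb.amazonaws.com"}],
                },
            },
        ]
    },
)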
A company plans to migrate a three-tiered web application from an on-premises data
center to AWS. The company developed the UI by using server-side JavaScript libraries.
The business logic and API tier uses a Python-based web framework. The data tier runs on
a MySQL database.
The company custom built the application to meet business requirements. The company
does not want to re-architect the application. The company needs a solution to replatform
the application to AWS with the least possible amount of development. The solution needs
to be highly available and must reduce operational overhead.
Which solution will meet these requirements?
A. Deploy the UI to a static website on Amazon S3. Use Amazon CloudFront to deliver the website. Build the business logic in a Docker image. Store the image in Amazon Elastic Container Registry (Amazon ECR). Use Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type to host the website with an Application Load Balancer in front. Deploy the data layer to an Amazon Aurora MySQL DB cluster.
B. Build the UI and business logic in Docker images. Store the images in Amazon Elastic Container Registry (Amazon ECR). Use Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type to host the UI and business logic applications with an Application Load Balancer in front. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Deploy the UI to a static website on Amazon S3. Use Amazon CloudFront to deliver the website. Convert the business logic to AWS Lambda functions. Integrate the functions with Amazon API Gateway. Deploy the data layer to an Amazon Aurora MySQL DB cluster.
D. Build the UI and business logic in Docker images. Store the images in Amazon Elastic Container Registry (Amazon ECR). Use Amazon Elastic Kubernetes Service (Amazon EKS) with Fargate profiles to host the UI and business logic. Use AWS Database Migration Service (AWS DMS) to migrate the data layer to Amazon DynamoDB.
Explanation: This solution utilizes Amazon S3 and CloudFront to deploy the UI as a static
website, which can be done with minimal development effort. The business logic and API
tier can be containerized in a Docker image and stored in Amazon Elastic Container
Registry (ECR) and run on Amazon Elastic Container Service (ECS) with the Fargate
launch type, which allows the application to be highly available with minimal operational
overhead. The data layer can be deployed on an Amazon Aurora MySQL DB cluster which
is a fully managed relational database service.
Amazon Aurora provides high availability and performance for the data layer without the
need for managing the underlying infrastructure.
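As a rough sketch of the container portion of this architecture, the following boto3 call creates an ECS service on the Fargate launch type behind an Application Load Balancer target group; the cluster, task definition, subnets, security group, container name, and target group ARN are hypothetical placeholders.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run the business logic containers on Fargate across two subnets (two AZs)
# and register them with an ALB target group for high availability.
ecs.create_service(
    cluster="web-app-cluster",
    serviceName="business-logic-api",
    taskDefinition="business-logic-api:1",  # task definition that references the ECR image
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/abc123",
            "containerName": "business-logic",
            "containerPort": 8080,
        }
    ],
)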