Topic 5: Exam Pool E
A company needs to create an AWS Lambda function that will run in a VPC in the company's primary AWS account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system, the solution must scale to meet the demand. Which solution will meet these requirements MOST cost-effectively?
A. Create a new EFS file system in the primary account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.
B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.
C. Create a second Lambda function in the secondary account that has a mount that is configured for the file system. Use the primary account's Lambda function to invoke the secondary account's Lambda function.
D. Move the contents of the file system to a Lambda layer. Configure the Lambda layer's permissions to allow the company's secondary account to use the Lambda layer.
Explanation: This option is the most cost-effective and scalable way to allow the Lambda function in the primary account to access the EFS file system in the secondary account. VPC peering enables private connectivity between two VPCs without requiring gateways, VPN connections, or dedicated network connections. The Lambda function can use the VPC peering connection to mount the EFS file system as a local file system and access the files as needed. The solution does not incur additional data transfer or storage costs, and it leverages the existing EFS file system without duplicating or moving the data. Option A is not cost-effective because it requires creating a new EFS file system and using AWS DataSync to copy the data from the original EFS file system. This would incur additional storage and data transfer costs, and it would not provide real-time access to the files. Option C is not scalable because it requires creating a second Lambda function in the secondary account and configuring cross-account permissions to invoke it from the primary account. This would add complexity and latency to the solution, and it would increase the Lambda invocation costs. Option D is not feasible because Lambda layers are not designed to store large amounts of data or provide file system access. Lambda layers are used to share common code or libraries across multiple Lambda functions. Moving the contents of the EFS file system to a Lambda layer would exceed the size limit of 250 MB for a layer, and it would not allow the Lambda function to read or write files to the layer.
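For illustration only, here is a minimal boto3 sketch of the approach in option B: request the VPC peering connection and attach the cross-account EFS access point to the Lambda function. All IDs, ARNs, and names below are placeholder assumptions, and the peering must still be accepted, routed, and allowed on TCP 2049 by the security groups.

```python
import boto3

ec2 = boto3.client("ec2")
lam = boto3.client("lambda")

# Request a peering connection from the primary account's VPC to the
# secondary account's VPC (the secondary account must accept it).
ec2.create_vpc_peering_connection(
    VpcId="vpc-primary111",            # placeholder: primary-account VPC
    PeerVpcId="vpc-secondary222",      # placeholder: secondary-account VPC
    PeerOwnerId="222233334444",        # placeholder: secondary account ID
)

# After the peering is accepted and routes/security groups allow NFS (TCP 2049),
# attach the cross-account EFS access point to the Lambda function.
lam.create_function(
    FunctionName="process-shared-files",
    Runtime="python3.12",
    Role="arn:aws:iam::111122223333:role/lambda-efs-role",   # placeholder IAM role
    Handler="app.handler",
    Code={"ZipFile": open("app.zip", "rb").read()},
    VpcConfig={
        "SubnetIds": ["subnet-aaa111"],                       # placeholder subnet
        "SecurityGroupIds": ["sg-bbb222"],                    # placeholder security group
    },
    FileSystemConfigs=[{
        # placeholder EFS access point ARN in the secondary account
        "Arn": "arn:aws:elasticfilesystem:us-east-1:222233334444:access-point/fsap-0123",
        "LocalMountPath": "/mnt/shared",
    }],
)
```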
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests asynchronously. The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products. What should a solutions architect recommend to ensure that all the requests are processed successfully?
A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic.
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle.
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.
Explanation: This option is the most efficient because it uses Amazon CloudFront, which is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. It also uses a CloudFront distribution for the static content, which reduces the load on the EC2 instances and improves the performance and availability of the website. It also uses an Auto Scaling group to launch new instances based on network traffic, which automatically adjusts the compute capacity of your EC2 instances based on load or a schedule. This solution meets the requirement of ensuring that all the requests are processed successfully during events for the launch of new products. Option A is less efficient because it uses a CloudFront distribution for the dynamic content, which is not necessary as the dynamic content is already handled by the API on the EC2 instances. It also increases the number of EC2 instances to handle the increase in traffic, which could incur higher costs and complexity than using an Auto Scaling group. Option C is less efficient because it uses an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle; ElastiCache is a fully managed in-memory data store service that provides sub-millisecond latency for caching and data processing. However, this could introduce additional complexity and latency, and it does not scale automatically based on network traffic. Option D is less efficient because it uses an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances, which is a way to send, store, and receive messages between software components at any volume. However, this does not provide a faster response time to the users, as they have to wait for their requests to be processed by the EC2 instances.
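As a rough sketch of the scaling half of option B, the following boto3 call attaches a target tracking policy that scales the API's Auto Scaling group on average inbound network traffic. The group name and target value are assumptions for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the API fleet on average inbound network traffic per instance.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="api-asg",                # placeholder ASG name
    PolicyName="scale-on-network-in",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageNetworkIn"
        },
        "TargetValue": 50_000_000.0,               # assumed target, in bytes per instance
    },
)
```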
A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred. Which solution meets these requirements?
A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software application on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software application on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.
Explanation: This option is the most efficient because it uses AWS Storage Gateway, which is a service that connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. It also uses a stored volume gateway, which is a type of volume gateway that stores your primary data locally and asynchronously backs up point-in-time snapshots of your data to Amazon S3. It also runs the Storage Gateway software application on premises and maps the gateway storage volumes to on-premises storage, which enables you to use your existing storage hardware and network infrastructure. It also mounts the gateway storage volumes to provide local access to the data, which ensures that your data is available for low latency access on premises while also getting backed up to AWS. This solution meets the requirement of maintaining local access to all the data while it is backed up on AWS and ensuring that the data backed up on AWS is automatically and securely transferred. Option A is less efficient because it uses AWS Snowball, which is a physical device that lets you transfer large amounts of data into and out of AWS. However, this does not provide a periodic backup solution, as it requires manual handling and shipping of the device. It also configures on-premises systems to mount the Snowball S3 endpoint to provide local access to the data, which could introduce additional complexity and latency. Option B is less efficient because it uses AWS Snowball Edge, which is a physical device that has onboard storage and compute capabilities for select AWS capabilities. However, this does not provide a periodic backup solution, as it requires manual handling and shipping of the device. It also uses the Snowball Edge file interface to provide on-premises systems with local access to the data, which could introduce additional complexity and latency. Option C is less efficient because it uses AWS Storage Gateway and configures a cached volume gateway, which is a type of volume gateway that stores your primary data in Amazon S3 and retains a copy of frequently accessed data subsets locally. However, this does not provide local access to all the data, as only some data subsets are cached locally. It also configures a percentage of data to cache locally, which could incur higher costs and complexity than using a stored volume gateway.
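A minimal boto3 sketch of option D, assuming a volume gateway has already been activated on premises; the gateway ARN, local disk ID, and network interface below are placeholders.

```python
import boto3

sgw = boto3.client("storagegateway")

# Create a stored volume on a local disk of an already-activated volume gateway.
volume = sgw.create_stored_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-ABCD1234",  # placeholder
    DiskId="pci-0000:03:00.0-scsi-0:0:0:0",       # placeholder local disk ID
    PreserveExistingData=True,                     # keep data already on the on-premises disk
    TargetName="backup-volume",                    # iSCSI target name exposed on premises
    NetworkInterfaceId="10.0.1.25",                # gateway IP address that clients connect to
)
print(volume["VolumeARN"])
```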
A company hosts its static website by using Amazon S3. The company wants to add a contact form to its webpage. The contact form will have dynamic server-side components for users to input their name, email address, phone number, and user message. The company anticipates that there will be fewer than 100 site visits each month. Which solution will meet these requirements MOST cost-effectively?
A. Host a dynamic contact form page in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to any third-party email provider.
B. Create an Amazon API Gateway endpoint with an AWS Lambda backend that makes a call to Amazon Simple Email Service (Amazon SES).
C. Convert the static webpage to dynamic by deploying Amazon Lightsail. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
D. Create a t2.micro Amazon EC2 instance. Deploy a LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance. The company wants to optimize customer session management during transactions. The application must store session data durably. Which solutions will meet these requirements? (Select TWO.)
A. Turn on the sticky sessions feature (session affinity) on the ALB
B. Use an Amazon DynamoDB table to store customer session information
C. Deploy an Amazon Cognito user pool to manage user session information
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information
E. Use AWS Systems Manager Application Manager in the application to manage user session information
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead. Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Explanation: This option is the most cost-effective and simple way to enable SFTP access to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled server with a public endpoint and associate it with your S3 bucket. You can also use AWS Identity and Access Management (IAM) roles and policies to control access to your S3 data lake. The service scales automatically to handle any volume of file transfers and provides high availability and durability. You do not need to provision, manage, or patch any servers or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a hybrid cloud storage service that provides a local file system interface to S3. You can use it to store and retrieve files as objects in S3 using standard file protocols such as NFS and SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway appliance on-premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an EC2 instance in a private subnet and setting up a VPN connection for the new partner. This would incur additional costs for the EC2 instance, the VPN connection, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instance to upload files to the S3 data lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing multiple EC2 instances in a private subnet and placing a NLB in front of them. This would incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instances to upload files to the S3 data lake, which is not efficient or reliable.
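For reference, a minimal boto3 sketch of option A: create a service-managed, publicly accessible SFTP endpoint backed by Amazon S3 and add a user for the partner. The IAM role, bucket path, and SSH key below are placeholder assumptions.

```python
import boto3

transfer = boto3.client("transfer")

# Create a managed, publicly accessible SFTP endpoint backed by Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Map the partner's user to a prefix in the data lake bucket.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-upload",
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",  # placeholder IAM role
    HomeDirectory="/my-data-lake-bucket/partner-uploads",       # placeholder bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA... partner-key",             # placeholder public key
)
```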
A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency and must minimize maintenance. Which solution meets these requirements?
A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to poll messages from its own data stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly. What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
Explanation: The solution that will improve the performance of the data tier is to deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. This solution will enable the game to store and retrieve the location data of the players in a fast and scalable way, as Redis is an in-memory data store that supports geospatial data types and commands. By using ElastiCache for Redis, the game can reduce the load on the RDS for PostgreSQL DB instance, which is not optimized for high-frequency updates and queries of location data. ElastiCache for Redis also supports replication, sharding, and auto scaling to handle the increasing user base of the game. The other solutions are not as effective as the first one because they either do not improve the performance, do not support geospatial data, or do not leverage caching. Taking a snapshot of the existing DB instance and restoring it with Multi-AZ enabled will not improve the performance of the data tier, as it only provides high availability and durability, but not scalability or low latency. Migrating from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards will not improve the performance of the data tier, as OpenSearch Service is mainly designed for full-text search and analytics, not for real-time location tracking. OpenSearch Service also does not support geospatial data types and commands natively, unlike Redis. Deploying Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance and modifying the game to use DAX will not improve the performance of the data tier, as DAX is only compatible with DynamoDB, not with RDS for PostgreSQL. DAX also does not support geospatial data types and commands.
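To illustrate the geospatial support that makes Redis a good fit here, the following redis-py sketch writes and queries player positions; the cluster endpoint and key names are assumptions.

```python
import redis

# Connect to the ElastiCache for Redis cluster endpoint (placeholder hostname).
r = redis.Redis(
    host="game-sessions.abc123.use1.cache.amazonaws.com",
    port=6379,
    decode_responses=True,
)

# Write a player's latest position; GEOADD takes longitude, latitude, member.
r.geoadd("player:locations", (-122.4194, 37.7749, "player-42"))

# Read back nearby players within 5 km of a point.
nearby = r.georadius("player:locations", -122.42, 37.77, 5, unit="km")
print(nearby)
```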
A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple-speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes. Which solution will meet these requirements?
A. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
B. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
C. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
D. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.
Explanation: Amazon Transcribe now supports speaker labeling for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to convert speech-to-text. In live audio transcription, each stream of audio may contain multiple speakers. Now you can conveniently turn on the ability to label speakers, thus helping to identify who is saying what in the output transcript.
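A minimal boto3 sketch of option B, assuming the call recordings and transcript output live in Amazon S3; bucket and job names are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a transcription job with speaker labeling enabled.
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-001",
    Media={"MediaFileUri": "s3://call-recordings-bucket/call-2024-001.wav"},  # placeholder
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts-bucket",   # transcripts land in S3 for Athena queries
    Settings={
        "ShowSpeakerLabels": True,   # label each speaker in the output transcript
        "MaxSpeakerLabels": 2,       # assumed: agent + customer
    },
)
```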
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is recorded. The company does not want the new service to affect the performance of the current application. What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range. Which design change should the solutions architect recommend?
A. Add read replicas to the table
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.
Explanation: The most suitable design change for the company's application is to request strongly consistent reads for the table. This change will ensure that the requests to the table return the latest data, reflecting the updates from all prior write operations.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports two types of read consistency: eventually consistent reads and strongly consistent reads. By default, DynamoDB uses eventually consistent reads, unless users specify otherwise.
Eventually consistent reads are reads that may not reflect the results of a recently completed write operation. The response might not include the changes because of the latency of propagating the data to all replicas. If users repeat their read request after a short time, the response should return the updated data. Eventually consistent reads are suitable for applications that do not require up-to-date data or can tolerate eventual consistency.
Strongly consistent reads are reads that return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by setting the ConsistentRead parameter to true in their read operations, such as GetItem, Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date data or cannot tolerate eventual consistency.
The other options are not correct because they do not address the issue of read consistency or are not relevant for the use case. Adding read replicas to the table is not correct because this option is not supported by DynamoDB. Read replicas are copies of a primary database instance that can serve read-only traffic and improve availability and performance. Read replicas are available for some relational database services, such as Amazon RDS or Amazon Aurora, but not for DynamoDB. Using a global secondary index (GSI) is not correct because this option is not related to read consistency. A GSI is an index that has a partition key and an optional sort key that are different from those on the base table. A GSI allows users to query the data in different ways, with eventual consistency. Requesting eventually consistent reads for the table is not correct because this option is already the default behavior of DynamoDB and does not solve the problem of requests not returning the latest data.
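For illustration, a strongly consistent read with the DynamoDB resource API looks like the sketch below; the table name and key schema are placeholder assumptions.

```python
import boto3

table = boto3.resource("dynamodb").Table("app-data")  # placeholder table name

# A strongly consistent GetItem reflects all writes acknowledged before the read.
item = table.get_item(
    Key={"pk": "user#123"},      # placeholder key schema
    ConsistentRead=True,
)
print(item.get("Item"))
```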
A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run one-time queries on historical data. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena
C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Explanation: Because the application needs to read data with sub-millisecond latency, DynamoDB with DAX is the answer. DynamoDB supports some of the world's largest scale applications by providing consistent, single-digit millisecond response times at any scale, and you can build applications with virtually unlimited throughput and storage.
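A rough boto3 sketch of the export-and-query flow in option C; the table ARN, bucket names, and Athena table name are assumptions, and point-in-time recovery must be enabled on the table before the export call.

```python
import boto3

dynamodb = boto3.client("dynamodb")
athena = boto3.client("athena")

# Export the table to S3 using point-in-time recovery (PITR must be enabled).
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/game-data",  # placeholder
    S3Bucket="game-data-exports",                                        # placeholder bucket
    ExportFormat="DYNAMODB_JSON",
)

# Later, run a one-time query on the exported data with Athena
# (assumes a table named game_history was defined over the export location).
athena.start_query_execution(
    QueryString="SELECT * FROM game_history WHERE event_date = '2024-01-01' LIMIT 100",
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
```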