Topic 3, Exam Pool C
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications. Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications experience database connection rejection errors. Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users' applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users' applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users' applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users' applications to switch between the DB instances.
Explanation: Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability.
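As an illustrative sketch only (the proxy name, ARNs, subnet IDs, and DB instance identifier are placeholders, not values from the question), creating an RDS Proxy with boto3 and registering the existing DB instance as its target might look like this; applications then connect to the proxy endpoint so connections are pooled and shared:

```python
import boto3

rds = boto3.client("rds")

# Create the proxy in front of the MySQL DB instance.
proxy = rds.create_db_proxy(
    DBProxyName="app-mysql-proxy",                       # hypothetical name
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-west-2:111122223333:secret:db-creds",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",  # placeholder
    VpcSubnetIds=["subnet-0abc", "subnet-0def"],          # placeholder subnet IDs
    RequireTLS=True,
)

# Register the existing DB instance as the proxy target.
rds.register_db_proxy_targets(
    DBProxyName="app-mysql-proxy",
    DBInstanceIdentifiers=["my-mysql-instance"],          # placeholder identifier
)

# Applications connect to the proxy endpoint instead of the instance endpoint.
print(proxy["DBProxy"]["Endpoint"])
```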
A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that displays the simulation output and requires an SMB file system. The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company needs to migrate the applications to AWS without making code changes to either application. Which solution will meet these requirements?
A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
Explanation: This solution will meet the requirements because Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, and feature-rich file storage built on NetApp’s popular ONTAP file system. FSx for ONTAP supports both NFS and SMB protocols, which means it can be accessed by both Linux and Windows applications without code changes. FSx for ONTAP also eliminates data duplication and inefficient resource usage by automatically tiering infrequently accessed data to a lower-cost storage tier and providing storage efficiency features such as deduplication and compression. FSx for ONTAP also integrates with other AWS services such as Amazon S3, AWS Backup, and AWS CloudFormation. By migrating the applications to Amazon EC2 instances, the company can leverage the scalability, security, and performance of AWS compute resources.
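A minimal sketch of provisioning the shared file system with boto3 (storage size, throughput, subnet IDs, and names are hypothetical; SMB access additionally requires joining the storage virtual machine to Active Directory, which is omitted here):

```python
import boto3

fsx = boto3.client("fsx")

# Create a Multi-AZ FSx for ONTAP file system that both applications can mount.
fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                                 # GiB, illustrative size
    SubnetIds=["subnet-0abc", "subnet-0def"],             # placeholder subnet IDs
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-0abc",
        "ThroughputCapacity": 128,                        # MB/s, illustrative
    },
)

# A storage virtual machine (SVM) exposes the same data over NFS (Linux) and SMB (Windows).
svm = fsx.create_storage_virtual_machine(
    FileSystemId=fs["FileSystem"]["FileSystemId"],
    Name="experiments-svm",                               # hypothetical SVM name
)
```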
A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect needs to create a new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development VPC is 192.168.0.0/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering connection to the development VPC. What is the SMALLEST CIDR block that meets these requirements?
A. 10.0.1.0/32
B. 192.168.0.0/24
C. 192.168.1.0/32
D. 10.0.1.0/24
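The reasoning can be checked quickly with Python's ipaddress module: a VPC CIDR block must have a prefix length between /16 and /28, and a peered VPC's CIDR must not overlap the development VPC's 192.168.0.0/24 range, which leaves 10.0.1.0/24 as the only valid option among the choices:

```python
import ipaddress

development_vpc = ipaddress.ip_network("192.168.0.0/24")

candidates = ["10.0.1.0/32", "192.168.0.0/24", "192.168.1.0/32", "10.0.1.0/24"]

for cidr in candidates:
    net = ipaddress.ip_network(cidr)
    # A VPC CIDR block must be between /16 and /28, and a peered VPC
    # must not overlap the development VPC's range.
    valid_size = 16 <= net.prefixlen <= 28
    overlaps = net.overlaps(development_vpc)
    print(f"{cidr}: valid VPC size={valid_size}, overlaps dev VPC={overlaps}")

# Only 10.0.1.0/24 is both an allowed VPC size and non-overlapping.
```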
A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Which solution meets these requirements and is MOST cost-effective?
A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
Explanation: This solution meets the following requirements:
It is cost-effective: the company pays only for the storage and data transfer of the replicated objects, and no additional AWS services or custom scripts are required. S3 Same-Region Replication (SRR) automatically replicates objects across S3 buckets within the same AWS Region, which helps aggregate logs from multiple sources into a single destination for analysis and auditing. SRR also preserves the metadata, encryption, and access control of the source objects.
It is operationally efficient: SRR requires no manual intervention or scheduling. Objects are replicated as soon as they are uploaded to the source bucket, so the destination bucket always has the latest log data, and updates or deletions of the source objects keep the destination bucket in sync. SRR can be enabled with a few clicks in the S3 console or with a simple API call.
It is secure: the logs never leave the us-west-2 Region because SRR only replicates objects within the same AWS Region, which satisfies the data sovereignty and compliance requirements. SRR also supports encryption of the source and destination objects using server-side encryption with AWS KMS keys or S3-managed keys, or client-side encryption.
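A rough sketch of enabling SRR on one application account's bucket with boto3 (bucket names and the IAM role ARN are placeholders; versioning must already be enabled on both the source and the centralized destination bucket):

```python
import boto3

s3 = boto3.client("s3")

# Run once per application bucket to replicate new log objects to the central bucket.
s3.put_bucket_replication(
    Bucket="app-account-logs-us-west-2",                  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder role
        "Rules": [{
            "ID": "replicate-logs-to-central-bucket",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},                     # replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::central-log-analysis-us-west-2",  # placeholder
                # For cross-account replication, the destination account and an
                # AccessControlTranslation block can also be specified here.
            },
        }],
    },
)
```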
A company is launching an application on AWS. The application uses an Application Load Balancer (ALB) to direct traffic to at least two Amazon EC2 instances in a single target group. The instances are in an Auto Scaling group for each environment. The company requires a development and a production environment. The production environment will have periods of high traffic. Which solution will configure the development environment MOST cost-effectively?
A. Reconfigure the target group in the development environment to have one EC2 instance as a target.
B. Change the ALB balancing algorithm to least outstanding requests.
C. Reduce the size of the EC2 instances in both environments.
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group
Explanation: This option will configure the development environment in the most cost-effective way as it reduces the number of instances running in the development environment and therefore reduces the cost of running the application. The development environment typically requires less resources than the production environment, and it is unlikely that the development environment will have periods of high traffic that would require a large number of instances. By reducing the maximum number of instances in the development environment's Auto Scaling group, the company can save on costs while still maintaining a functional development environment.
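For illustration, capping the development environment's Auto Scaling group might look like the following boto3 call (the group name and sizes are hypothetical; the minimum is kept at two because the target group must contain at least two instances):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Lower the ceiling of the development Auto Scaling group to reduce cost.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="dev-web-asg",   # hypothetical group name
    MinSize=2,                            # target group still has at least two instances
    MaxSize=2,                            # much lower ceiling than production
    DesiredCapacity=2,
)
```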
A company is moving its data management application to AWS. The company wants to transition to an event-driven architecture. The architecture needs to be more distributed and to use serverless concepts while performing the different aspects of the workflow. The company also wants to minimize operational overhead. Which solution will meet these requirements?
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow steps on the EC2 instances.
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda functions to process the workflow steps.
Explanation: This answer is correct because it meets the requirements of transitioning to an event-driven architecture, using serverless concepts, and minimizing operational overhead. AWS Step Functions is a serverless service that lets you coordinate multiple AWS services into workflows using state machines. State machines are composed of tasks and transitions that define the logic and order of execution of the workflow steps. AWS Lambda is a serverless function-as-a-service platform that lets you run code without provisioning or managing servers. Lambda functions can be invoked by Step Functions as tasks in a state machine and can perform different aspects of the data management workflow, such as data ingestion, transformation, validation, and analysis. By using Step Functions and Lambda, the company benefits from the following advantages:
Event-driven: Step Functions can trigger Lambda functions based on events, such as timers, API calls, or other AWS service events. Lambda functions can also emit events to other services or state machines, creating an event-driven architecture.
Serverless: Step Functions and Lambda are fully managed by AWS, so the company does not need to provision or manage any servers or infrastructure. The company pays only for the resources consumed by the workflows and functions and can scale up or down automatically based on demand.
Operational overhead: Step Functions and Lambda simplify the development and deployment of workflows and functions by providing built-in monitoring, logging, tracing, error handling, retry logic, and security. The company can focus on the business logic and data processing rather than the operational details.
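A minimal sketch of such a state machine, expressed in Amazon States Language and created with boto3 (the Lambda function ARNs, state names, and IAM role are hypothetical stand-ins for the company's actual workflow steps):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Each Task state invokes one Lambda function that handles a step of the workflow.
definition = {
    "StartAt": "IngestData",
    "States": {
        "IngestData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-west-2:111122223333:function:ingest-data",     # placeholder
            "Next": "TransformData",
        },
        "TransformData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-west-2:111122223333:function:transform-data",  # placeholder
            "Next": "ValidateData",
        },
        "ValidateData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-west-2:111122223333:function:validate-data",   # placeholder
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="data-management-workflow",                       # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/step-functions-role",  # placeholder role
)
```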
A rapidly growing ecommerce company is running its workloads in a single AWS Region. A solutions architect must create a disaster recovery (DR) strategy that includes a different AWS Region. The company wants its database to be up to date in the DR Region with the least possible latency. The remaining infrastructure in the DR Region needs to run at reduced capacity and must be able to scale up if necessary. Which solution will meet these requirements with the LOWEST recovery time objective (RTO)?
A. Use an Amazon Aurora global database with a pilot light deployment
B. Use an Amazon Aurora global database with a warm standby deployment
C. Use an Amazon RDS Multi-AZ DB instance with a pilot light deployment
D. Use an Amazon RDS Multi-AZ DB instance with a warm standby deployment
A solutions architect is designing a multi-tier application for a company. The application's users upload images from a mobile device. The application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully. The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application tiers. What should the solutions architect do to meet these requirements?
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to invoke the Lambda function.
B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the user when thumbnail generation is complete.
C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for thumbnail generation. Alert the user through an application message that the image was received.
D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push notification after thumbnail generation is complete.
Explanation: This option is the most efficient because it uses Amazon SQS, a fully managed message queuing service that lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Placing a message on an SQS queue asynchronously dispatches the thumbnail work to a separate application tier, which decouples the image upload process from the thumbnail generation process and enables scalability and reliability. Alerting the user through an application message that the image was received provides a faster response time than waiting for thumbnail generation to complete.
Option A is less efficient because a custom AWS Lambda function that generates the thumbnail and alerts the user does not decouple the upload process from thumbnail generation through an asynchronous dispatch mechanism, and using the image upload process as the event source could cause concurrency issues if many images are uploaded at once.
Option B is less efficient because AWS Step Functions, a fully managed service that provides a graphical console to arrange and visualize the components of an application as a series of steps, is used here to orchestrate the tiers and alert the user only when thumbnail generation is complete, which introduces additional complexity and latency rather than asynchronously dispatching the work.
Option D is less efficient because Amazon SNS, a fully managed messaging service that can notify users directly with SMS text messages, email, or mobile push, is used to generate the thumbnail after the image upload is complete and to message the user only after thumbnail generation is complete, which introduces additional complexity and latency.
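A small sketch of the dispatch pattern with boto3 (the queue URL, payload, and bucket/key are placeholders): the upload tier enqueues a request and confirms receipt to the user immediately, while a separate tier consumes the queue and generates thumbnails at its own pace.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-west-2.amazonaws.com/111122223333/thumbnail-requests"  # placeholder

# Upload tier: after the original image is stored, enqueue a thumbnail request
# and immediately confirm receipt to the user.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"bucket": "uploads", "key": "images/photo123.jpg"}',  # placeholder payload
)

# Thumbnail tier: workers poll the queue and process images asynchronously.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in messages.get("Messages", []):
    # ... generate the thumbnail here (may take up to 60 seconds) ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```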
A company is designing a web application on AWS. The application will use a VPN connection between the company's existing data centers and the company's VPCs. The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with the on-premises services from a VPC. Which solution will meet these requirements in the MOST secure manner?
A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC
B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC.
C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.
Explanation: To meet the requirements of the web application in the most secure manner, the company should create a Route 53 Resolver outbound endpoint, create a resolver rule, and associate the resolver rule with the VPC. This allows the application to use private DNS records to communicate with the on-premises services from the VPC. Route 53 Resolver is a service that enables DNS resolution between on-premises networks and AWS VPCs. An outbound endpoint is a set of IP addresses that Resolver uses to forward DNS queries from a VPC to resolvers on an on-premises network, and a resolver rule specifies the domain names for which Resolver forwards DNS queries to the IP addresses defined in the rule. By creating an outbound endpoint and a resolver rule and associating them with the VPC, the company can securely resolve DNS queries for the on-premises services using private DNS records.
The other options do not meet the requirements or are not secure. An inbound endpoint is a set of IP addresses that Resolver uses to receive DNS queries from resolvers on an on-premises network, so creating a Route 53 Resolver inbound endpoint would allow DNS queries from the on-premises network to reach resources in a VPC, not the other way around. A Route 53 private hosted zone only allows DNS resolution for resources within the VPCs that are associated with that hosted zone, not for the on-premises services. A Route 53 public hosted zone is a container for DNS records that are accessible from anywhere on the internet, so creating records for each service there would expose the on-premises services to the public internet, which is not secure.
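For illustration, the outbound endpoint, forwarding rule, and VPC association might be created as follows with boto3 (the domain name, on-premises DNS IP, subnet IDs, security group, and VPC ID are all hypothetical):

```python
import boto3

resolver = boto3.client("route53resolver")

# Outbound endpoint: the IP addresses Resolver uses to forward queries to on-premises DNS.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-001",              # any unique string
    Name="to-onprem-dns",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0abc"],                           # placeholder security group
    IpAddresses=[{"SubnetId": "subnet-0abc"}, {"SubnetId": "subnet-0def"}],  # placeholders
)

# Forwarding rule: send queries for the on-premises domain to the data center resolvers.
rule = resolver.create_resolver_rule(
    CreatorRequestId="forward-rule-001",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",                          # hypothetical private domain
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],            # placeholder on-premises DNS server
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the application VPC.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0abc1234",                                   # placeholder VPC ID
)
```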
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's operations. The company wants to prevent this type of disruption in the future. Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B. Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C. Configure deletion protection on the DynamoDB tables.
D. Enable point-in-time recovery on the DynamoDB tables
Explanation: Deletion protection is a feature of DynamoDB that prevents accidental deletion of tables. When deletion protection is enabled, you cannot delete a table unless you explicitly disable it first. This adds an extra layer of security and reduces the risk of data loss and operational disruption. Deletion protection is easy to enable and disable using the AWS Management Console, the AWS CLI, or the DynamoDB API. This solution has the least operational overhead, as you do not need to create, manage, or invoke any additional resources or services.
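A minimal example of turning the setting on with boto3 (the table name is hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable deletion protection on an existing table; DeleteTable calls will fail
# until the setting is explicitly disabled again.
dynamodb.update_table(
    TableName="critical-data",            # hypothetical table name
    DeletionProtectionEnabled=True,
)
```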
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora database cluster that extends across multiple Availability Zones. The company wants to expand globally and to ensure that its application has minimal downtime. Which solution will meet these requirements?
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Explanation: This option is the most efficient because it deploys the web tier and the application tier to a second Region, which provides high availability and redundancy for the application. It uses an Amazon Aurora global database, a feature that allows a single Aurora database to span multiple AWS Regions, deploying the database in the primary Region and the second Region for low-latency global reads and fast recovery from a Regional outage. It uses Amazon Route 53 health checks with a failover routing policy so that traffic is routed to healthy endpoints in the second Region during an outage, and it promotes the secondary cluster to primary as needed so that write operations can continue. This solution meets the requirement of expanding globally while ensuring minimal downtime for the application.
Option A is less efficient because an Auto Scaling group is a Regional resource and cannot deploy instances into Availability Zones of a second Region, so the web and application tiers cannot simply be extended across Regions this way.
Option B is less efficient because, although it deploys the web tier and the application tier to a second Region and adds an Aurora PostgreSQL cross-Region Aurora Replica for read scalability, it does not use an Aurora global database, which provides faster replication and recovery than cross-Region replicas.
Option C is less efficient because, although it deploys the web tier and the application tier to a second Region and creates a separate Aurora PostgreSQL database there, replicating the primary database with AWS Database Migration Service (AWS DMS) provides slower replication and recovery than an Aurora global database.
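A rough boto3 sketch of building the Aurora global database (the Regions, identifiers, and cluster ARN are assumptions for illustration; the engine is shown as Aurora PostgreSQL to match the options):

```python
import boto3

# Primary Region: convert the existing Aurora cluster into a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")         # assumed primary Region
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="streaming-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:streaming-db",  # placeholder
)

# Second Region: add a read-only secondary cluster that Aurora keeps replicated
# and that can be promoted to primary during a Regional outage.
rds_secondary = boto3.client("rds", region_name="eu-west-1")        # assumed second Region
rds_secondary.create_db_cluster(
    DBClusterIdentifier="streaming-db-secondary",
    Engine="aurora-postgresql",
    GlobalClusterIdentifier="streaming-global-db",
)
```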
A company is using an Application Load Balancer (ALB) to present its application to the internet. The company finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into the infrastructure to help the company understand these abnormalities better. What is the MOST operationally efficient solution that meets these requirements?
A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information
B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log information.
Explanation: This solution meets the requirements because it allows the company to improve visibility into the infrastructure by using ALB access logging and Amazon Athena. ALB access logging is a feature that captures detailed information about requests sent to the load balancer, such as the client’s IP address, request path, response code, and latency. By enabling ALB access logging to Amazon S3, the company can store the access logs in an S3 bucket as compressed files. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. By creating a table in Amazon Athena for the access logs, the company can query the logs and get results in seconds. This way, the company can better understand the abnormal traffic access patterns across the application.
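For illustration, enabling access logging on the ALB and running an Athena query are sketched below with boto3 (the load balancer ARN, bucket names, and the alb_logs Athena table are assumed placeholders; the table itself would be created over the log prefix using the documented ALB log format):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn on ALB access logging to an S3 bucket (the bucket policy must allow
# log delivery from Elastic Load Balancing).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/my-alb/abc123",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "alb-access-logs-bucket"},   # placeholder bucket
        {"Key": "access_logs.s3.prefix", "Value": "prod-alb"},
    ],
)

# Query the logs with Athena once a table has been defined over the log files.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="""
        SELECT client_ip, request_url, COUNT(*) AS hits
        FROM alb_logs                     -- assumed Athena table over the ALB access logs
        GROUP BY client_ip, request_url
        ORDER BY hits DESC
        LIMIT 50
    """,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},  # placeholder
)
```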