Topic 5: Exam Pool E
A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types. Which solutions to deploy the SCP will meet these requirements? (Select TWO.)
A. Attach the SCP to the root OU for the organization.
B. Attach the SCP to the three nonproduction Organizations member accounts.
C. Attach the SCP to the Organizations management account.
D. Create an OU for the production account. Attach the SCP to the OU. Move the production member account into the new OU.
E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into the new OU.
Explanation: SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization and help you ensure that your accounts stay within your organization's access control guidelines. To apply an SCP to a specific set of accounts, you can create an OU for those accounts and attach the SCP to the OU; the SCP then affects only the member accounts in that OU and not the other accounts in the organization. If you attach the SCP to the root OU, it applies to all accounts in the organization, including the production account, which is not the desired outcome. If you attach the SCP to the management account, it has no effect, because SCPs do not affect users or roles in the management account. Therefore, the best solutions are B and E. Option B attaches the SCP directly to the three nonproduction accounts, while option E creates a separate OU for the nonproduction accounts and attaches the SCP to that OU. Both options restrict the prohibited EC2 instance types in the nonproduction accounts, but option E is more scalable and manageable if more accounts or policies need to be applied in the future.
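As a rough sketch of the SCP itself, a deny statement on ec2:RunInstances can be conditioned on the instance type. The instance types below are hypothetical examples, not taken from the question; substitute the sizes your organization actually prohibits.

```python
import json

# Sketch of an SCP that denies launching certain EC2 instance types.
# The types listed are illustrative placeholders only.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyProhibitedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:InstanceType": ["x1e.32xlarge", "p4d.24xlarge"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached to the nonproduction OU (option E), this denies the listed types in every account in that OU while leaving the production account unaffected.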
A company wants to migrate its on-premises data center to AWS. According to the company's compliance requirements, the company can use only the ap-northeast-3 Region. Company administrators are not permitted to connect VPCs to the internet. Which solutions will meet these requirements? (Choose two.)
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPs) that prevent VPCs from gaining internet access. Deny access to all AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and alert for new resources deployed outside of ap-northeast-3.
A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible. How should a solutions architect design the application to ensure the LEAST amount of latency for all users?
A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin.
B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest Region.
C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content directly from the ALB.
D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in the closest Region.
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a report of each instance's patch status. Which solution will meet these requirements?
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the EC2 instances on a regular schedule.
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Explanation: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It automatically assesses applications for exposure, vulnerabilities, and deviations from best practices, and produces a detailed list of security findings prioritized by level of severity. Amazon Inspector can scan the EC2 instances for software vulnerabilities and report each instance's patch status. AWS Systems Manager Patch Manager is a capability of AWS Systems Manager that automates the process of patching managed nodes with both security-related updates and other types of updates. Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, in addition to optional lists of approved and rejected patches. Patch Manager can patch fleets of Amazon EC2 instances, edge devices, on-premises servers, and virtual machines (VMs) by operating system type, on a regular schedule, and it provides a report of each instance's patch status. Therefore, the combination of Amazon Inspector and AWS Systems Manager Patch Manager meets the requirements.
The other options are not valid:
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data stored in Amazon S3. Macie performs data classification and protection; it does not scan EC2 instances for software vulnerabilities. A cron job is a task scheduled with the Linux cron utility; maintaining a cron job individually on every instance does not scale across a large fleet and provides no centralized patch-status reporting.
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. GuardDuty analyzes network and API activity for anomalies; it does not scan EC2 instances for software vulnerabilities. AWS Systems Manager Session Manager is a fully managed AWS Systems Manager capability that lets you manage your Amazon EC2 instances, edge devices, on-premises servers, and virtual machines (VMs) through an interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable node management; it does not patch EC2 instances on a regular schedule.
Amazon Detective is a security service that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by collecting and analyzing data from AWS sources such as Amazon GuardDuty, Amazon VPC Flow Logs, and AWS CloudTrail; it does not scan EC2 instances for software vulnerabilities. Amazon EventBridge is a serverless event bus that makes it easy to connect applications using data from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services, routing that data to targets such as AWS Lambda. EventBridge triggers actions based on events; it does not itself patch EC2 instances.
References: Amazon Inspector, AWS Systems Manager Patch Manager, Amazon Macie, cron, Amazon GuardDuty, Amazon Detective, Amazon EventBridge
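To illustrate the patching half of the accepted answer, the request for a Patch Manager baseline with an auto-approval delay can be assembled as follows. All names and values are illustrative examples, not part of the question; with boto3 this dict would be passed as ssm.create_patch_baseline(**baseline).

```python
# Illustrative parameters for Systems Manager Patch Manager's
# CreatePatchBaseline API. Names, OS, and values are example choices.
baseline = {
    "Name": "example-linux-baseline",
    "OperatingSystem": "AMAZON_LINUX_2",
    "ApprovalRules": {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                # Auto-approve matching patches 7 days after release.
                "ApproveAfterDays": 7,
            }
        ]
    },
}
```

A maintenance window or State Manager association would then run the AWS-RunPatchBaseline document against the fleet on the desired schedule.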
A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company's security policy requires that all website traffic be inspected by AWS WAF. How should the solutions architect comply with these requirements?
A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.
B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.
D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.
A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2 Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL injection and cross-site scripting attacks. Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.
B. Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
C. Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.
D. Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.
Explanation: Using AWS WAF has several benefits. It provides additional protection against web attacks using criteria that you specify, based on characteristics of web requests such as the presence of SQL code that is likely to be malicious (known as SQL injection) or the presence of a script that is likely to be malicious (known as cross-site scripting). AWS Firewall Manager simplifies administration and maintenance tasks across multiple accounts and resources for a variety of protections, which makes it the choice with the least administrative effort for APIs spread across multiple accounts and Regions.
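As a sketch of the WAF side, the web ACL rules that Firewall Manager would roll out can reference AWS managed rule groups covering SQL injection and cross-site scripting. The group names below are the AWS-published managed rule group identifiers to the best of my knowledge (verify against the current WAF documentation); priorities and metric names are arbitrary example values.

```python
# Illustrative AWS WAFv2 rule entries attaching AWS managed rule groups.
# SQLiRuleSet targets SQL injection; the Common rule set includes
# cross-site scripting protections.
def managed_rule(name: str, priority: int) -> dict:
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

web_acl_rules = [
    managed_rule("AWSManagedRulesSQLiRuleSet", 1),
    managed_rule("AWSManagedRulesCommonRuleSet", 2),
]
```

Firewall Manager then associates the resulting regional web ACL with the API Gateway stages in each member account, so individual teams do not configure WAF themselves.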
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage. Which solutions will meet these requirements? (Select TWO.)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Explanation: These options are the best solutions because they allow the company to run the Docker-based application in the AWS Cloud using a managed service that scales automatically and does not require any infrastructure to manage. By using AWS Fargate, the company can launch and run containers without having to provision, configure, or scale clusters of EC2 instances; Fargate allocates the right amount of compute resources for each container and scales them up or down as needed. By using Amazon ECS or Amazon EKS, the company can choose the container orchestration platform that suits its needs: Amazon ECS is a fully managed service that integrates with other AWS services and simplifies the deployment and management of containers, while Amazon EKS is a managed service that runs Kubernetes on AWS and provides compatibility with existing Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers. This option is not suitable because Lambda is not designed to host long-running container services. Although Lambda functions can be packaged as container images, each invocation runs in a short-lived, sandboxed execution environment, so Lambda cannot scale individual container services in and out according to demand in the way the company requires.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes. This option is not optimal because it requires the company to manage the EC2 instances that host the containers. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which increases the operational overhead and infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes. This option is not ideal for the same reason: the company would have to provision, configure, scale, patch, and monitor the EC2 worker nodes, which increases the operational overhead and infrastructure costs.
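The Fargate setup can be sketched as an ECS task definition; everything below is an illustrative example (family name, image URI, sizes), not part of the question. With boto3 this dict would be passed as ecs.register_task_definition(**task_def).

```python
# Illustrative ECS task definition for a Fargate launch type.
task_def = {
    "family": "example-web-app",
    "requiresCompatibilities": ["FARGATE"],  # no EC2 worker nodes to manage
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "256",                            # 0.25 vCPU
    "memory": "512",                         # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
```

An ECS service with auto scaling policies would then adjust the number of running tasks per container service according to demand.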
A business's backup data totals 700 terabytes (TB) and is kept in network attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company's public internet connection provides 500 Mbps of dedicated capacity for data transport. What should a solutions architect do to ensure that data is migrated and stored at the LOWEST possible cost?
A. Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
B. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier.
C. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive.
D. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.
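The one-month deadline effectively rules out the online-transfer options (B, C, and D); a back-of-the-envelope calculation at the stated 500 Mbps shows why, even assuming an ideal, fully utilized link:

```python
# How long would 700 TB take over a dedicated 500 Mbps connection,
# assuming perfect sustained throughput (decimal TB, no overhead)?
data_bits = 700e12 * 8   # 700 TB expressed in bits
link_bps = 500e6         # 500 Mbps
seconds = data_bits / link_bps
days = seconds / 86400
print(f"{days:.0f} days")  # roughly 130 days -- far beyond one month
```

That leaves physical transfer with Snowball devices (option A), and a lifecycle transition to S3 Glacier Deep Archive is the lowest-cost tier for seven-year retention with only occasional retrieval.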
A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS. Which solution will meet these requirements?
A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.
Explanation: To provide the HPC cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices, a solutions architect should configure an Amazon FSx for Lustre file system and integrate it with an Amazon S3 bucket. This solution has the following benefits:
It allows the HPC cluster to access the data using a POSIX-compliant file system that is optimized for fast processing of large datasets.
It enables the data to be imported from the Snowball Edge devices into the S3 bucket using the AWS Snow Family Console or the AWS CLI, and then accessed from the FSx for Lustre file system through its S3 integration.
It supports high availability and durability of the data, as the FSx for Lustre file system can automatically copy data to and from the S3 bucket, and the data can also be accessed from other AWS services or applications through the S3 API.
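The FSx for Lustre/S3 integration described above can be sketched as a create-file-system request; the bucket name, subnet ID, and capacity below are hypothetical examples. With boto3 this dict would be passed as fsx.create_file_system(**params).

```python
# Illustrative parameters for an FSx for Lustre file system that is
# linked to the S3 bucket holding the imported Snowball Edge data.
params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 240000,  # GiB; sized with headroom for ~200 TB
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "LustreConfiguration": {
        "DeploymentType": "SCRATCH_2",
        # Link to the import bucket; file metadata is loaded at creation
        # and object contents are lazily loaded on first access.
        "ImportPath": "s3://example-survey-data",
    },
}
```

The HPC cluster instances then mount the file system with the Lustre client and read the survey data at sub-millisecond latency.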
A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience. Which service will improve the performance of both the real-time and on-demand streaming?
A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration
Explanation: You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services.
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution. What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Explanation: Amazon Kinesis Data Streams is a scalable and reliable service that can ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes and preserve the order of the incoming data records. AWS Lambda is a serverless compute service that can process the data streams from Kinesis Data Streams without requiring any infrastructure management. It can also scale automatically to match the throughput of the data stream. Amazon DynamoDB is a fully managed, highly available, and fast NoSQL database that can store the processed updates from Lambda. It can also handle high write throughput and provide consistent performance. By using these services, the solutions architect can design a solution that meets the requirements of the company with the least operational overhead.
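A minimal sketch of the Lambda side of this design is shown below. The event shape follows the documented Kinesis-to-Lambda integration (base64-encoded record data, delivered in order per shard); the payload fields and item attributes are hypothetical, and the DynamoDB write is omitted to keep the sketch self-contained.

```python
import base64
import json

def handler(event, context):
    """Decode Kinesis records into leaderboard items, in arrival order."""
    items = []
    for record in event["Records"]:  # records within a shard arrive in order
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        items.append({"player_id": payload["player"], "score": payload["score"]})
    # In a real function, each item would be written to DynamoDB here,
    # e.g. with boto3's Table.put_item; omitted in this sketch.
    return items
```

Lambda polls the stream and invokes this handler per shard, so ordering within a shard is preserved without any servers to manage.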
A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company's growth. A solutions architect must improve the application's infrastructure. Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
A. Migrate the PostgreSQL database to Amazon Aurora
B. Migrate the web application to be hosted on Amazon EC2 instances.
C. Set up an Amazon CloudFront distribution for the web application content.
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).
Explanation: Amazon Aurora is a fully managed, scalable, and highly available relational database service that is compatible with PostgreSQL. Migrating the database to Amazon Aurora would reduce the operational overhead of maintaining the database infrastructure and allow the company to focus on building and scaling the application. AWS Fargate is a fully managed container orchestration service that enables users to run containers without the need to manage the underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can improve the scalability and efficiency of the web application and reduce the operational overhead of maintaining the underlying infrastructure.