Topic 2: Exam Pool B
A company runs an application on a fleet of Amazon EC2 instances that are in private
subnets behind an internet-facing Application Load Balancer (ALB). The ALB is the origin for an Amazon CloudFront distribution. An AWS WAF web ACL that contains various AWS
managed rules is associated with the CloudFront distribution.
The company needs a solution that will prevent internet traffic from directly accessing the
ALB.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new web ACL that contains the same rules that the existing web ACL contains. Associate the new web ACL with the ALB.
B. Associate the existing web ACL with the ALB.
C. Add a security group rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only.
D. Add a security group rule to the ALB to allow only the various CloudFront IP address ranges.
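To illustrate the configuration that option C describes, here is a minimal boto3 sketch that looks up the AWS-managed CloudFront origin-facing prefix list and allows it in the ALB's security group. The security group ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Look up the AWS-managed prefix list of CloudFront origin-facing IP ranges.
    resp = ec2.describe_managed_prefix_lists(
        Filters=[{"Name": "prefix-list-name",
                  "Values": ["com.amazonaws.global.cloudfront.origin-facing"]}]
    )
    prefix_list_id = resp["PrefixLists"][0]["PrefixListId"]

    # Allow HTTPS to the ALB only from CloudFront; sg-0123456789abcdef0 is a
    # placeholder for the ALB's security group ID.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "PrefixListIds": [{"PrefixListId": prefix_list_id,
                               "Description": "CloudFront origin-facing only"}],
        }],
    )

Because AWS maintains the prefix list, the rule stays current as CloudFront IP ranges change, which is why this approach has less operational overhead than maintaining explicit IP ranges.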
A company is running an application that uses an Amazon ElastiCache for Redis cluster as
a caching layer. A recent security audit revealed that the company has configured
encryption at rest for ElastiCache. However, the company did not configure ElastiCache to
use encryption in transit. Additionally, users can access the cache without authentication.
A solutions architect must make changes to require user authentication and to ensure that
the company is using end-to-end encryption.
Which solution will meet these requirements?
A. Create an AUTH token. Store the token in AWS Systems Manager Parameter Store as an encrypted parameter. Create a new cluster with AUTH, and configure encryption in transit. Update the application to retrieve the AUTH token from Parameter Store when necessary and to use the AUTH token for authentication.
B. Create an AUTH token. Store the token in AWS Secrets Manager. Configure the existing cluster to use the AUTH token, and configure encryption in transit. Update the application to retrieve the AUTH token from Secrets Manager when necessary and to use the AUTH token for authentication.
C. Create an SSL certificate. Store the certificate in AWS Secrets Manager. Create a new cluster, and configure encryption in transit. Update the application to retrieve the SSL certificate from Secrets Manager when necessary and to use the certificate for authentication.
D. Create an SSL certificate. Store the certificate in AWS Systems Manager Parameter Store as an encrypted advanced parameter. Update the existing cluster to configure encryption in transit. Update the application to retrieve the SSL certificate from Parameter Store when necessary and to use the certificate for authentication.
Explanation: Creating an AUTH token, storing it in AWS Secrets Manager, configuring the
existing cluster to use the AUTH token and encryption in transit, and updating the
application to retrieve the AUTH token from Secrets Manager when necessary meets the
requirements for user authentication and end-to-end encryption.
AWS Secrets Manager is a service that enables you to easily rotate, manage, and retrieve
database credentials, API keys, and other secrets throughout their lifecycle. Secrets
Manager also enables you to encrypt the data and ensure that only authorized users and
applications can access it.
By configuring the existing cluster to use the AUTH token and encryption in transit, all data
will be encrypted as it is sent over the network, providing additional security for the data
stored in ElastiCache.
Additionally, updating the application to retrieve the AUTH token from Secrets Manager
when necessary and to use the token for authentication ensures that only authorized users
and applications can access the cache.
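As a rough sketch of the application-side change (the secret name and cluster endpoint below are assumptions), the application can fetch the AUTH token from Secrets Manager and open a TLS connection with the redis-py client:

    import boto3
    import redis  # redis-py client

    secrets = boto3.client("secretsmanager")
    token = secrets.get_secret_value(
        SecretId="prod/elasticache/auth-token"  # assumed secret name
    )["SecretString"]

    cache = redis.Redis(
        host="my-cluster.xxxxxx.use1.cache.amazonaws.com",  # placeholder endpoint
        port=6379,
        ssl=True,        # encryption in transit
        password=token,  # Redis AUTH token for authentication
    )
    cache.ping()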
A company uses AWS Organizations for a multi-account setup in the AWS Cloud. The
company's finance team has a data processing application that uses AWS Lambda and Amazon DynamoDB. The company's marketing team wants to access the data that is
stored in the DynamoDB table.
The DynamoDB table contains confidential data. The marketing team can have access to
only specific attributes of data in the DynamoDB table. The finance team and the
marketing team have separate AWS accounts.
What should a solutions architect do to provide the marketing team with the appropriate
access to the DynamoDB table?
A. Create an SCP to grant the marketing team's AWS account access to the specific attributes of the DynamoDB table. Attach the SCP to the OU of the finance team.
B. Create an IAM role in the finance team's account by using IAM policy conditions for specific DynamoDB attributes (fine-grained access control). Establish trust with the marketing team's account. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.
C. Create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control). Attach the policy to the DynamoDB table. In the marketing team's account, create an IAM role that has permissions to access the DynamoDB table in the finance team's account.
D. Create an IAM role in the finance team's account to access the DynamoDB table. Use an IAM permissions boundary to limit the access to the specific attributes. In the marketing team's account, create an IAM role that has permissions to assume the IAM role in the finance team's account.
Explanation: The company should create a resource-based IAM policy that includes conditions for specific DynamoDB attributes (fine-grained access control) and attach the policy to the DynamoDB table. In the marketing team's account, the company should create an IAM role that has permissions to access the DynamoDB table in the finance team's account. A resource-based IAM policy is a policy that you attach to an AWS resource (such as a DynamoDB table) to control who can access that resource and what actions they can perform on it. You can use IAM policy conditions to specify fine-grained access control for DynamoDB items and attributes; for example, you can allow or deny access to specific attributes of all items in a table by matching on attribute names. By creating a resource-based policy that allows access to only specific attributes of the DynamoDB table and attaching it to the table, the company can restrict access to confidential data. By creating an IAM role in the marketing team's account that has permissions to access the DynamoDB table in the finance team's account, the company can enable cross-account access.
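A minimal sketch of such a resource-based policy, applied with boto3. The account IDs, table name, role name, and attribute names are all placeholders chosen for illustration.

    import json
    import boto3

    TABLE_ARN = "arn:aws:dynamodb:us-east-1:111122223333:table/FinanceData"  # placeholder

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Role in the marketing team's account (placeholder ARN).
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/MarketingReadRole"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
            "Condition": {
                # Fine-grained access control: only these attributes are readable.
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ["CustomerId", "Region", "Segment"]
                },
                "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"}
            }
        }]
    }

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    dynamodb.put_resource_policy(ResourceArn=TABLE_ARN, Policy=json.dumps(policy))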
A solutions architect must provide a secure way for a team of cloud engineers to use the
AWS CLI to upload objects into an Amazon S3 bucket. Each cloud engineer has an IAM
user, IAM access keys, and a virtual multi-factor authentication (MFA) device. The IAM
users for the cloud engineers are in a group that is named S3-access. The cloud engineers
must use MFA to perform any actions in Amazon S3.
Which solution will meet these requirements?
A. Attach a policy to the S3 bucket to prompt the IAM user for an MFA code when the IAM user performs actions on the S3 bucket. Use IAM access keys with the AWS CLI to call Amazon S3.
B. Update the trust policy for the S3-access group to require principals to use MFA when principals assume the group. Use IAM access keys with the AWS CLI to call Amazon S3.
C. Attach a policy to the S3-access group to deny all S3 actions unless MFA is present. Use IAM access keys with the AWS CLI to call Amazon S3.
D. Attach a policy to the S3-access group to deny all S3 actions unless MFA is present. Request temporary credentials from AWS Security Token Service (AWS STS). Attach the temporary credentials in a profile that Amazon S3 will reference when the user performs actions in Amazon S3.
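To illustrate the approach in option D, here is a sketch of the deny-unless-MFA group policy and the STS call that exchanges long-term keys plus an MFA code for temporary credentials. The MFA device ARN and token code are placeholders.

    import json
    import boto3

    # Deny every S3 action unless the request was authenticated with MFA.
    mfa_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyS3WithoutMFA",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
        }]
    }

    iam = boto3.client("iam")
    iam.put_group_policy(
        GroupName="S3-access",
        PolicyName="RequireMFAForS3",
        PolicyDocument=json.dumps(mfa_policy),
    )

    # Each engineer requests temporary credentials with an MFA code; these go
    # into an AWS CLI profile that is used for S3 commands.
    sts = boto3.client("sts")
    creds = sts.get_session_token(
        SerialNumber="arn:aws:iam::111122223333:mfa/engineer",  # placeholder
        TokenCode="123456",  # current code from the virtual MFA device
    )["Credentials"]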
A solutions architect is reviewing a company's process for taking snapshots of Amazon
RDS DB instances. The company takes automatic snapshots every day and retains the
snapshots for 7 days.
The solutions architect needs to recommend a solution that takes snapshots every 6 hours
and retains the snapshots for 30 days. The company uses AWS Organizations to manage
all of its AWS accounts. The company needs a consolidated view of the health of the RDS
snapshots.
Which solution will meet these requirements with the LEAST operational overhead?
A. Turn on the cross-account management feature in AWS Backup. Create a backup plan that specifies the frequency and retention requirements. Add a tag to the DB instances. Apply the backup plan by using tags. Use AWS Backup to monitor the status of the backups.
B. Turn on the cross-account management feature in Amazon RDS. Create a snapshot global policy that specifies the frequency and retention requirements. Use the RDS console in the management account to monitor the status of the backups.
C. Turn on the cross-account management feature in AWS CloudFormation. From the management account, deploy a CloudFormation stack set that contains a backup plan from AWS Backup that specifies the frequency and retention requirements. Create an AWS Lambda function in the management account to monitor the status of the backups. Create an Amazon EventBridge rule in each account to run the Lambda function on a schedule.
D. Configure AWS Backup in each account. Create an Amazon Data Lifecycle Manager lifecycle policy that specifies the frequency and retention requirements. Specify the DB instances as the target resource. Use the Amazon Data Lifecycle Manager console in each member account to monitor the status of the backups.
Explanation: Turning on the cross-account management feature in AWS Backup enables managing and monitoring backups across multiple AWS accounts that belong to the same organization in AWS Organizations. Creating a backup plan that specifies the frequency and retention requirements enables taking snapshots every 6 hours and retaining them for 30 days. Adding a tag to the DB instances enables applying the backup plan by using tags. Using AWS Backup to monitor the status of the backups provides a consolidated view of the health of the RDS snapshots.
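A rough boto3 sketch of such a backup plan and tag-based selection. The vault name, tag key and value, and the IAM role ARN are assumptions.

    import boto3

    backup = boto3.client("backup")

    plan = backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "rds-6h-30d",
        "Rules": [{
            "RuleName": "every-6-hours",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 0/6 * * ? *)",  # every 6 hours
            "Lifecycle": {"DeleteAfterDays": 30},         # retain for 30 days
        }],
    })

    # Select the DB instances by tag (placeholder tag and service role ARN).
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "rds-by-tag",
            "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
            "ListOfTags": [{
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup-plan",
                "ConditionValue": "rds-6h-30d",
            }],
        },
    )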
A company uses AWS Organizations with a single OU named Production to manage
multiple accounts. All accounts are members of the Production OU. Administrators use
deny list SCPs in the root of the organization to manage access to restricted services.
The company recently acquired a new business unit and invited the new unit's existing
AWS account to the organization. Once onboarded, the administrators of the new business
unit discovered that they are not able to update existing AWS Config rules to meet the
company's policies.
Which option will allow administrators to make changes and continue to enforce the current
policies without introducing additional long-term maintenance?
A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard AWS Config rules and deploy them throughout the organization, including the new account.
B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete.
C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization's root that allows AWS Config actions for principals only in the new account.
D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.
Explanation: An SCP at a lower level cannot add a permission after it is blocked by an SCP at a higher level; SCPs can only filter, they never add permissions. So the administrators need to create a new OU for the new account, apply an SCP that allows AWS Config actions, and move the root SCP to the Production OU. The new account can then be moved to the Production OU when the AWS Config adjustments are complete.
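A minimal sketch of re-scoping the SCP with boto3 (all IDs below are placeholders). Attaching to the Production OU before detaching from the root avoids a window in which no restrictions apply.

    import boto3

    orgs = boto3.client("organizations")

    DENY_SCP_ID = "p-examplepolicy1"     # placeholder deny-list SCP ID
    ROOT_ID = "r-examp"                  # placeholder organization root ID
    PRODUCTION_OU_ID = "ou-examp-prod1"  # placeholder Production OU ID

    # Attach the restrictive SCP to the Production OU first, then remove it
    # from the root so the new account (in the Onboarding OU) is unblocked.
    orgs.attach_policy(PolicyId=DENY_SCP_ID, TargetId=PRODUCTION_OU_ID)
    orgs.detach_policy(PolicyId=DENY_SCP_ID, TargetId=ROOT_ID)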
A company wants to use AWS for disaster recovery for an on-premises application. The
company has hundreds of Windows-based servers that run the application. All the servers
mount a common share.
The company has an RTO of 15 minutes and an RPO of 5 minutes. The solution must
support native failover and fallback capabilities.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Storage Gateway File Gateway. Schedule daily Windows server backups. Save the data to Amazon S3. During a disaster, recover the on-premises servers from the backup. During failback, run the on-premises servers on Amazon EC2 instances.
B. Create a set of AWS CloudFormation templates to create infrastructure. Replicate all data to Amazon Elastic File System (Amazon EFS) by using AWS DataSync. During a disaster, use AWS CodePipeline to deploy the templates to restore the on-premises servers. Fail back the data by using DataSync.
C. Create an AWS Cloud Development Kit (AWS CDK) pipeline to stand up a multi-site active-active environment on AWS. Replicate data into Amazon S3 by using the s3 sync command. During a disaster, swap DNS endpoints to point to AWS. Fail back the data by using the s3 sync command.
D. Use AWS Elastic Disaster Recovery to replicate the on-premises servers. Replicate data to an Amazon FSx for Windows File Server file system by using AWS DataSync. Mount the file system to AWS servers. During a disaster, fail over the on-premises servers to AWS. Fail back to new or existing servers by using Elastic Disaster Recovery.
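To illustrate the DataSync portion of option D, here is a sketch that replicates the common SMB share to FSx for Windows File Server. Every ARN, hostname, and credential below is a placeholder. Note that DataSync task schedules run hourly at the most frequent, while Elastic Disaster Recovery replicates the servers themselves continuously to satisfy the tight RPO.

    import boto3

    datasync = boto3.client("datasync", region_name="us-east-1")

    # Source: the on-premises SMB share that all application servers mount.
    src = datasync.create_location_smb(
        ServerHostname="fileserver.corp.example.com",  # placeholder
        Subdirectory="/share",
        User="Admin",
        Password="EXAMPLE-PASSWORD",
        AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
    )

    # Destination: the FSx for Windows File Server file system.
    dest = datasync.create_location_fsx_windows(
        FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
        SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"],
        User="Admin",
        Password="EXAMPLE-PASSWORD",
    )

    # Replicate the share on a recurring schedule.
    datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dest["LocationArn"],
        Name="share-replication",
        Schedule={"ScheduleExpression": "rate(1 hour)"},
    )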
A company is implementing a serverless architecture by using AWS Lambda functions that
need to access a Microsoft SQL Server DB instance on Amazon RDS. The company has
separate environments for development and production, including a clone of the database
system.
The company's developers are allowed to access the credentials for the development
database. However, the credentials for the production database must be encrypted with a
key that only members of the IT security team's IAM user group can access. This key must
be rotated on a regular basis.
What should a solutions architect do in the production environment to meet these
requirements?
A. Store the database credentials in AWS Systems Manager Parameter Store by using a SecureString parameter that is encrypted by an AWS Key Management Service (AWS KMS) customer managed key. Attach a role to each Lambda function to provide access to the SecureString parameter. Restrict access to the SecureString parameter and the customer managed key so that only the IT security team can access the parameter and the key.
B. Encrypt the database credentials by using the AWS Key Management Service (AWS KMS) default Lambda key. Store the credentials in the environment variables of each Lambda function. Load the credentials from the environment variables in the Lambda code. Restrict access to the KMS key so that only the IT security team can access the key.
C. Store the database credentials in the environment variables of each Lambda function. Encrypt the environment variables by using an AWS Key Management Service (AWS KMS) customer managed key. Restrict access to the customer managed key so that only the IT security team can access the key.
D. Store the database credentials in AWS Secrets Manager as a secret that is associated with an AWS Key Management Service (AWS KMS) customer managed key. Attach a role to each Lambda function to provide access to the secret. Restrict access to the secret and the customer managed key so that only the IT security team can access the secret and the key.
Explanation: Storing the database credentials in AWS Secrets Manager as a secret that is associated with an AWS Key Management Service (AWS KMS) customer managed key enables encrypting and managing the credentials securely. AWS Secrets Manager helps you securely encrypt, store, and retrieve credentials for your databases and other services, and supports rotating the encryption key on a regular basis. Attaching a role to each Lambda function to provide access to the secret enables retrieving the credentials programmatically. Restricting access to the secret and the customer managed key so that only members of the IT security team's IAM user group can access them meets the security requirements.
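A minimal sketch of the Lambda side (the secret ARN environment variable and the JSON field names are assumptions). The function's execution role needs secretsmanager:GetSecretValue on the secret and kms:Decrypt on the customer managed key.

    import json
    import os
    import boto3

    secrets = boto3.client("secretsmanager")

    def handler(event, context):
        # Retrieve the production credentials at invocation time.
        secret = secrets.get_secret_value(SecretId=os.environ["DB_SECRET_ARN"])
        creds = json.loads(secret["SecretString"])
        # ... connect to the SQL Server DB instance with creds["username"]
        # and creds["password"] ...
        return {"statusCode": 200}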
An education company is running a web application used by college students around the
world. The application runs in an Amazon Elastic Container Service (Amazon ECS) cluster
in an Auto Scaling group behind an Application Load Balancer (ALB). A system
administrator detected a weekly spike in the number of failed login attempts, which
overwhelm the application’s authentication service. All the failed login attempts originate
from about 500 different IP addresses that change each week. A solutions architect must
prevent the failed login attempts from overwhelming the authentication service.
Which solution meets these requirements with the MOST operational efficiency?
A. Use AWS Firewall Manager to create a security group and security group policy to deny access from the IP addresses.
B. Create an AWS WAF web ACL with a rate-based rule, and set the rule action to Block. Connect the web ACL to the ALB.
C. Use AWS Firewall Manager to create a security group and security group policy to allow access only to specific CIDR ranges.
D. Create an AWS WAF web ACL with an IP set match rule, and set the rule action to Block. Connect the web ACL to the ALB.
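To illustrate option B, here is a boto3 sketch of a regional web ACL with a rate-based blocking rule. The 500-request limit and the load balancer ARN are assumptions.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    acl = wafv2.create_web_acl(
        Name="login-rate-limit",
        Scope="REGIONAL",  # REGIONAL scope is required for an ALB
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "login-rate-limit",
        },
        Rules=[{
            "Name": "block-bursty-ips",
            "Priority": 0,
            "Action": {"Block": {}},
            "Statement": {
                "RateBasedStatement": {
                    # Block any single IP that exceeds this many requests in a
                    # 5-minute window; 500 is an assumed threshold.
                    "Limit": 500,
                    "AggregateKeyType": "IP",
                }
            },
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "block-bursty-ips",
            },
        }],
    )

    # Associate the web ACL with the ALB (placeholder load balancer ARN).
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",
    )

Because the rule keys on request rate per IP rather than a static IP set, it keeps working when the offending addresses change each week.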
A company has deployed its database on an Amazon RDS for MySQL DB instance in the
us-east-1 Region. The company needs to make its data available to customers in Europe.
The customers in Europe must have access to the same data as customers in the United
States (US) and will not tolerate high application latency or stale data. The customers in
Europe and the customers in the US need to write to the database. Both groups of
customers need to see updates from the other group in real time.
Which solution will meet these requirements?
A. Create an Amazon Aurora MySQL replica of the RDS for MySQL DB instance. Pause application writes to the RDS DB instance. Promote the Aurora Replica to a standalone DB cluster. Reconfigure the application to use the Aurora database and resume writes. Add eu-west-1 as a secondary Region to the DB cluster. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the Aurora MySQL endpoint in eu-west-1.
B. Add a cross-Region replica in eu-west-1 for the RDS for MySQL DB instance. Configure the replica to replicate write queries back to the primary DB instance. Deploy the application in eu-west-1. Configure the application to use the RDS for MySQL endpoint in eu-west-1.
C. Copy the most recent snapshot from the RDS for MySQL DB instance to eu-west-1. Create a new RDS for MySQL DB instance in eu-west-1 from the snapshot. Configure MySQL logical replication from us-east-1 to eu-west-1. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the RDS for MySQL endpoint in eu-west-1.
D. Convert the RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster. Add eu-west-1 as a secondary Region to the DB cluster. Enable write forwarding on the DB cluster. Deploy the application in eu-west-1. Configure the application to use the Aurora MySQL endpoint in eu-west-1.
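A sketch of the global-database steps in option D, after the instance has been converted to an Aurora MySQL DB cluster. All identifiers below are placeholders.

    import boto3

    # Promote the converted Aurora cluster to a global database.
    rds_us = boto3.client("rds", region_name="us-east-1")
    rds_us.create_global_cluster(
        GlobalClusterIdentifier="app-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-primary",
    )

    # Add eu-west-1 as a secondary Region with write forwarding enabled, so
    # writes issued in Europe are forwarded to the primary and replicated back.
    rds_eu = boto3.client("rds", region_name="eu-west-1")
    rds_eu.create_db_cluster(
        DBClusterIdentifier="app-eu",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
        EnableGlobalWriteForwarding=True,
    )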
A solutions architect needs to assess a newly acquired company’s portfolio of applications
and databases. The solutions architect must create a business case to migrate the portfolio
to AWS. The newly acquired company runs applications in an on-premises data center.
The data center is not well documented. The solutions architect cannot immediately
determine how many applications and databases exist. Traffic for the applications is
variable. Some applications are batch processes that run at the end of each month.
The solutions architect must gain a better understanding of the portfolio before a migration
to AWS can begin.
Which solution will meet these requirements?
A. Use AWS Server Migration Service (AWS SMS) and AWS Database Migration Service (AWS DMS) to evaluate migration. Use AWS Service Catalog to understand application and database dependencies.
B. Use AWS Application Migration Service. Run agents on the on-premises infrastructure. Manage the agents by using AWS Migration Hub. Use AWS Storage Gateway to assess local storage needs and database dependencies.
C. Use Migration Evaluator to generate a list of servers. Build a report for a business case. Use AWS Migration Hub to view the portfolio. Use AWS Application Discovery Service to gain an understanding of application dependencies.
D. Use AWS Control Tower in the destination account to generate an application portfolio. Use AWS Server Migration Service (AWS SMS) to generate deeper reports and a business case. Use a landing zone for core accounts and resources.
Explanation: The company should use Migration Evaluator to generate a list of servers and build a report for a business case. The company should use AWS Migration Hub to view the portfolio and use AWS Application Discovery Service to gain an understanding of application dependencies. Migration Evaluator is a migration assessment service that helps create a data-driven business case for AWS cloud planning and migration. It provides a clear baseline of what the company is running today and projects AWS costs based on measured on-premises provisioning and utilization. Migration Evaluator can use an agentless collector to conduct broad-based discovery or securely upload exports from existing inventory tools. Migration Evaluator integrates with AWS Migration Hub, which provides a single location to track the progress of application migrations across multiple AWS and partner solutions. Migration Hub also supports AWS Application Discovery Service, which helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers, their associated dependencies, and their performance profiles.
A company needs to architect a hybrid DNS solution. This solution will use an Amazon
Route 53 private hosted zone for the domain cloud.example.com for the resources stored
within VPCs.
The company has the following DNS resolution requirements:
• On-premises systems should be able to resolve and connect to cloud.example.com.
• All VPCs should be able to resolve cloud.example.com.
There is already an AWS Direct Connect connection between the on-premises corporate
network and AWS Transit Gateway. Which architecture should the company use to meet
these requirements with the HIGHEST performance?
A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder.
C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver.
D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
Explanation: Amazon Route 53 Resolver is a managed DNS resolver service from Route 53 that helps create conditional forwarding rules to redirect query traffic. By associating the private hosted zone with all the VPCs, the solutions architect can enable DNS resolution for cloud.example.com within the VPCs. By creating a Route 53 inbound resolver in the shared services VPC, the solutions architect can enable DNS resolution for cloud.example.com from on-premises systems. By attaching all VPCs to the transit gateway, the solutions architect can enable connectivity between the VPCs and the on-premises network through AWS Direct Connect. By creating forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver, the solutions architect can direct DNS queries for cloud.example.com to the Route 53 Resolver endpoint in AWS. This solution provides the highest performance because it leverages Route 53 Resolver's optimized routing and caching capabilities.
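A minimal boto3 sketch of the inbound resolver endpoint (subnet and security group IDs are placeholders). The on-premises DNS server's forwarding rules for cloud.example.com would then target the endpoint's IP addresses.

    import boto3

    resolver = boto3.client("route53resolver", region_name="us-east-1")

    # Inbound endpoint in the shared services VPC; on-premises DNS servers
    # forward queries for cloud.example.com to its IP addresses.
    resolver.create_resolver_endpoint(
        CreatorRequestId="shared-services-inbound-1",
        Name="shared-services-inbound",
        Direction="INBOUND",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        IpAddresses=[
            {"SubnetId": "subnet-0123456789abcdef0"},
            {"SubnetId": "subnet-0fedcba9876543210"},
        ],
    )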