Topic 2: Exam Pool B
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is installed on each instance to perform SSL termination. There has been an increase in traffic recently, and the operations team has determined that SSL encryption and decryption is causing the compute capacity of the web servers to reach its maximum limit. What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
B. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
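For option D, a minimal boto3 sketch of the change, assuming the certificate files already exist locally and using placeholder subnet, security group, VPC, and instance IDs; the ALB terminates TLS with the imported ACM certificate so the instances no longer spend compute capacity on encryption and decryption:

import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate into ACM (placeholder file names).
with open("cert.pem", "rb") as c, open("key.pem", "rb") as k, open("chain.pem", "rb") as ch:
    cert_arn = acm.import_certificate(
        Certificate=c.read(), PrivateKey=k.read(), CertificateChain=ch.read()
    )["CertificateArn"]

# Create an internet-facing ALB in two existing subnets (placeholder IDs).
lb_arn = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]["LoadBalancerArn"]

# Target group for the two EC2 instances; the ALB terminates TLS and forwards HTTP.
tg_arn = elbv2.create_target_group(
    Name="web-targets", Protocol="HTTP", Port=80, VpcId="vpc-0123456789abcdef0"
)["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa111122223333a"}, {"Id": "i-0bbb444455556666b"}],
)

# HTTPS listener that uses the imported ACM certificate for SSL termination.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)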
A company uses multiple vendors to distribute digital assets that are stored in Amazon S3 buckets. The company wants to ensure that its vendor AWS accounts have the minimum access that is needed to download objects in these S3 buckets. Which solution will meet these requirements with the LEAST operational overhead?
A. Design a bucket policy that has anonymous read permissions and permissions to list all buckets.
B. Design a bucket policy that gives read-only access to users. Specify IAM entities as principals.
C. Create a cross-account IAM role that has a read-only access policy specified for the IAM role.
D. Create a user policy and vendor user groups that give read-only access to vendor users.
Explanation: A cross-account IAM role is a way to grant users from one AWS account access to resources in another AWS account. The cross-account IAM role can have a read-only access policy attached to it, which allows the users to download objects from the S3 buckets without modifying or deleting them. The cross-account IAM role also reduces the operational overhead of managing multiple IAM users and policies in each account. The cross-account IAM role meets all the requirements of the question, while the other options do not.
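A minimal boto3 sketch of such a cross-account role, using a placeholder vendor account ID, role name, and bucket name; the trust policy lets the vendor account assume the role, and the inline policy limits it to read-only access on the asset bucket:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets the vendor AWS account (placeholder ID) assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="vendor-s3-readonly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: read-only access limited to the asset bucket (placeholder name).
readonly_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::digital-assets-bucket",
            "arn:aws:s3:::digital-assets-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="vendor-s3-readonly",
    PolicyName="s3-readonly",
    PolicyDocument=json.dumps(readonly_policy),
)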
A company wants to create an application to store employee data in a hierarchical structured relationship. The company needs a minimum-latency response to high-traffic queries for the employee data and must protect any sensitive data. The company also needs to receive monthly email messages if any financial information is present in the employee data. Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)
A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards and share the dashboards with users.
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.
Explanation: Generally, for building a hierarchical relationship model, a graph database such as Amazon Neptune is a better choice. In some cases, however, DynamoDB is a better choice for hierarchical data modeling because of its flexibility, security, performance at scale for high-traffic queries, and built-in export to Amazon S3. Amazon Macie can then scan the exported data in Amazon S3 for sensitive financial information, and its findings can be routed through Amazon EventBridge to an Amazon SNS topic to deliver the monthly email notifications.
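A brief sketch of the hierarchical (adjacency-list) modeling pattern in DynamoDB, assuming a table named EmployeeData with generic PK/SK composite keys (all names and values below are placeholders); a single Query on the partition key returns an employee together with all nested records:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("EmployeeData")  # assumed table with PK/SK composite keys

# Adjacency-list style items: the sort key encodes the hierarchy path, so one Query
# on the partition key returns an employee and every nested record beneath it.
table.put_item(Item={"PK": "ORG#engineering", "SK": "EMP#1001",
                     "name": "A. Example", "title": "Engineer"})
table.put_item(Item={"PK": "ORG#engineering", "SK": "EMP#1001#PAYROLL#2024-01",
                     "salary_band": "L4"})

# Low-latency, high-traffic read: fetch the employee and all child records at once.
response = table.query(
    KeyConditionExpression=Key("PK").eq("ORG#engineering")
    & Key("SK").begins_with("EMP#1001")
)
print(response["Items"])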
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the two VPCs. What should a solutions architect do to mitigate any single point of failure in this architecture?
A. Add a set of VPNs between the Management and Production VPCs
B. Add a second virtual private gateway and attach it to the Management VPC
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.
Explanation: This answer is correct because it provides redundancy for the VPN connection between the Management VPC and the data center. If one customer gateway device or one VPN tunnel becomes unavailable, the traffic can still flow over the second customer gateway device and the second VPN tunnel. This way, the single point of failure in the VPN connection is mitigated.
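A minimal boto3 sketch of adding that redundancy, using a placeholder public IP, BGP ASN, and virtual private gateway ID for the Management VPC:

import boto3

ec2 = boto3.client("ec2")

# Second customer gateway device in the data center (placeholder public IP and ASN).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1"
)["CustomerGateway"]

# Second set of VPN tunnels from that device to the Management VPC's virtual
# private gateway (placeholder gateway ID).
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",
)
print(vpn["VpnConnection"]["VpnConnectionId"])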
A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year. Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Select TWO.)
A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.
B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.
C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.
D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.
E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Explanation: S3 Object Lock in compliance mode ensures that no user, including the root user of the AWS account, can overwrite or delete the documents during the 5-year retention period, while a customer managed AWS KMS key supports automatic annual key rotation. Consider using the default aws/s3 KMS key if: you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM) principals that are in the same AWS account as the AWS KMS key, and you don't want to manage policies for the KMS key. Consider using a customer managed key if: you want to create, rotate, disable, or define access controls for the key, or you want to grant cross-account access to your S3 objects (you can configure the policy of a customer managed key to allow access from another account). https://repost.aws/knowledge-center/s3-object-encryption-keys
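A minimal boto3 sketch of the combination in options B and D, using a placeholder bucket name; note that outside us-east-1, create_bucket would also need a CreateBucketConfiguration with a LocationConstraint:

import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Customer managed KMS key with automatic annual rotation.
key_id = kms.create_key(Description="Contract document encryption key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Bucket created with Object Lock enabled (this must be set at creation time), then a
# default 5-year compliance-mode retention and KMS default encryption are applied.
s3.create_bucket(Bucket="contract-documents-example", ObjectLockEnabledForBucket=True)
s3.put_object_lock_configuration(
    Bucket="contract-documents-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)
s3.put_bucket_encryption(
    Bucket="contract-documents-example",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms", "KMSMasterKeyID": key_id}}]
    },
)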
A solutions architect has created a new AWS account and must secure AWS account root user access. Which combination of actions will accomplish this? (Choose two.)
A. Ensure the root user uses a strong password.
B. Enable multi-factor authentication to the root user.
C. Store root user access keys in an encrypted Amazon S3 bucket
D. Add the root user to a group containing administrative permissions.
E. Apply the required permissions to the root user with an inline policy document.
Explanation: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html https://docs.aws.amazon.com/accounts/latest/reference/best-practices-root-user.html * Enable AWS multi-factor authentication (MFA) on your AWS account root user. For more information, see Using multi-factor authentication (MFA) in AWS in the IAM User Guide. * Never share your AWS account root user password or access keys with anyone. * Use a strong password to help protect access to the AWS Management Console. For information about managing your AWS account root user password, see Changing the password for the root user.
A company wants to migrate its three-tier application from on premises to AWS. The web tier and the application tier are running on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making the fewest possible changes to the architecture. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
D. Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the database tier to Amazon Aurora MySQL in public subnets.
Explanation: The solution that meets the requirements with the least operational overhead is to migrate the web tier to Amazon EC2 instances in public subnets, migrate the application tier to EC2 instances in private subnets, and migrate the database tier to Amazon RDS for MySQL in private subnets. This solution allows the company to migrate its three-tier application to AWS by making minimal changes to the architecture, as it preserves the same web, application, and database tiers and uses the same MySQL database engine. The solution also provides a database solution that can restore data to a specific point in time, as Amazon RDS for MySQL supports automated backups and point-in-time recovery. This solution also reduces the operational overhead by using managed services such as Amazon EC2 and Amazon RDS, which handle tasks such as provisioning, patching, scaling, and monitoring. The other solutions do not meet the requirements as well as the first one because they either involve more changes to the architecture, do not provide point-in-time recovery, or do not follow best practices for security and availability. Migrating the database tier to Amazon Aurora MySQL would require changing the database engine and potentially modifying the application code to ensure compatibility. Migrating the web tier and the application tier to public subnets would expose them to more security risks and reduce their availability in case of a subnet failure. Migrating the database tier to public subnets would also compromise its security and performance.
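For the point-in-time recovery requirement, a minimal boto3 sketch using placeholder instance identifiers, restore time, instance class, and subnet group:

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Restore the MySQL instance to a specific point in time (all values are placeholders).
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="app-mysql-prod",
    TargetDBInstanceIdentifier="app-mysql-restored",
    RestoreTime=datetime(2024, 6, 1, 3, 30, tzinfo=timezone.utc),
    DBInstanceClass="db.m5.large",
    DBSubnetGroupName="private-db-subnets",
)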
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day. Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Explanation: This option is the most cost-effective solution because it leverages the Spot Instances, which are unused EC2 instances that are available at up to 90% discount compared to On-Demand prices. Spot Instances can be interrupted by AWS when the demand for On-Demand instances increases, but since the batch jobs are fault-tolerant and can be reprocessed by another instance, this is not a major issue. By using a launch template, the company can specify the configuration of the Spot Instances, such as the instance type, the operating system, and the user data. By using an Auto Scaling group, the company can automatically scale the number of Spot Instances based on the CPU usage, which reflects the load of the batch jobs. This way, the company can optimize the performance and the cost of the EC2 instances for the nightly batch jobs.
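A minimal boto3 sketch of option C, using a placeholder AMI ID, instance type, and subnet IDs; the launch template requests Spot capacity and the Auto Scaling group scales out on average CPU utilization:

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template that requests Spot Instances (placeholder AMI and instance type).
lt = ec2.create_launch_template(
    LaunchTemplateName="batch-spot-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "c5.large",
        "InstanceMarketOptions": {"MarketType": "spot"},
    },
)["LaunchTemplate"]

# Auto Scaling group that uses the template, plus a target tracking policy on CPU.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="batch-asg",
    LaunchTemplate={"LaunchTemplateId": lt["LaunchTemplateId"], "Version": "$Latest"},
    MinSize=0, MaxSize=10, DesiredCapacity=0,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-asg",
    PolicyName="cpu-scale-out",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)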
A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service is not compatible with Security Assertion Markup Language (SAML). Which solution meets these requirements?
A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
B. Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP.
C. Set up a process that rotates the IAM credentials whenever LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to get short-lived credentials.
Explanation: The solution that meets the requirements is to develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to get short-lived credentials. This solution allows the company to use its existing LDAP directory service to authenticate its users to the AWS Management Console, without requiring SAML compatibility. The custom identity broker application or process can act as a proxy between the LDAP directory service and AWS STS, and can request temporary security credentials for the users based on their LDAP attributes and roles. The users can then use these credentials to access the AWS Management Console via a sign-in URL generated by the identity broker. This solution also enhances security by using short-lived credentials that expire after a specified duration. The other solutions do not meet the requirements because they either require SAML compatibility or do not provide access to the AWS Management Console. Enabling AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP would require the LDAP directory service to support SAML 2.0, which is not the case for this scenario. Creating an IAM policy that uses AWS credentials and integrating the policy into LDAP would not provide access to the AWS Management Console, but only to the AWS APIs. Setting up a process that rotates the IAM credentials whenever LDAP credentials are updated would also not provide access to the AWS Management Console, but only to the AWS CLI. Therefore, these solutions are not suitable for the given requirements.
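A minimal sketch of the identity broker pattern with boto3 and the AWS federation sign-in endpoint, assuming the LDAP check has already succeeded and using a placeholder role ARN and session name:

import json
import urllib.parse
import urllib.request
import boto3

sts = boto3.client("sts")

# After the broker validates a user against LDAP, it requests short-lived credentials
# for a role mapped to that user (placeholder role ARN and session name).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ldap-console-user",
    RoleSessionName="ldap-user-jdoe",
    DurationSeconds=3600,
)["Credentials"]

# Exchange the temporary credentials for a sign-in token at the AWS federation endpoint.
session_json = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
token_url = ("https://signin.aws.amazon.com/federation?Action=getSigninToken&Session="
             + urllib.parse.quote_plus(session_json))
signin_token = json.load(urllib.request.urlopen(token_url))["SigninToken"]

# Console sign-in URL that the broker hands back to the authenticated user.
login_url = ("https://signin.aws.amazon.com/federation?Action=login"
             "&Destination=" + urllib.parse.quote_plus("https://console.aws.amazon.com/")
             + "&SigninToken=" + signin_token)
print(login_url)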
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company's security team wants to protect the application and the database from SQL injection and other web-based attacks. Which solution will meet these requirements with the LEAST operational overhead?
A. Use security groups and network ACLs to secure the database and application servers.
B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
C. Use AWS Network Firewall to protect the application and the database.
D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.
Explanation: AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF allows users to create rules that block, allow, or count web requests based on customizable web security rules. One of the types of rules that can be created is a SQL injection match rule, which inspects parts of the request, such as the URI, query string, headers, and body, for malicious SQL code and blocks matching requests. By using AWS WAF to protect the application, the company can prevent SQL injection and other web-based attacks from reaching the application and the database. RDS parameter groups are collections of parameters that define how a database instance operates. Users can modify the parameters in a parameter group to change the behavior and performance of the database. By using RDS parameter groups to configure the security settings, the company can enforce best practices such as disabling remote root login, requiring SSL connections, and limiting the maximum number of connections. The other options are not correct because they do not effectively protect the application and the database from SQL injection and other web-based attacks. Using security groups and network ACLs to secure the database and application servers is not sufficient because they only filter traffic at the network layer, not at the application layer. Using AWS Network Firewall to protect the application and the database is not necessary because it is a stateful firewall service that provides network protection for VPCs, not for individual applications or databases. Using different database accounts in the application code for different functions is a good practice, but it does not prevent SQL injection attacks from exploiting vulnerabilities in the application code.
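A minimal boto3 sketch of the WAF portion of option B, attaching the AWS managed SQL injection rule group to a regional web ACL and associating it with a placeholder Application Load Balancer ARN:

import boto3

wafv2 = boto3.client("wafv2")

# Regional web ACL that applies the AWS managed SQL injection rule group.
acl = wafv2.create_web_acl(
    Name="app-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "sqli-managed-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet"}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "sqli"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "app-protection"},
)["Summary"]

# Attach the web ACL to the Application Load Balancer in front of the EC2 instances
# (placeholder load balancer ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
)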
A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost-effective?
A. Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.
Explanation: AWS DataSync is a service that makes it easy to move large amounts of data online between on-premises storage and AWS storage services. AWS DataSync can transfer data at speeds up to 10 times faster than open-source tools by using a purpose-built network protocol and parallelizing data transfers. AWS DataSync also handles encryption, data integrity verification, and bandwidth optimization. To use AWS DataSync, users need to deploy a DataSync agent on their on-premises servers, which connects to the NFS servers and syncs the data to Amazon S3. Users can schedule periodic or one-time sync tasks and monitor the progress and status of the transfers. The other options are not correct because they are either not cost-effective or not suitable for the use case. Setting up AWS Glue to copy the data from the on-premises servers to Amazon S3 is not cost-effective because AWS Glue is a serverless data integration service that is mainly used for extract, transform, and load (ETL) operations, not for simple data backup. Setting up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3 is not cost-effective because AWS Transfer for SFTP is a fully managed service that provides secure file transfer using the SFTP protocol, which is more suitable for exchanging data with third parties than for backing up data. Setting up an AWS Direct Connect connection between the on-premises data center and a VPC, and copying the data to Amazon S3 is not cost-effective because AWS Direct Connect is a dedicated network connection between AWS and the on-premises location, which has high upfront costs and requires additional configuration.
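A minimal boto3 sketch of option B, using placeholder agent, NFS server, bucket, and role values; the task is scheduled so small amounts of data are synced periodically:

import boto3

datasync = boto3.client("datasync")

# NFS source location served through the on-premises DataSync agent (placeholder values).
src = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/backups",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)["LocationArn"]

# S3 destination location; the role must allow DataSync to write to the bucket.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::nfs-backups-example",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)["LocationArn"]

# Task that runs the periodic sync on a schedule (daily at 02:00 UTC here).
task = datasync.create_task(
    SourceLocationArn=src,
    DestinationLocationArn=dst,
    Name="nightly-nfs-backup",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
print(task["TaskArn"])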
The DNS provider that hosts a company's domain name records is experiencing outages that cause service disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS. What should a solutions architect do to rapidly migrate the DNS hosting service?
A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory Service for Microsoft Active Directory for the domain records.
D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the domain to the IP addresses that are specified in the inbound endpoint.
Explanation: To migrate the DNS hosting service to a more resilient managed DNS service on AWS, the company should use Amazon Route 53, which is a highly available and scalable cloud DNS web service. Route 53 can host public DNS records for the company’s domain name and provide reliable and secure DNS resolution. To rapidly migrate the DNS hosting service, the company should create a public hosted zone for the domain name in Route 53, which is a container for the domain’s DNS records. Then, the company should import the zone file containing the domain records hosted by the previous provider, which is a text file that defines the DNS records for the domain. This way, the company can quickly transfer the existing DNS records to Route 53 without manually creating them. After importing the zone file, the company should update the domain registrar to use the name servers that Route 53 assigns to the hosted zone. This will ensure that DNS queries for the domain name are routed to Route 53 and resolved by the imported records.
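A minimal boto3 sketch of option A, using a placeholder domain and record values; the zone-file import itself is a Route 53 console feature, so the sketch shows how individual records from the previous provider could be recreated programmatically:

import boto3

route53 = boto3.client("route53")

# Public hosted zone for the domain (placeholder name); CallerReference must be unique.
zone = route53.create_hosted_zone(
    Name="example.com", CallerReference="migration-2024-06-01"
)

# Recreate a record from the previous provider's zone file (placeholder values).
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "192.0.2.10"}],
        },
    }]},
)

# Name servers to configure at the domain registrar after the records are in place.
print(zone["DelegationSet"]["NameServers"])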