SAA-C03 Practice Test Questions

964 Questions


Topic 4: Exam Pool D

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data. Which combination of storage and caching should the solutions architect use?


A. Amazon S3 with Amazon CloudFront


B. Amazon S3 Glacier with Amazon ElastiCache


C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront


D. AWS Storage Gateway with Amazon ElastiCache





A.
  Amazon S3 with Amazon CloudFront

Explanation: To store and view engineering drawings with caching support, Amazon S3 and Amazon CloudFront are suitable solutions. Amazon S3 can store any amount of data with high durability, availability, and performance. Amazon CloudFront can distribute the engineering drawings to edge locations closer to the users, which can reduce the latency and improve the user experience. Amazon CloudFront can also cache the engineering drawings at the edge locations, which can minimize the amount of time that users wait for the drawings to load.
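
A minimal boto3 sketch of this pairing, assuming a hypothetical bucket name and the AWS managed CachingOptimized cache policy:

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical bucket; drawings are uploaded here and served via CloudFront.
bucket_domain = "engineering-drawings-example.s3.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Cache engineering drawings at edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "drawings-s3-origin",
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "drawings-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy ID
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # the edge-cached endpoint
```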

A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?


A. Create Amazon CloudWatch composite alarms where possible.


B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.


C. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.


D. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.





A.
  Create Amazon CloudWatch composite alarms where possible.

Explanation: Composite alarms determine their states by monitoring the states of other alarms. You can **use composite alarms to reduce alarm noise**. For example, you can create a composite alarm where the underlying metric alarms go into ALARM when they meet specific conditions. You then can set up your composite alarm to go into ALARM and send you notifications when the underlying metric alarms go into ALARM by configuring the underlying metric alarms never to take actions.
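
A minimal boto3 sketch of this pattern, assuming hypothetical instance and volume IDs, thresholds, and SNS topic; the two underlying alarms carry no actions, and only the composite alarm notifies:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Underlying metric alarm 1: CPU above 50% (no actions of its own).
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
)

# Underlying metric alarm 2: high read IOPS on the disk (threshold assumed).
cloudwatch.put_metric_alarm(
    AlarmName="high-read-iops",
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
)

# The composite alarm goes into ALARM only when BOTH alarms are in ALARM,
# which suppresses false alarms from short CPU bursts alone.
cloudwatch.put_composite_alarm(
    AlarmName="cpu-and-read-iops",
    AlarmRule='ALARM("high-cpu") AND ALARM("high-read-iops")',
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```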

A company has a stateless web application that runs on AWS Lambda functions that are invoked by Amazon API Gateway. The company wants to deploy the application across multiple AWS Regions to provide Regional failover capabilities. What should a solutions architect do to route traffic to multiple Regions?


A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.


B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to route traffic.


C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure the transit gateway to route requests.


D. Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway endpoint hostnames in each Region.





A.
  Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.

Explanation: Amazon Route 53 health checks can monitor each Regional API Gateway endpoint. In an active-active failover configuration, every record is associated with a health check, so Route 53 answers queries with all healthy Regions and automatically stops routing traffic to a Region whose health check fails. A transit gateway operates at the network layer to connect VPCs and on-premises networks; it cannot be attached to an API Gateway endpoint or route HTTP requests between Regions. An Application Load Balancer likewise cannot use API Gateway endpoint hostnames in other Regions as targets.
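
A rough boto3 sketch of one Region's health check and record; the hosted zone ID, domain names, and health-check path are hypothetical, and the same pattern is repeated for each Region:

```python
import boto3

route53 = boto3.client("route53")

# Health check against the Regional API Gateway endpoint.
hc = route53.create_health_check(
    CallerReference="use1-api-hc-1",  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "abc123.execute-api.us-east-1.amazonaws.com",
        "ResourcePath": "/prod/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Active-active: a latency-based record per Region, each tied to its
# health check, so Route 53 skips any Region that becomes unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",  # latency-based routing
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                    ],
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                },
            }
        ]
    },
)
```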

A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the environment variables. Which steps must the solutions architect take to implement the correct permissions? (Choose two.)


A. Add AWS KMS permissions in the Lambda resource policy.


B. Add AWS KMS permissions in the Lambda execution role.


C. Add AWS KMS permissions in the Lambda function policy.


D. Allow the Lambda execution role in the AWS KMS key policy.


E. Allow the Lambda resource policy in the AWS KMS key policy.





B.
  Add AWS KMS permissions in the Lambda execution role.

D.
  Allow the Lambda execution role in the AWS KMS key policy.

Explanation: B and D are the correct answers because they ensure that the Lambda execution role has the permissions to decrypt and use the environment variables, and that the AWS KMS key policy allows the Lambda execution role to use the key. The Lambda execution role is an IAM role that grants the Lambda function permission to access AWS resources, such as AWS KMS. The AWS KMS key policy is a resource-based policy that controls access to the key. By adding AWS KMS permissions in the Lambda execution role and allowing the Lambda execution role in the AWS KMS key policy, the solutions architect can implement the correct permissions for encrypting and decrypting environment variables.
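
A minimal sketch of the two policy documents, with a hypothetical account ID, role name, and key ARN; the first is attached to the execution role (answer B), the second is a statement added to the key policy (answer D):

```python
import json

# Answer B: identity policy attached to the Lambda execution role.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/1111abcd-22ef-3333-4444-555566667777",
        }
    ],
}

# Answer D: statement in the KMS key policy allowing the execution role.
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:role/lambda-execution-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}

print(json.dumps(execution_role_policy, indent=2))
print(json.dumps(key_policy_statement, indent=2))
```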

A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.

What should a solutions architect propose to improve the performance of the workload?


A. Choose a cluster placement group while launching Amazon EC2 instances.


B. Choose dedicated instance tenancy while launching Amazon EC2 instances.


C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.


D. Choose the required capacity reservation while launching Amazon EC2 instances.





A.
  Choose a cluster placement group while launching Amazon EC2 instances.
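
Explanation: A cluster placement group packs instances physically close together within a single Availability Zone, which provides the low-latency, high-throughput node-to-node networking that tightly coupled HPC workloads need. Instances launched with default options are not assigned to any placement group.

A minimal boto3 sketch; the AMI ID, instance type, and node count are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the cluster placement group once.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the tightly coupled nodes into the group so they share
# low-latency, high-bandwidth networking.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```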

A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely accessed. Which solution will accomplish this goal with the LEAST operational overhead?


A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.


B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.


C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with Amazon Athena.


D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon CloudWatch Logs.





A.
  Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.

Explanation: S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity trends, and recommendations to optimize costs. Storage Lens allows you to analyze object access patterns across all of your S3 buckets and generate detailed metrics and reports.
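
As a sketch, the advanced activity metrics behind that dashboard are enabled through a Storage Lens configuration; the account ID and configuration ID are hypothetical, and the exact configuration shape is an assumption:

```python
import boto3

s3control = boto3.client("s3control")

# Enable account- and bucket-level activity metrics so the dashboard
# can show which buckets are rarely or never accessed.
s3control.put_storage_lens_configuration(
    ConfigId="org-activity",
    AccountId="123456789012",
    StorageLensConfiguration={
        "Id": "org-activity",
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {"ActivityMetrics": {"IsEnabled": True}},
        },
    },
)
```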

A security team wants to limit access to specific services or actions in all of the team's AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained. What should a solutions architect do to accomplish this?


A. Create an ACL to provide access to the services or actions.


B. Create a security group to allow accounts and attach it to user groups.


C. Create cross-account roles in each account to deny access to the services or actions.


D. Create a service control policy in the root organizational unit to deny access to the services or actions.





D.
  Create a service control policy in the root organizational unit to deny access to the services or actions.

Explanation: Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines.
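
A minimal boto3 sketch of creating the SCP and attaching it at the root; the denied service is a placeholder:

```python
import json

import boto3

organizations = boto3.client("organizations")

# Deny the restricted actions across the whole organization.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": ["dynamodb:*"], "Resource": "*"}
    ],
}

policy = organizations.create_policy(
    Name="deny-restricted-services",
    Description="Central deny of restricted services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attaching at the root applies the policy to every account, giving a
# single point where the permissions are maintained.
root_id = organizations.list_roots()["Roots"][0]["Id"]
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```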

A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3?


A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.


B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.


C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.


D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.





B.
  Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
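
Explanation: An ECS task role gives the containers in a task temporary credentials scoped to that task, so the application can call the Amazon S3 API without embedded keys. The role is referenced through the taskRoleArn field of the task definition.

A minimal boto3 sketch; the role ARNs, image URI, and sizing are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",
    # Role the application code assumes for its S3 API calls.
    taskRoleArn="arn:aws:iam::123456789012:role/image-resizer-task-role",
    # Role ECS itself uses to pull the image and write logs.
    executionRoleArn="arn:aws:iam::123456789012:role/ecs-execution-role",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "essential": True,
        }
    ],
)
```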

A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only. Which configuration will meet this requirement?


A. Configure the security group for the EC2 instances.


B. Configure the security group on the Application Load Balancer.


C. Configure AWS WAF on the Application Load Balancer in a VPC.


D. Configure the network ACL for the subnet that contains the EC2 instances.





C.
  Configure AWS WAF on the Application Load Balancer in a VPC.
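
Explanation: AWS WAF supports geographic match rules, so a web ACL associated with the Application Load Balancer can block all traffic by default and allow only requests originating in the permitted country. Security groups and network ACLs filter on IP addresses and ports and have no notion of the client's country.

A rough boto3 sketch; the country code, resource names, and ALB ARN are hypothetical:

```python
import boto3

wafv2 = boto3.client("wafv2")

visibility = {
    "SampledRequestsEnabled": True,
    "CloudWatchMetricsEnabled": True,
    "MetricName": "allow-one-country",
}

# Block by default; a geo-match rule allows only the permitted country.
acl = wafv2.create_web_acl(
    Name="allow-one-country",
    Scope="REGIONAL",  # REGIONAL scope is used for an ALB
    DefaultAction={"Block": {}},
    VisibilityConfig=visibility,
    Rules=[
        {
            "Name": "allow-country",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {**visibility, "MetricName": "allow-country"},
        }
    ],
)

# Attach the web ACL to the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/0123456789abcdef",
)
```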

An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the disk space without downtime. Which solution meets these requirements with the LEAST amount of effort?


A. Enable storage autoscaling in RDS.


B. Increase the RDS database instance size.


C. Change the RDS database instance storage type to Provisioned IOPS.


D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance.





A.
  Enable storage autoscaling in RDS.
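
Explanation: Amazon RDS storage autoscaling grows the allocated storage automatically when free space runs low, with no downtime and no manual work; you only set a maximum storage threshold.

Enabling it is a single modification, sketched here with a hypothetical instance identifier and ceiling:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",
    MaxAllocatedStorage=1000,  # autoscaling ceiling in GiB
    ApplyImmediately=True,
)
```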

A company has a small Python application that processes JSON documents and outputs the results to an on-premises SQL database. The application runs thousands of times each day. The company wants to move the application to the AWS Cloud. The company needs a highly available solution that maximizes scalability and minimizes operational overhead. Which solution will meet these requirements?


A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple Amazon EC2 instances to process the documents. Store the results in an Amazon Aurora DB cluster.


B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.


C. Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS DB instance.


D. Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages. Deploy the Python code as a container on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to process the SQS messages. Store the results on an Amazon RDS DB instance.





B.
  Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster.

Explanation: By placing the JSON documents in an S3 bucket, the documents will be stored in a highly durable and scalable object storage service. The use of AWS Lambda allows the company to run their Python code to process the documents as they arrive in the S3 bucket without having to worry about the underlying infrastructure. This also allows for horizontal scalability, as AWS Lambda will automatically scale the number of instances of the function based on the incoming rate of requests. The results can be stored in an Amazon Aurora DB cluster, which is a fully-managed, high-performance database service that is compatible with MySQL and PostgreSQL. This will provide the necessary durability and scalability for the results of the processing.
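
A minimal sketch of such a Lambda handler, assuming an S3 event notification invokes the function; process and write_to_aurora stand in for the company's existing Python logic:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Invoked by an S3 event notification for each new JSON document."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch and parse the newly arrived document.
        obj = s3.get_object(Bucket=bucket, Key=key)
        document = json.loads(obj["Body"].read())

        result = process(document)   # the existing processing code
        write_to_aurora(result)      # e.g. a MySQL client writing to Aurora


def process(document):
    ...  # placeholder for the company's logic


def write_to_aurora(result):
    ...  # placeholder for the database write
```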

A company produces batch data that comes from different databases. The company also produces live stream data from network sensors and application APIs. The company needs to consolidate all the data into one place for business analytics. The company needs to process the incoming data and then stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the data into a business intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)


A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.


B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.


C. Create custom AWS Lambda functions to move the individual records from the databases to an Amazon Redshift cluster.


D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load the data into multiple Amazon OpenSearch Service (Amazon Elasticsearch Service) clusters.


E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract the data, and load the data into Amazon S3 in Apache Parquet format.





A.
  Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.

E.
  Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use AWS Glue to crawl the source, extract the data, and load the data into Amazon S3 in Apache Parquet format.

Explanation: Amazon Athena is the best choice for running one-time queries. Although Amazon Kinesis Data Analytics provides a familiar standard SQL language for analyzing streaming data in real time, it is designed for continuous queries rather than one-time queries. Amazon Athena, by contrast, is a serverless interactive query service for querying data in Amazon S3 with SQL, optimized for ad hoc, one-time queries. AWS Lake Formation serves as the central place to gather all of the data for analytics (E), and Athena integrates directly with S3 to query it (A).
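
A sketch of one such one-time query against the Parquet data in S3; the database, table, and output location are hypothetical:

```python
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString=(
        "SELECT sensor_id, avg(reading) AS avg_reading "
        "FROM sensor_data GROUP BY sensor_id"
    ),
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://query-results-example/athena/"},
)
print(execution["QueryExecutionId"])  # poll get_query_execution for status
```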

