AIF-C01 Practice Test Questions

138 Questions


Which of the following AWS services is best suited for building and deploying machine learning models without managing infrastructure?


A. Amazon EC2


B. Amazon S3


C. Amazon SageMaker


D. AWS Lambda





C. Amazon SageMaker

Which of the following Amazon AI services can be used to analyze text and detect the sentiment expressed in it?


A. Amazon Polly


B. Amazon Lex


C. Amazon Comprehend


D. Amazon Rekognition





C. Amazon Comprehend
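For illustration, here is a minimal boto3 sketch of a Comprehend sentiment call; the sample text and Region are placeholders, not part of the question.

```python
# Minimal sketch: detect the sentiment of a piece of text with Amazon Comprehend.
# The sample text and Region are illustrative placeholders.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The support team resolved my issue quickly. Great service!",
    LanguageCode="en",
)

print(response["Sentiment"])       # e.g. POSITIVE
print(response["SentimentScore"])  # confidence score per sentiment class
```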

In the context of machine learning, what is overfitting?


A. A model performs well on training data but poorly on unseen data.


B. A model performs equally well on both training and testing data.


C. A model underperforms on both training and testing data.


D. A model performs poorly on training data but well on unseen data.





A. A model performs well on training data but poorly on unseen data.
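To make the definition concrete, the sketch below shows an unconstrained decision tree that memorizes noisy training data: accuracy is near perfect on the training split but clearly lower on held-out data.

```python
# Minimal sketch of overfitting: an unconstrained decision tree memorizes noisy
# training data, so it scores far better on data it has seen than on new data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)  # no depth limit -> memorizes noise
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```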

A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions.
Which business objective should the company use to evaluate the effect of the LLM chatbot?


A. Website engagement rate


B. Average call duration


C. Corporate social responsibility


D. Regulatory compliance





B. Average call duration

A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?


A. Use Amazon SageMaker Serverless Inference to deploy the model.


B. Use Amazon CloudFront to deploy the model.


C. Use Amazon API Gateway to host the model and serve predictions.


D. Use AWS Batch to host the model and serve predictions.





A. Use Amazon SageMaker Serverless Inference to deploy the model.

Explanation:
Amazon SageMaker Serverless Inference is the correct solution for deploying an ML model to production in a way that allows a web application to use the model without the need to manage the underlying infrastructure.
Amazon SageMaker Serverless Inference provides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the infrastructure required to host the model, removing the need for the company to manage servers or other underlying infrastructure.
Why the other options are incorrect: Amazon CloudFront is a content delivery network and cannot host ML models; Amazon API Gateway can expose an HTTP front end but does not host models or serve predictions on its own; AWS Batch runs batch computing jobs and is not designed for real-time model hosting.
Thus, A is the correct answer, because it deploys the ML model without managing any underlying infrastructure.
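As a rough illustration of what option A looks like in practice, here is a minimal sketch using the SageMaker Python SDK; the container image, model artifact path, and execution role are placeholders, not values from the question.

```python
# Minimal sketch (SageMaker Python SDK): deploy a trained model to a serverless
# inference endpoint. SageMaker provisions and scales the compute automatically.
# The image URI, model artifact path, and role ARN below are placeholders.
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

model = Model(
    image_uri="<inference-container-image-uri>",    # placeholder
    model_data="s3://<bucket>/model/model.tar.gz",  # placeholder
    role="<execution-role-arn>",                    # placeholder
)

predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,  # memory allocated per invocation
        max_concurrency=10,      # concurrent invocations before throttling
    )
)
# The web application can now send inference requests to predictor.endpoint_name
# without the company managing any servers.
```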

A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.
Which AWS solution should the company use to automate the generation of graphs?


A. Amazon Q in Amazon EC2


B. Amazon Q Developer


C. Amazon Q in Amazon QuickSight


D. Amazon Q in AWS Chatbot





C. Amazon Q in Amazon QuickSight

A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words to fill in the missing text.
Which type of model meets this requirement?


A. Topic modeling


B. Clustering models


C. Prescriptive ML models


D. BERT-based models





D. BERT-based models

Explanation: BERT-based models (Bidirectional Encoder Representations from Transformers) are suitable for tasks that involve understanding the context of words in a sentence and suggesting missing words. These models use bidirectional training, which considers the context from both directions (left and right of the missing word) to predict the appropriate word to fill in the gaps.
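As an illustration of this masked-word prediction, here is a minimal sketch using the Hugging Face transformers library (not an AWS service); the sample sentence is made up.

```python
# Minimal sketch (Hugging Face transformers, not an AWS API): a BERT-based
# fill-mask pipeline suggests words for a masked position in a sentence.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for suggestion in fill_mask("The invoice is missing the [MASK] amount."):
    print(suggestion["token_str"], round(suggestion["score"], 3))
```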

A company uses a foundation model (FM) from Amazon Bedrock for an AI search tool. The company wants to fine-tune the model to be more accurate by using the company's data.
Which strategy will successfully fine-tune the model?


A. Provide labeled data with the prompt field and the completion field.


B. Prepare the training dataset by creating a .txt file that contains multiple lines in .csv format.


C. Purchase Provisioned Throughput for Amazon Bedrock.


D. Train the model on journals and textbooks.





A. Provide labeled data with the prompt field and the completion field.
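For illustration, here is a minimal sketch of what such labeled training data could look like as JSON Lines records with prompt and completion fields; the example question and answer pairs are invented.

```python
# Minimal sketch: Amazon Bedrock fine-tuning expects labeled JSON Lines records
# with a prompt field and a completion field. The example pairs are invented.
import json

records = [
    {"prompt": "What is the warranty period for product X?",
     "completion": "Product X includes a two-year limited warranty."},
    {"prompt": "How do I reset my device?",
     "completion": "Hold the power button for ten seconds to reset the device."},
]

with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```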

A company wants to make a chatbot to help customers. The chatbot will help solve technical problems without human intervention. The company chose a foundation model (FM) for the chatbot. The chatbot needs to produce responses that adhere to company tone.
Which solution meets these requirements?


A. Set a low limit on the number of tokens the FM can produce.


B. Use batch inferencing to process detailed responses.


C. Experiment and refine the prompt until the FM produces the desired responses.


D. Define a higher number for the temperature parameter.





C. Experiment and refine the prompt until the FM produces the desired responses.

Explanation: Experimenting and refining the prompt is the best approach to ensure that the chatbot using a foundation model (FM) produces responses that adhere to the company's tone.
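As a rough sketch of this kind of prompt experimentation, the example below uses the boto3 Bedrock Converse API with a system instruction that states the desired tone; the model ID and wording are placeholder choices to iterate on, not prescribed values.

```python
# Minimal sketch (boto3 Bedrock Converse API): a system instruction states the
# desired company tone, and the prompt is refined iteratively until responses
# match it. The model ID and wording are placeholder choices.
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
    system=[{"text": "You are a friendly support assistant. Answer concisely, "
                     "in an upbeat tone, and never blame the customer."}],
    messages=[{"role": "user",
               "content": [{"text": "My device will not power on."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```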

A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model's responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.
Which AWS service meets these requirements?


A. Amazon S3


B. Amazon Elastic Block Store (Amazon EBS)


C. Amazon Elastic File System (Amazon EFS)


D. AWS Snowcone





A. Amazon S3
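For illustration, a minimal boto3 sketch of uploading a validation dataset to Amazon S3 so that Amazon Bedrock can reference it by its S3 URI; the bucket, key, and file names are placeholders.

```python
# Minimal sketch: upload a validation dataset to Amazon S3 so Amazon Bedrock can
# reference it by its S3 URI. Bucket, key, and file names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file("validation.jsonl", "my-bedrock-datasets", "validation/validation.jsonl")
# Bedrock can then be pointed at s3://my-bedrock-datasets/validation/validation.jsonl
```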

How can companies use large language models (LLMs) securely on Amazon Bedrock?


A. Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.


B. Enable AWS Audit Manager for automatic model evaluation jobs.


C. Enable Amazon Bedrock automatic model evaluation jobs.


D. Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.





A. Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.
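As a rough sketch of the least-privilege half of this answer, the example below builds an IAM policy (expressed as a Python dict) that allows invoking only one specific Bedrock foundation model; the Region and model ID are placeholders.

```python
# Minimal sketch: a least-privilege IAM policy (expressed as a Python dict) that
# allows invoking only one specific Bedrock foundation model. Region and model
# ID are placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel"],
        "Resource": [
            "arn:aws:bedrock:us-east-1::foundation-model/"
            "anthropic.claude-3-haiku-20240307-v1:0"
        ],
    }],
}

print(json.dumps(policy, indent=2))
```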

A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple teams.
Which SageMaker feature meets these requirements?


A. Amazon SageMaker Feature Store


B. Amazon SageMaker Data Wrangler


C. Amazon SageMaker Clarify


D. Amazon SageMaker Model Cards





A. Amazon SageMaker Feature Store
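For illustration, a minimal sketch of registering shared features with the SageMaker Python SDK so that multiple teams can discover and reuse them; the feature group name, S3 URI, and role ARN are placeholders.

```python
# Minimal sketch (SageMaker Python SDK): register features in SageMaker Feature
# Store so multiple teams can discover and reuse them. The feature group name,
# S3 URI, and role ARN are placeholders.
import time

import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
df = pd.DataFrame({
    "customer_id": ["c-1", "c-2"],
    "avg_order_value": [42.5, 17.0],
    "event_time": [1700000000.0, 1700000000.0],
})
df["customer_id"] = df["customer_id"].astype("string")  # Feature Store needs string dtype

feature_group = FeatureGroup(name="customer-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)  # infer feature types
feature_group.create(
    s3_uri="s3://<bucket>/feature-store",   # offline store location (placeholder)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn="<execution-role-arn>",        # placeholder
    enable_online_store=True,
)

while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)  # wait until the feature group is ready before ingesting

feature_group.ingest(data_frame=df, max_workers=1, wait=True)
```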

