AIF-C01 Practice Test Questions

138 Questions


Which AWS service or feature can help an AI development team quickly deploy and consume a foundation model (FM) within the team's VPC?


A. Amazon Personalize


B. Amazon SageMaker JumpStart


C. PartyRock, an Amazon Bedrock Playground


D. Amazon SageMaker endpoints





B.
  Amazon SageMaker JumpStart

Which strategy evaluates the accuracy of a foundation model (FM) that is used in image classification tasks?


A. Calculate the total cost of resources used by the model.


B. Measure the model's accuracy against a predefined benchmark dataset.


C. Count the number of layers in the neural network.


D. Assess the color accuracy of images processed by the model.





B.
  Measure the model's accuracy against a predefined benchmark dataset.

Explanation:
Measuring the model's accuracy against a predefined benchmark dataset is the correct strategy for evaluating a foundation model (FM) used in image classification. A benchmark dataset supplies labeled ground truth, so the model's predictions can be scored against known answers to produce accuracy metrics. Resource cost (A), the number of neural network layers (C), and the color accuracy of processed images (D) describe expense, architecture, and image quality; none of them measures classification accuracy.

What does an F1 score measure in the context of foundation model (FM) performance?


A. Model precision and recall.


B. Model speed in generating responses.


C. Financial cost of operating the model.


D. Energy efficiency of the model's computations.





A.
  Model precision and recall.

Explanation: The F1 score is the harmonic mean of precision and recall, making it a balanced metric for evaluating model performance when there is an imbalance between false positives and false negatives. Speed, cost, and energy efficiency are unrelated to the F1 score. References: AWS Foundation Models Guide.
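As a worked illustration (not part of the exam material), the F1 score can be computed directly from true-positive, false-positive, and false-negative counts:

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 8 true positives, 2 false positives, 4 false negatives:
# precision = 0.8, recall ~ 0.667, F1 ~ 0.727
print(round(f1_score(8, 2, 4), 3))  # 0.727
```

Because the harmonic mean punishes imbalance, a model with high precision but poor recall (or vice versa) still earns a low F1.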

A company built an AI-powered resume screening system. The company used a large dataset to train the model. The dataset contained resumes that were not representative of all demographics. Which core dimension of responsible AI does this scenario present?


A. Fairness.


B. Explainability.


C. Privacy and security.


D. Transparency.





A.
  Fairness.

Explanation: Fairness refers to the absence of bias in AI models. Using nonrepresentative datasets leads to biased predictions, affecting specific demographics unfairly. Explainability, privacy, and transparency are important but not directly related to this scenario.

A company wants to classify human genes into 20 categories based on gene characteristics. The company needs an ML algorithm to document how the inner mechanism of the model affects the output.
Which ML algorithm meets these requirements?


A. Decision trees


B. Linear regression


C. Logistic regression


D. Neural networks





A.
  Decision trees

Explanation:
Decision trees are an interpretable machine learning algorithm that clearly documents the decision-making process by showing how each input feature affects the output. This transparency is particularly useful when explaining how the model arrives at a certain decision, making it suitable for classifying genes into categories.
Option A (Correct): "Decision trees": This is the correct answer because decision trees provide a clear and interpretable representation of how input features influence the model's output, making it ideal for understanding the inner mechanisms affecting predictions.
Option B: "Linear regression" is incorrect because it is used for regression tasks, not classification.
Option C: "Logistic regression" is incorrect because it is designed primarily for binary classification and its coefficients do not document a step-by-step decision path the way a tree's branches do.
Option D: "Neural networks" is incorrect because they are often considered "black boxes" and do not easily explain how they arrive at their outputs.
AWS AI Practitioner References:
Interpretable Machine Learning Models on AWS: AWS supports using interpretable models, such as decision trees, for tasks that require clear documentation of how input data affects output decisions.
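To make the interpretability point concrete, here is a hand-written sketch of what a small decision tree's logic looks like; the feature names (gc_content, length), thresholds, and category labels are invented purely for illustration:

```python
# Each branch is a human-readable rule, so the full decision path
# behind any prediction can be documented and audited.
def classify_gene(gc_content, length):
    if gc_content <= 0.45:
        if length <= 1200:
            return "category_3"
        return "category_7"
    if length <= 800:
        return "category_12"
    return "category_1"

print(classify_gene(0.40, 1000))  # prints "category_3"
```

A trained tree has the same shape: every prediction is explained by the sequence of threshold tests it passed, which is exactly the documentation requirement in the question.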

Which term describes the numerical representations of real-world objects and concepts that AI and natural language processing (NLP) models use to improve understanding of textual information?


A. Embeddings


B. Tokens


C. Models


D. Binaries





A.
  Embeddings

Explanation:
Embeddings are numerical representations of objects (such as words, sentences, or documents) that capture the objects' semantic meanings in a form that AI and NLP models can easily understand. These representations help models improve their understanding of textual information by representing concepts in a continuous vector space.
Option A (Correct): "Embeddings": This is the correct term, as embeddings provide a way for models to learn relationships between different objects in their input space, improving their understanding and processing capabilities.
Option B: "Tokens" are pieces of text used in processing, but they do not capture semantic meanings like embeddings do.
Option C: "Models" are the algorithms that use embeddings and other inputs, not the representations themselves.
Option D: "Binaries" refer to data represented in binary form, which is unrelated to the concept of embeddings.
AWS AI Practitioner References:
Understanding Embeddings in AI and NLP: AWS provides resources and tools, like Amazon SageMaker, that utilize embeddings to represent data in formats suitable for machine learning models.
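As a rough sketch of the idea, each word maps to a vector, and semantic relatedness can be measured with cosine similarity. The 4-dimensional vectors below are toy values (real embeddings from a trained model have hundreds or thousands of dimensions):

```python
import math

# Toy embeddings; in practice these come from a trained embedding model.
embeddings = {
    "king":  [0.90, 0.80, 0.10, 0.30],
    "queen": [0.88, 0.82, 0.12, 0.29],
    "apple": [0.10, 0.20, 0.95, 0.70],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Semantically related words sit closer together in the vector space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]) >
      cosine_similarity(embeddings["king"], embeddings["apple"]))  # True
```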

A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.
Which factor will drive the inference costs?


A. Number of tokens consumed


B. Temperature value


C. Amount of data used to train the LLM


D. Total training time





A.
  Number of tokens consumed
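Amazon Bedrock's on-demand pricing bills input and output tokens separately. The sketch below uses hypothetical per-1,000-token prices (real rates vary by model and are listed on the Amazon Bedrock pricing page):

```python
# Hypothetical prices, for illustration only.
PRICE_PER_1K_INPUT = 0.003   # USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1,000 output tokens

def inference_cost(input_tokens, output_tokens):
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# A 2,000-token prompt that produces a 500-token completion:
print(f"${inference_cost(2000, 500):.4f}")  # $0.0135
```

Temperature, training data volume, and training time never appear in this calculation, which is why token count is the factor that drives inference cost.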

An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the user who has asked the question.
Which solution meets these requirements with the LEAST implementation effort?


A. Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.


B. Add a role description to the prompt context that instructs the model of the age range that the response should target.


C. Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.


D. Summarize the response text depending on the age of the user so that younger users receive shorter responses.





B.
  Add a role description to the prompt context that instructs the model of the age range that the response should target.
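A minimal sketch of that approach: the age range is injected into the prompt as a role description, so no fine-tuning or post-processing is needed. The template wording and function name here are invented for illustration:

```python
def build_prompt(question, age_range):
    # The role description steers the model's style on a per-request basis.
    role = (f"You are a tutor answering for a student aged {age_range}. "
            f"Match your vocabulary and examples to that age group.")
    return f"{role}\n\nQuestion: {question}"

print(build_prompt("What is photosynthesis?", "8-10"))
```

Because the role text is assembled at request time, the same deployed model serves every age group, which is what makes this the lowest-effort option.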

A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large database of research papers.
After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers.
How can the company improve the performance of the chatbot?


A. Use few-shot prompting to define how the FM can answer the questions.


B. Use domain adaptation fine-tuning to adapt the FM to complex scientific terms.


C. Change the FM inference parameters.


D. Clean the research paper data to remove complex scientific terms.





B.
  Use domain adaptation fine-tuning to adapt the FM to complex scientific terms.

A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files.
Which solution meets these requirements MOST cost-effectively?


A. Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.


B. Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.


C. Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the finetuned model to process user prompts.


D. Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.





D.
  Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.

Explanation: A knowledge base implements Retrieval Augmented Generation (RAG): the PDFs are ingested once, and only the passages relevant to each question are retrieved and added to the prompt. This avoids the recurring token cost of sending entire PDF files as context on every request (options A and B) and the up-front expense of fine-tuning (option C), making it the most cost-effective solution.

A company is using the Generative AI Security Scoping Matrix to assess security responsibilities for its solutions. The company has identified four different solution scopes based on the matrix.
Which solution scope gives the company the MOST ownership of security responsibilities?


A. Using a third-party enterprise application that has embedded generative AI features.


B. Building an application by using an existing third-party generative AI foundation model (FM).


C. Refining an existing third-party generative AI foundation model (FM) by fine-tuning the model by using data specific to the business.


D. Building and training a generative AI model from scratch by using specific data that a customer owns.





D.
  Building and training a generative AI model from scratch by using specific data that a customer owns.

Which AWS feature records details about ML instance data for governance and reporting?


A. Amazon SageMaker Model Cards


B. Amazon SageMaker Debugger


C. Amazon SageMaker Model Monitor


D. Amazon SageMaker JumpStart





A.
  Amazon SageMaker Model Cards

Explanation:
Amazon SageMaker Model Cards provide a centralized and standardized repository for documenting machine learning models. They capture key details such as the model's intended use, training and evaluation datasets, performance metrics, ethical considerations, and other relevant information. This documentation facilitates governance and reporting by ensuring that all stakeholders have access to consistent and comprehensive information about each model. While Amazon SageMaker Debugger is used for real-time debugging and monitoring during training, and Amazon SageMaker Model Monitor tracks deployed models for data and prediction quality, neither offers the comprehensive documentation capabilities of Model Cards. Amazon SageMaker JumpStart provides pre-built models and solutions but does not focus on governance documentation.

