A company has a recommendation system. The system's applications run on Amazon EC2
instances. The applications make API calls to Amazon Bedrock foundation models (FMs) to
analyze customer behavior and generate personalized product recommendations.
The system is experiencing intermittent issues. Some recommendations do not match
customer preferences. The company needs an observability solution to monitor operational
metrics and detect patterns of operational performance degradation compared to
established baselines. The solution must also generate alerts with correlation data within
10 minutes when FM behavior deviates from expected patterns.
Which solution will meet these requirements?
A. Configure Amazon CloudWatch Container Insights for the application infrastructure. Set up CloudWatch alarms for latency thresholds. Add custom metrics for token counts by using the CloudWatch embedded metric format. Create CloudWatch dashboards to visualize the data.
B. Implement AWS X-Ray to trace requests through the application components. Enable CloudWatch Logs Insights for error pattern detection. Set up AWS CloudTrail to monitor all API calls to Amazon Bedrock. Create custom dashboards in Amazon QuickSight.
C. Enable Amazon CloudWatch Application Insights for the application resources. Create custom metrics for recommendation quality, token usage, and response latency by using the CloudWatch embedded metric format with dimensions for request types and user segments. Configure CloudWatch anomaly detection on the model metrics. Establish log pattern analysis by using CloudWatch Logs Insights.
D. Use Amazon OpenSearch Service with the Observability plugin. Ingest model metrics and logs by using Amazon Kinesis. Create custom Piped Processing Language (PPL) queries to analyze model behavior patterns. Establish operational dashboards to visualize anomalies in real time.
A university recently digitized a collection of archival documents, academic journals, and
manuscripts. The university stores the digital files in an AWS Lake Formation data lake.
The university hires a GenAI developer to build a solution to allow users to search the
digital files by using text queries. The solution must return journal abstracts that are
semantically similar to a user's query. Users must be able to search the digitized collection
based on text and metadata that is associated with the journal abstracts. The metadata of
the digitized files does not contain keywords. The solution must match similar abstracts to
one another based on the similarity of their text. The data lake contains fewer than 1 million
files.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Titan Embeddings in Amazon Bedrock to create vector representations of the digitized files. Store embeddings in the OpenSearch Neural plugin for Amazon OpenSearch Service.
B. Use Amazon Comprehend to extract topics from the digitized files. Store the topics and file metadata in an Amazon Aurora PostgreSQL database. Query the abstract metadata against the data in the Aurora database.
C. Use Amazon SageMaker AI to deploy a sentence-transformer model. Use the model to create vector representations of the digitized files. Store embeddings in an Amazon Aurora PostgreSQL database that has the pgvector extension.
D. Use Amazon Titan Embeddings in Amazon Bedrock to create vector representations of the digitized files. Store embeddings in an Amazon Aurora PostgreSQL Serverless database that has the pgvector extension.
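The semantic search in options A, C, and D ultimately reduces to nearest-neighbor search over embedding vectors. A pure-Python sketch of the underlying similarity computation (the toy vectors stand in for real Titan embeddings; pgvector's `<=>` operator computes the complementary cosine distance):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|). pgvector's <=> operator
    # returns cosine *distance*, i.e. 1 minus this value.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings (real Titan embeddings have hundreds of dims).
query = [0.1, 0.9, 0.2]
abstracts = {"a1": [0.1, 0.8, 0.3], "a2": [0.9, 0.1, 0.0]}
best = max(abstracts, key=lambda k: cosine_similarity(query, abstracts[k]))
print(best)  # → a1
```

At fewer than 1 million files, an exact scan like this (or pgvector without an approximate index) is typically adequate, which is why a serverless Postgres option carries low operational overhead.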
A company is building a generative AI (GenAI) application that uses Amazon Bedrock APIs
to process complex customer inquiries. During peak usage periods, the application
experiences intermittent API timeouts that cause issues such as broken response chunks
and delayed data delivery. The application struggles to ensure that prompts remain within
token limits when handling complex customer inquiries of varying lengths. Users have
reported truncated inputs and incomplete responses. The company has also observed foundation model (FM) invocation failures.
The company needs a retry strategy that automatically handles transient service errors and
prevents overwhelming Amazon Bedrock during peak usage periods. The strategy must
also adapt to changing service availability and support response streaming and token-aware
request handling.
Which solution will meet these requirements?
A. Implement a standard retry strategy that uses a 1-second fixed delay between attempts and a 3-retry maximum for all errors. Handle streaming response timeouts by restarting streams. Cap token usage for each session.
B. Implement an adaptive retry strategy that uses exponential backoff with jitter and a circuit breaker pattern that temporarily disables retries when error rates exceed a predefined threshold. Implement a streaming response handler that monitors for chunk delivery timeouts. Configure the handler to buffer successfully received chunks and intelligently resume streaming from the last received chunk when connections are reestablished.
C. Use the AWS SDK to configure a retry strategy in standard mode. Wrap Amazon Bedrock API calls in try-catch blocks that handle timeout exceptions. Return cached completions for failed streaming requests. Enforce a global token limit for all users. Add jitter-based retry logic and lightweight token trimming for each request. Resume broken streams by requesting only missing chunks from the point of failure. Maintain a small in-memory buffer of the most recent chunks.
D. Set Amazon Bedrock client request timeouts to 30 seconds. Implement client-side load shedding. Buffer partial results and stop new requests when application performance degrades. Set static token usage caps for all requests. Configure exponential backoff retries, dynamic chunk sizing, and context-aware token limits.
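Option B combines exponential backoff with jitter and a circuit breaker. A minimal sketch of those two mechanisms (thresholds and delays are illustrative; a production client would also handle streaming resume, which is omitted here):

```python
import random
import time

class CircuitBreaker:
    """Opens (suspends calls) after too many consecutive failures, then
    allows a trial call through once the cooldown elapses (half-open)."""
    def __init__(self, failure_threshold=5, cooldown_s=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            self.opened_at = None  # half-open: let one attempt through
            self.failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

def backoff_with_jitter(attempt, base_s=0.5, cap_s=20.0):
    """'Full jitter': sleep a random amount up to the exponential cap, so
    retrying clients do not synchronize and overwhelm the service."""
    return random.uniform(0, min(cap_s, base_s * (2 ** attempt)))

def call_with_retries(fn, breaker, max_attempts=5):
    for attempt in range(max_attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open: Bedrock calls suspended")
        try:
            result = fn()
            breaker.record(success=True)
            return result
        except Exception:
            breaker.record(success=False)
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_with_jitter(attempt))
```

The circuit breaker is what prevents a herd of retries from overwhelming Amazon Bedrock during peak periods, while the jitter spreads out the retries that are allowed.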
A company runs a generative AI (GenAI)-powered summarization application in an
application AWS account that uses Amazon Bedrock. The application architecture includes
an Amazon API Gateway REST API that forwards requests to AWS Lambda functions that
are attached to private VPC subnets. The application summarizes sensitive customer
records that the company stores in a governed data lake in a centralized data storage
account. The company has enabled Amazon S3, Amazon Athena, and AWS Glue in the
data storage account.
The company must ensure that calls that the application makes to Amazon Bedrock use
only private connectivity between the company's application VPC and Amazon Bedrock. The company's data lake must provide fine-grained column-level access across the
company's AWS accounts.
Which solution will meet these requirements?
A. In the application account, create interface VPC endpoints for Amazon Bedrock runtimes. Run Lambda functions in private subnets. Use IAM conditions on inference and data-plane policies to allow calls only to approved endpoints and roles. In the data storage account, use AWS Lake Formation LF-tag-based access control to create table-level and column-level cross-account grants.
B. Run Lambda functions in private subnets. Configure a NAT gateway to provide access to Amazon Bedrock and the data lake. Use S3 bucket policies and ACLs to manage permissions. Export AWS CloudTrail logs to Amazon S3 to perform weekly reviews.
C. Create a gateway endpoint only for Amazon S3 in the application account. Invoke Amazon Bedrock through public endpoints. Use database-level grants in AWS Lake Formation to manage data access. Stream AWS CloudTrail logs to Amazon CloudWatch Logs. Do not set up metric filters or alarms.
D. Use VPC endpoints to provide access to Amazon Bedrock and Amazon S3 in the application account. Use only IAM path-based policies to manage data lake access. Send AWS CloudTrail logs to Amazon CloudWatch Logs. Periodically create dashboards and allow public fallback for cross-Region reads to reduce setup time.
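Option A's "IAM conditions on inference and data-plane policies" typically means conditioning on the `aws:SourceVpce` global condition key. A minimal sketch of such a policy statement (the endpoint ID is a placeholder, and whether the Deny lives in an identity policy, resource policy, or SCP depends on the account design):

```python
import json

# Hypothetical endpoint ID; substitute the real bedrock-runtime interface
# endpoint created in the application account.
VPCE_ID = "vpce-0123456789abcdef0"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowBedrockOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Action": "bedrock:InvokeModel",
        "Resource": "*",
        # Deny any invocation that does not arrive through the approved
        # interface VPC endpoint, enforcing private-only connectivity.
        "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
    }],
}

print(json.dumps(policy, indent=2))
```

Column-level access is then handled separately by Lake Formation LF-tag grants in the data storage account, not by this network-scoped policy.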
An ecommerce company is using Amazon Bedrock to build a generative AI (GenAI)
application. The application uses AWS Step Functions to orchestrate a multi-agent
workflow to produce detailed product descriptions. The workflow consists of three
sequential states: a description generator, a technical specifications validator, and a brand
voice consistency checker. Each state produces intermediate reasoning traces and outputs
that are passed to the next state. The application uses an Amazon S3 bucket for process
storage and to store outputs.
During testing, the company discovers that outputs between Step Functions states
frequently exceed the 256 KB quota and cause workflow failures. A GenAI developer
needs to revise the application architecture to efficiently handle the Step Functions 256 KB
quota and maintain workflow observability. The revised architecture must preserve the
existing multi-agent reasoning and acting (ReAct) pattern.
Which solution will meet these requirements with the LEAST operational overhead?
A. Store intermediate outputs in Amazon DynamoDB. Pass only references between states. Create a Map state that retrieves the complete data from DynamoDB when required for each agent's processing step.
B. Configure an Amazon Bedrock integration to use the S3 bucket URI in the input parameters for large outputs. Use the ResultPath and ResultSelector fields to route S3 references between the agent steps while maintaining the sequential validation workflow.
C. Use AWS Lambda functions to compress outputs to less than 256 KB before each agent state. Configure each agent task to decompress outputs before processing and to compress results before passing them to the next state.
D. Configure a separate Step Functions state machine to handle each agent’s processing. Use Amazon EventBridge to coordinate the execution flow between state machines. Use S3 references for the outputs as event data.
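The S3-reference pattern in option B can be sketched as a small helper that offloads any state output above the 256 KB quota and passes only a pointer to the next state. The uploader is injected so the sketch runs without AWS credentials (in practice it would wrap `boto3` `put_object`); bucket and key names are illustrative:

```python
import json

STEP_FUNCTIONS_PAYLOAD_LIMIT = 256 * 1024  # 256 KB quota on state input/output

def pass_or_offload(payload, upload_to_s3, bucket, key):
    """Return the payload inline if it fits the quota; otherwise upload it
    to S3 and return a small reference for the next state to resolve.

    `upload_to_s3(bucket, key, body)` is an injected callable so this
    helper can be exercised offline.
    """
    body = json.dumps(payload)
    if len(body.encode("utf-8")) <= STEP_FUNCTIONS_PAYLOAD_LIMIT:
        return payload
    upload_to_s3(bucket, key, body)
    return {"s3_reference": {"bucket": bucket, "key": key}}
```

Because each agent state still receives (and can fetch) the full reasoning trace, the sequential ReAct pattern is preserved while only lightweight references cross state boundaries.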
A company upgraded its Amazon Bedrock–powered foundation model (FM) that supports a
multilingual customer service assistant. After the upgrade, the assistant exhibited
inconsistent behavior across languages. The assistant began generating different
responses in some languages when presented with identical questions.
The company needs a solution to detect and address similar problems for future updates.
The evaluation must be completed within 45 minutes for all supported languages. The
evaluation must process at least 15,000 test conversations in parallel. The evaluation
process must be fully automated and integrated into the CI/CD pipeline. The solution must
block deployment if quality thresholds are not met.
Which solution will meet these requirements?
A. Create a distributed traffic simulation framework that sends translation-heavy workloads to the assistant in multiple languages simultaneously. Use Amazon CloudWatch metrics to monitor latency, concurrency, and throughput. Run simulations before production releases to identify infrastructure bottlenecks.
B. Deploy the assistant in multiple AWS Regions with Amazon Route 53 latency-based routing and AWS Global Accelerator to improve global performance. Store multilingual conversation logs in Amazon S3. Perform weekly post-deployment audits to review consistency.
C. Create a pre-processing pipeline that normalizes all incoming messages into a consistent format before sending the messages to the assistant. Apply rule-based checks to flag potential hallucinations in the outputs. Focus evaluation on normalized text to simplify testing across languages.
D. Set up standardized multilingual test conversations with identical meaning. Run the test conversations in parallel by using Amazon Bedrock model evaluation jobs. Apply similarity and hallucination thresholds. Integrate the process into the CI/CD pipeline to block releases that fail.
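The quality gate in option D boils down to scoring cross-language answer pairs and failing the pipeline when any pair diverges. A toy sketch of that gate, using token-overlap (Jaccard) as a stand-in similarity and assuming responses have already been translated to a pivot language (a real evaluation job would use embeddings or an LLM judge):

```python
from itertools import combinations

def jaccard(a, b):
    """Toy stand-in for a real semantic-similarity score."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def gate_release(responses_by_language, threshold=0.8, similarity=jaccard):
    """Return (passed, failing_pairs): fail the CI/CD gate when any two
    languages' answers to the same question diverge below the threshold."""
    failing = []
    for (la, ra), (lb, rb) in combinations(responses_by_language.items(), 2):
        if similarity(ra, rb) < threshold:
            failing.append((la, lb))
    return (len(failing) == 0, failing)
```

Running this check over 15,000 conversations in parallel, then wiring the boolean into the CI/CD stage, is what blocks a deployment that would reintroduce the inconsistency.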
A healthcare company is using Amazon Bedrock to build a system to help practitioners
make clinical decisions. The system must provide treatment recommendations to
physicians based only on approved medical documentation and must cite specific sources.
The system must not hallucinate or produce factually incorrect information.
Which solution will meet these requirements with the LEAST operational overhead?
A. Integrate Amazon Bedrock with Amazon Kendra to retrieve approved documents. Implement custom post-processing to compare generated responses against source documents and to include citations.
B. Deploy an Amazon Bedrock Knowledge Base and connect it to approved clinical source documents. Use the Amazon Bedrock RetrieveAndGenerate API to return citations from the knowledge base.
C. Use Amazon Bedrock and Amazon Comprehend Medical to extract medical entities. Implement verification logic against a medical terminology database.
D. Use an Amazon Bedrock knowledge base with Retrieve API calls and InvokeModel API calls to retrieve approved clinical source documents. Implement verification logic to compare against retrieved sources and to cite sources.
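Option B's RetrieveAndGenerate API returns the generated answer alongside a `citations` list pointing back at the retrieved source chunks. A sketch of pulling the source URIs out of such a response (the response shape is abbreviated to the fields used here; consult the `bedrock-agent-runtime` API reference for the full structure):

```python
def extract_citations(response):
    """Pull source document URIs out of a RetrieveAndGenerate-style
    response so each recommendation can be shown with its sources."""
    uris = []
    for citation in response.get("citations", []):
        for ref in citation.get("retrievedReferences", []):
            uri = ref.get("location", {}).get("s3Location", {}).get("uri")
            if uri:
                uris.append(uri)
    return uris

# Abbreviated sample response with a hypothetical document location.
sample = {
    "output": {"text": "Recommended treatment ..."},
    "citations": [{
        "retrievedReferences": [
            {"location": {"s3Location": {"uri": "s3://approved-docs/protocol-12.pdf"}}}
        ]
    }],
}
print(extract_citations(sample))  # → ['s3://approved-docs/protocol-12.pdf']
```

Because the managed knowledge base does the grounding and citation work, no custom verification pipeline (as in options A and D) is needed, which is the "least operational overhead" angle.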
A retail company has a generative AI (GenAI) product recommendation application that
uses Amazon Bedrock. The application suggests products to customers based on browsing
history and demographics. The company needs to implement fairness evaluation across
multiple demographic groups to detect and measure bias in recommendations between two
prompt approaches. The company wants to collect and monitor fairness metrics in real
time. The company must receive an alert if the fairness metrics show a discrepancy of
more than 15% between demographic groups. The company must receive weekly reports
that compare the performance of the two prompt approaches.
Which solution will meet these requirements with the LEAST custom development effort?
A. Configure an Amazon CloudWatch dashboard to display default metrics from Amazon Bedrock API calls. Create custom metrics based on model outputs. Set up Amazon EventBridge rules to invoke AWS Lambda functions that perform post-processing analysis on model responses and publish custom fairness metrics.
B. Create the two prompt variants in Amazon Bedrock Prompt Management. Use Amazon Bedrock Flows to deploy the prompt variants with defined traffic allocation. Configure Amazon Bedrock guardrails to monitor demographic fairness. Set up Amazon CloudWatch alarms on the GuardrailContentSource dimension by using InvocationsIntervened metrics to detect recommendation discrepancy threshold violations.
C. Set up Amazon SageMaker Clarify to analyze model outputs. Publish fairness metrics to Amazon CloudWatch. Create CloudWatch composite alarms that combine SageMaker Clarify bias metrics with Amazon Bedrock latency metrics.
D. Create an Amazon Bedrock model evaluation job to compare fairness between the two prompt variants. Enable model invocation logging in Amazon CloudWatch. Set up CloudWatch alarms for InvocationsIntervened metrics with a dimension for each demographic group.
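The 15% discrepancy condition in this scenario is a simple relative-gap computation over per-group metrics, whichever service produces them. A minimal sketch (the metric here is a generic per-group score; group names are illustrative):

```python
def fairness_discrepancy(metric_by_group):
    """Relative gap between the best- and worst-served groups:
    (max - min) / max, or 0.0 when all scores are zero."""
    values = list(metric_by_group.values())
    hi, lo = max(values), min(values)
    return (hi - lo) / hi if hi else 0.0

def needs_alert(metric_by_group, threshold=0.15):
    """True when the gap exceeds the 15% policy threshold, i.e. when a
    CloudWatch alarm should fire."""
    return fairness_discrepancy(metric_by_group) > threshold
```

Publishing `fairness_discrepancy` as a CloudWatch custom metric turns the alerting requirement into a single static-threshold alarm.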
A financial services company uses multiple foundation models (FMs) through Amazon
Bedrock for its generative AI (GenAI) applications. To comply with a new regulation for
GenAI use with sensitive financial data, the company needs a token management solution.
The token management solution must proactively alert when applications approach model-specific
token limits. The solution must also process more than 5,000 requests each minute
and maintain token usage metrics to allocate costs across business units.
Which solution will meet these requirements?
A. Develop model-specific tokenizers in an AWS Lambda function. Configure the Lambda function to estimate token usage before sending requests to Amazon Bedrock. Configure the Lambda function to publish metrics to Amazon CloudWatch and trigger alarms when requests approach thresholds. Store detailed token usage in Amazon DynamoDB to report costs.
B. Implement Amazon Bedrock Guardrails with token quota policies. Capture metrics on rejected requests. Configure Amazon EventBridge rules to trigger notifications based on Amazon Bedrock Guardrails metrics. Use Amazon CloudWatch dashboards to visualize token usage trends across models.
C. Deploy an Amazon SQS dead-letter queue for failed requests. Configure an AWS Lambda function to analyze token-related failures. Use Amazon CloudWatch Logs Insights to generate reports on token usage patterns based on error logs from Amazon Bedrock API responses.
D. Use Amazon API Gateway to create a proxy for all Amazon Bedrock API calls. Configure request throttling based on custom usage plans with predefined token quotas. Configure API Gateway to reject requests that will exceed token limits.
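The proactive-alerting idea in option A can be sketched as a pre-flight check: estimate the request's tokens, compare against the target model's limit, and flag it before invocation. Both the limits table and the characters-per-token heuristic below are illustrative assumptions; a real implementation would use each FM's documented limit and its own tokenizer:

```python
# Illustrative per-model context limits; real values come from each FM's docs.
MODEL_TOKEN_LIMITS = {
    "anthropic.claude-3-sonnet": 200_000,
    "amazon.titan-text-express": 8_000,
}

def estimate_tokens(text):
    """Crude heuristic (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def check_request(model_id, prompt, alert_fraction=0.9):
    """Return (tokens, alert): alert=True when the request approaches the
    model-specific limit, so an alarm can fire before a rejection occurs."""
    limit = MODEL_TOKEN_LIMITS[model_id]
    tokens = estimate_tokens(prompt)
    return tokens, tokens >= alert_fraction * limit
```

Writing each `(model_id, business_unit, tokens)` record to DynamoDB then gives the per-business-unit usage data needed for cost allocation.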
Example Corp provides a personalized video generation service that millions of enterprise
customers use. Customers generate marketing videos by submitting prompts to the
company’s proprietary generative AI (GenAI) model. To improve output relevance and
personalization, Example Corp wants to enhance the prompts by using customer-specific
context such as product preferences, customer attributes, and business history.
The customers have strict data governance requirements. The customers must retain full
ownership and control over their own data. The customers do not require real-time access.
However, semantic accuracy must be high and retrieval latency must remain low to support
customer experience use cases.
Example Corp wants to minimize architectural complexity in its integration pattern. Example
Corp does not want to deploy and manage services in each customer’s environment unless
necessary.
Which solution will meet these requirements?
A. Ensure that each customer sets up an Amazon Q Business index that includes the customer’s internal data. Ensure that each customer designates Example Corp as a data accessor to allow Example Corp to retrieve relevant content by using a secure API to enrich prompts at runtime.
B. Use federated search with Model Context Protocol (MCP) by deploying real-time MCP servers for each customer. Retrieve data in real time during prompt generation.
C. Ensure that each customer configures an Amazon Bedrock knowledge base. Allow cross-account querying so Example Corp can retrieve structured data for prompt augmentation.
D. Configure Amazon Kendra to crawl customer data sources. Share the resulting indexes across accounts so Example Corp can query each customer’s Amazon Kendra index to retrieve augmentation data.
An ecommerce company operates a global product recommendation system that needs to
switch between multiple foundation models (FMs) in Amazon Bedrock based on regulations, cost optimization, and performance requirements. The company must apply custom
controls based on proprietary business logic, including dynamic cost thresholds, AWS
Region-specific compliance rules, and real-time A/B testing across multiple FMs.
The system must be able to switch between FMs without deploying new code. The system
must route user requests based on complex rules including user tier, transaction value,
regulatory zone, and real-time cost metrics that change hourly and require immediate
propagation across thousands of concurrent requests.
Which solution will meet these requirements?
A. Deploy an AWS Lambda function that uses environment variables to store routing rules and Amazon Bedrock FM IDs. Use the Lambda console to update the environment variables when business requirements change. Configure an Amazon API Gateway REST API to read request parameters to make routing decisions.
B. Deploy Amazon API Gateway REST API request transformation templates to implement routing logic based on request attributes. Store Amazon Bedrock FM endpoints as REST API stage variables. Update the variables when the system switches between models.
C. Configure an AWS Lambda function to fetch routing configurations from the AWS AppConfig Agent for each user request. Run business logic in the Lambda function to select the appropriate FM for each request. Expose the FM through a single Amazon API Gateway REST API endpoint.
D. Use AWS Lambda authorizers for an Amazon API Gateway REST API to evaluate routing rules that are stored in AWS AppConfig. Return authorization contexts based on business logic. Route requests to model-specific Lambda functions for each Amazon Bedrock FM.
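In option C, the Lambda function would fetch a JSON configuration from the local AWS AppConfig Agent endpoint and evaluate the routing rules in code, so rule changes propagate without a deployment. A sketch of the rule-evaluation half (the configuration schema and model IDs below are invented for illustration; the AppConfig fetch itself is replaced by a plain dict):

```python
def select_model(rules, request):
    """Pick an FM ID from routing rules, as they might be returned by the
    AppConfig Agent as JSON. First matching rule wins; a rule matches when
    every key in its 'match' clause equals the request attribute."""
    for rule in rules["rules"]:
        match = rule.get("match", {})
        if all(request.get(k) == v for k, v in match.items()):
            return rule["model_id"]
    return rules["default_model_id"]

# Hypothetical configuration document.
config = {
    "default_model_id": "amazon.titan-text-express",
    "rules": [
        {"match": {"regulatory_zone": "eu", "user_tier": "premium"},
         "model_id": "anthropic.claude-3-sonnet"},
        {"match": {"regulatory_zone": "eu"},
         "model_id": "anthropic.claude-3-haiku"},
    ],
}
print(select_model(config, {"regulatory_zone": "eu", "user_tier": "premium"}))
# → anthropic.claude-3-sonnet
```

Because the rules live in AppConfig rather than in code or stage variables, hourly changes reach all concurrent requests as soon as the agent's cache refreshes.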
A financial services company needs to build a document analysis system that uses
Amazon Bedrock to process quarterly reports. The system must analyze financial data,
perform sentiment analysis, and validate compliance across batches of reports. Each batch
contains 5 reports. Each report requires multiple foundation model (FM) calls. The solution
must finish the analysis within 10 seconds for each batch. Current sequential processing
takes 45 seconds for each batch.
Which solution will meet these requirements?
A. Use AWS Lambda functions with provisioned concurrency to process each analysis type sequentially. Configure the Lambda function timeouts to 10 seconds. Configure automatic retries with exponential backoff.
B. Use AWS Step Functions with a Parallel state to invoke separate AWS Lambda functions for each analysis type simultaneously. Configure Amazon Bedrock client timeouts. Use Amazon CloudWatch metrics to track execution time and model inference latency.
C. Create an Amazon SQS queue to buffer analysis requests. Deploy multiple AWS Lambda functions with reserved concurrency. Configure each Lambda function to process different aspects of each report sequentially and then combine the results.
D. Deploy an Amazon ECS cluster that runs containers that process each report sequentially. Use a load balancer to distribute batch workloads. Configure an auto-scaling policy based on CPU utilization.
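The fan-out in option B works because the three analyses are independent per report, so total latency approaches the slowest single call instead of the 45-second sum. A local sketch of the same fan-out using a thread pool (the `analyze` stub stands in for a Bedrock `InvokeModel` call):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(report, analysis_type):
    """Placeholder for one foundation model call against a report."""
    return f"{analysis_type} result for {report}"

def process_batch(reports, analysis_types=("financial", "sentiment", "compliance")):
    """Fan out every (report, analysis) pair concurrently, mirroring a
    Step Functions Parallel state, then collect all results."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        futures = {
            (r, a): pool.submit(analyze, r, a)
            for r in reports
            for a in analysis_types
        }
        return {key: f.result() for key, f in futures.items()}
```

With 5 reports and 3 analyses per batch, 15 concurrent calls replace 15 sequential ones, which is what brings a 45-second batch under the 10-second target (assuming individual FM calls complete within that window).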