DBS-C01 Practice Test Questions

200 Questions


A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three
Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one
medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database
Specialist did not assign a promotion tier to the replicas.
In the event of a primary failure, what will occur?


A.

Aurora will promote an Aurora Replica that is of the same size as the primary instance


B.

Aurora will promote an arbitrary Aurora Replica


C.

Aurora will promote the largest-sized Aurora Replica


D.

Aurora will not promote an Aurora Replica





C.
  

Aurora will promote the largest-sized Aurora Replica


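To avoid relying on the size-based tie-break at all, promotion tiers can be assigned explicitly. A minimal sketch with boto3, assuming hypothetical instance identifiers (lower tier numbers are promoted first):

```python
import boto3

rds = boto3.client("rds")

# Make the large replica the preferred failover target (tiers 0-15; lower wins).
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-large",  # hypothetical identifier
    PromotionTier=0,
    ApplyImmediately=True,
)

# Keep the medium replicas at a lower promotion priority.
for replica in ("aurora-replica-med-1", "aurora-replica-med-2"):
    rds.modify_db_instance(
        DBInstanceIdentifier=replica,
        PromotionTier=2,
        ApplyImmediately=True,
    )
```

With no tiers assigned, every replica sits in the same default tier, and Aurora breaks the tie by promoting the largest replica.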

An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The
database must be designed to support the following use cases:
Update scores in real time whenever a player is playing the game.
Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each
game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?


A.

Create a global secondary index with game_id as the partition key


B.

Create a global secondary index with user_id as the partition key


C.

Create a composite primary key with game_id as the partition key and user_id as the sort key


D.

Create a composite primary key with user_id as the partition key and game_id as the sort key





D.
  

Create a composite primary key with user_id as the partition key and game_id as the sort key


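A minimal sketch of the recommended key design with boto3 (table name and item values are hypothetical). The composite primary key lets a single GetItem fetch one player's score for one game session, and a Query on user_id returns all of that player's sessions:

```python
import boto3

ddb = boto3.client("dynamodb")

# Composite primary key: user_id (partition) + game_id (sort).
ddb.create_table(
    TableName="GameScores",  # hypothetical table name
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "game_id", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Real-time score update for one player in one game session.
ddb.update_item(
    TableName="GameScores",
    Key={"user_id": {"S": "user-123"}, "game_id": {"S": "game-456"}},
    UpdateExpression="SET score = :s",
    ExpressionAttributeValues={":s": {"N": "9500"}},
)

# Retrieve the score details for a specific game session.
item = ddb.get_item(
    TableName="GameScores",
    Key={"user_id": {"S": "user-123"}, "game_id": {"S": "game-456"}},
)
```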

An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application.
The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and
reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is
contributing to the increased costs. The company wants to control these costs without significant development
efforts.
How should a Database Specialist address these requirements?


A.

Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB


B.

Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into
Amazon Redshift


C.

Use an Amazon ElastiCache for Redis cluster in front of DynamoDB to boost read performance


D.

Use DynamoDB Accelerator to offload the reads





D.
  

Use DynamoDB Accelerator to offload the reads


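DAX keeps the DynamoDB API, so repointing the repeated reads needs little code change. A sketch using the amazon-dax-client package for Python (the cluster endpoint and table name are hypothetical; the client mirrors the low-level DynamoDB client):

```python
import botocore.session
from amazondax import AmazonDaxClient

session = botocore.session.get_session()

# Point the DAX client at the cluster endpoint; writes pass through to DynamoDB.
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],  # hypothetical
)

# Repeated GetItem calls for the same key are now served from the DAX item cache.
response = dax.get_item(
    TableName="Orders",  # hypothetical
    Key={"order_id": {"S": "ord-1001"}},
)
```

Because cached reads do not consume table read capacity, the repeated GetItem traffic stops driving up DynamoDB costs.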

A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon
ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a
replication group deployed with two additional replicas. The company is planning for a worldwide gaming
event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?


A.

Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted
Set across all nodes in the cluster.


B.

Increase the size of the ElastiCache cluster nodes to a larger instance size.


C.

Create an additional ElastiCache cluster and load-balance traffic between the two clusters.


D.

Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.





B.
  

Increase the size of the ElastiCache cluster nodes to a larger instance size.


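A sketch of scaling the existing replication group to a larger node type with boto3 (the replication group ID and target node type are hypothetical). With cluster mode disabled, all writes go to the single primary, so a bigger node is the main write-scaling lever:

```python
import boto3

elasticache = boto3.client("elasticache")

# Move the whole replication group (primary + replicas) to a larger node type.
elasticache.modify_replication_group(
    ReplicationGroupId="leaderboard-redis",  # hypothetical
    CacheNodeType="cache.r6g.2xlarge",       # hypothetical target size
    ApplyImmediately=True,
)
```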

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the
migration, the company discovered that there is a period of time every day around 3:00 PM when the response
time of the application is noticeably slower. The company has narrowed down the cause of this issue to the
database and not the application.
Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL
query?



A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and
disk space consumption. Watch these dashboards during the next slow period.


B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring
tool that will run reports based on the output error logs.


C.

Modify the logging database parameter to log all the queries related to locking in the database and
then check the logs after the next slow period for this information.


D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify
any queries that are related to spikes in the graph during the next slow period.





D.
  

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify
any queries that are related to spikes in the graph during the next slow period.


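A sketch of enabling Performance Insights and then pulling database load grouped by SQL statement, with boto3 (identifiers are hypothetical; note the pi client takes the instance's DbiResourceId, not its name):

```python
from datetime import datetime, timedelta
import boto3

rds = boto3.client("rds")
pi = boto3.client("pi")

# Enable Performance Insights on the instance (7-day retention is the free tier).
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-writer",  # hypothetical
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,
    ApplyImmediately=True,
)

# After the next 3:00 PM slow period, query db.load.avg grouped by SQL
# to see which statements drove the spike.
metrics = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # hypothetical DbiResourceId
    StartTime=datetime.utcnow() - timedelta(hours=2),
    EndTime=datetime.utcnow(),
    PeriodInSeconds=300,
    MetricQueries=[{"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql"}}],
)
```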

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance and
two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and
application performance is poor for several minutes. During this time, application servers in all Availability
Zones are healthy and responding normally.
What should the company do to eliminate this application performance issue?


A.

Configure both of the Aurora Replicas to the same instance class as the primary DB instance.
Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and
assign a failover priority of tier-1 to the replicas.


B.

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which
instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be
the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an
Amazon SNS topic to which the Lambda function is subscribed.


C.

Configure one Aurora Replica to have the same instance class as the primary DB instance.
Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the
primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for
the other replicas.


D.

Configure both Aurora Replicas to have the same instance class as the primary DB instance.
Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the
primary DB instance and to tier-1 for the replicas.





C.
  

Configure one Aurora Replica to have the same instance class as the primary DB instance.
Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the
primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for
the other replicas.


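A sketch of setting up cluster cache management with boto3 (all names are hypothetical). CCM is enabled through the apg_ccm_enabled cluster parameter, and both the writer and the designated same-sized failover target must sit in promotion tier 0:

```python
import boto3

rds = boto3.client("rds")

# Turn on cluster cache management in the cluster's custom parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-custom",  # hypothetical
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "on",
        "ApplyMethod": "immediate",
    }],
)

# Writer and the designated failover target (same instance class) in tier 0.
for instance in ("aurora-pg-writer", "aurora-pg-replica-xlarge"):  # hypothetical
    rds.modify_db_instance(
        DBInstanceIdentifier=instance,
        PromotionTier=0,
        ApplyImmediately=True,
    )

# The remaining replica stays in tier 1.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-replica-large",  # hypothetical
    PromotionTier=1,
    ApplyImmediately=True,
)
```

CCM keeps the tier-0 replica's buffer cache warm, so a failover to it does not start from a cold cache.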

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database
to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration.
The solution must also be cost-effective.
Which approach should the Database Specialist take?


A.

Dump all the tables from the Oracle database into an Amazon S3 bucket using Data Pump (expdp).
Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.


B.

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once
the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration
to migrate the data directly from Amazon S3 to Amazon RDS.


C.

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during
the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.


D.

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an
Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon
EC2 to an Aurora DB cluster.





C.
  

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during
the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.


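A sketch of the DMS task that performs the full load and then applies ongoing changes, with boto3 (all ARNs and the schema name are hypothetical; the source and target endpoints and the replication instance are assumed to exist already):

```python
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "APP", "table-name": "%"},  # hypothetical schema
        "rule-action": "include",
    }]
}

# full-load-and-cdc migrates existing data, then streams changes
# for a near-zero downtime cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-mysql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",  # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",  # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI",   # hypothetical
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```

AWS SCT handles the schema and code object conversion beforehand; DMS then keeps the Aurora target in sync until the application is switched over.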

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated
infrastructure for an Application team using a development AWS account. The team wants a deployment
method that will standardize the core solution components while managing environment-specific settings
separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?


A.

Organize common and environment-specific parameters hierarchically in the AWS Systems
Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation
template. Deploy the CloudFormation stack using the environment name as a parameter.


B.

Create a parameterized AWS CloudFormation template that builds the required objects. Keep
separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command
that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.


C.

Create a parameterized AWS CloudFormation template that builds the required objects. Import
the template into the CloudFormation interface in the AWS Management Console. Make the required
changes to the parameters and deploy the CloudFormation stack.


D.

Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the
required parameter values in a test event in the Lambda console for each environment that the
Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the
console.





A.
  

Organize common and environment-specific parameters hierarchically in the AWS Systems
Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation
template. Deploy the CloudFormation stack using the environment name as a parameter.


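A sketch of the parameter flow with boto3 (parameter names, template URL, and stack name are hypothetical). Environment-specific values live under per-environment paths in Parameter Store; the template reads them through SSM parameter types or {{resolve:ssm:...}} dynamic references, so only the environment name is passed at deploy time:

```python
import boto3

ssm = boto3.client("ssm")
cfn = boto3.client("cloudformation")

# Hierarchical, environment-specific settings (hypothetical names and values).
ssm.put_parameter(Name="/orders/dev/table-read-capacity", Value="5",
                  Type="String", Overwrite=True)
ssm.put_parameter(Name="/orders/prod/table-read-capacity", Value="50",
                  Type="String", Overwrite=True)

# One standardized template; only the environment name varies per deployment.
cfn.create_stack(
    StackName="orders-infra-dev",
    TemplateURL="https://s3.amazonaws.com/example-bucket/orders-infra.yaml",  # hypothetical
    Parameters=[{"ParameterKey": "EnvironmentName", "ParameterValue": "dev"}],
)
```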

The Security team for a finance company was notified of an internal security breach that happened 3 weeks
ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL
cluster for the Security team to use for monitoring and alerting. The Security team is required to perform
real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted
files to the chosen solution.
Which approach will meet these requirements?


A.

Use pgAudit to generate audit logs and send the logs to the Security team.


B.

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.


C.

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer
applications.


D.

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.





C.
  

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer
applications.


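A sketch of starting a database activity stream on the cluster with boto3 (the ARN and key alias are hypothetical). Activity records are encrypted with the given KMS key and pushed to an Amazon Kinesis data stream that the Security team's consumers can read in near real time:

```python
import boto3

rds = boto3.client("rds")

response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora-pg",  # hypothetical
    Mode="async",                       # async minimizes impact on database performance
    KmsKeyId="alias/aurora-audit-key",  # hypothetical KMS key
    ApplyImmediately=True,
)

# The name of the Kinesis stream carrying the encrypted activity records.
print(response["KinesisStreamName"])
```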

A company has an on-premises system that tracks various database operations that occur over the lifetime of a
database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy
these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?


A.

Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS.
Create an AWS Lambda function to act on these rules and write the output to the tracking systems.


B.

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API
calls and write the output to the tracking systems.


C.

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system
notifications.


D.

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to process these records and write the output to the tracking systems.





C.
  

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system
notifications.


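A sketch of an RDS event subscription with boto3 (the topic ARN and names are hypothetical). RDS publishes the lifecycle events to the SNS topic, and the tracking systems subscribe to the topic:

```python
import boto3

rds = boto3.client("rds")

rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",                   # hypothetical
    SnsTopicArn="arn:aws:sns:us-east-1:111122223333:db-events",  # hypothetical
    SourceType="db-instance",
    # Categories covering shutdown, creation, deletion, and backup events.
    EventCategories=["availability", "creation", "deletion", "backup"],
    Enabled=True,
)
```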

A global digital advertising company captures browsing metadata to contextually display relevant
images, pages, and links to targeted users. A single page load can generate multiple events that need to be
stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load
must query the user’s browsing history to provide targeting recommendations. The advertising company
expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The
structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written
and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?


A.

Amazon DocumentDB


B.

Amazon RDS Multi-AZ deployment


C.

Amazon DynamoDB global table


D.

Amazon Aurora Global Database





C.
  

Amazon DynamoDB global table


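A sketch of turning a table into a global table with boto3 (the table name is hypothetical; the replica Regions roughly match the user base in the question). The table needs streams enabled before replicas are added, and it must be ACTIVE between replica additions:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

ddb.create_table(
    TableName="BrowsingEvents",  # hypothetical
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
        {"AttributeName": "event_ts", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="BrowsingEvents")

# Add replicas near the user populations: Europe, Hong Kong, India.
for region in ("eu-west-1", "ap-east-1", "ap-south-1"):
    ddb.update_table(
        TableName="BrowsingEvents",
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
    # Wait for the table to return to ACTIVE before the next replica.
    ddb.get_waiter("table_exists").wait(TableName="BrowsingEvents")
```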

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL
DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large
objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with
an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data, using the
largest available replication instance.
How should the Database Specialist optimize the database migration using AWS DMS?


A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together


B.

Create two tasks: task 1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task 2 without LOBs


C.

Create two tasks: task 1 with LOB tables using limited LOB mode with a maximum LOB size of 500
MB and task 2 without LOBs


D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together





C.
  

Create two tasks: task 1 with LOB tables using limited LOB mode with a maximum LOB size of 500
MB and task 2 without LOBs



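A sketch of the task settings for the LOB task with boto3 (ARNs and the table pattern are hypothetical). In limited LOB mode, LobMaxSize is given in KB, so 500 MB is 512,000 KB; sizing it to the largest LOB avoids truncation while keeping limited mode's speed:

```python
import json
import boto3

dms = boto3.client("dms")

task_settings = {
    "TargetMetadata": {
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000,  # KB; 500 MB covers the largest LOB in the source
    }
}

# Task 1: only the LOB tables (the table pattern is hypothetical).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-lob-tables",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",  # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",  # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI",   # hypothetical
    MigrationType="full-load",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "LOB_%"},  # hypothetical
        "rule-action": "include",
    }]}),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```

A second task without the LOB tables runs in parallel, so the non-LOB data is not slowed down by LOB handling.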
