Data-Architect Practice Test Questions

257 Questions


A large retail company has recently chosen Salesforce as its CRM solution. They have the following record counts:
2,500,000 accounts
25,000,000 contacts
When doing an initial performance test, the data architect noticed extremely slow response times for reports and list views. What should a data architect do to solve the performance issue?


A.

Load only the data that the users are permitted to access.


B.

Add custom indexes on frequently searched Account and Contact object fields.


C.

Limit data loading to the 2000 most recently created records.


D.

Create a skinny table to represent account and contact objects.





B.
  

Add custom indexes on frequently searched Account and Contact object fields.



Explanation:

✅ B. Add custom indexes on frequently searched fields
When working with large data volumes (millions of records), query performance depends on how well the data can be indexed and filtered.
Salesforce uses selective filters and indexed fields to improve the performance of:
1. Reports
2. List views
3. SOQL queries
Adding custom indexes to commonly filtered fields (e.g., Email, Status, CreatedDate, or Custom Category fields) significantly improves performance by avoiding full table scans.
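A minimal SOQL sketch of the difference (the filter values and the Custom_Category__c field are hypothetical):

    // Selective: Email can be given a custom index by Salesforce Support
    SELECT Id, Name FROM Contact WHERE Email = 'jane.doe@example.com'

    // Non-selective: negative filters on unindexed fields force a full scan of 25,000,000 rows
    SELECT Id, Name FROM Contact WHERE Custom_Category__c != 'Archived'

Report and list view filters translate into queries against the same indexes, so the same selectivity rules apply.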

Why Not the Others?

❌ A. Load only data the user is permitted to access
While it’s good practice to enforce data access controls, this does not directly resolve performance issues for reports and views if queries are still non-selective or unindexed.
Also, Salesforce inherently applies user sharing rules when retrieving records.

❌ C. Limit loading to 2000 records
This defeats the purpose of using Salesforce to store and manage all relevant customer data.
Artificially limiting the data set prevents complete reporting and user access.

❌ D. Create a skinny table
Skinny tables are a backend performance optimization that Salesforce Support must create.
They are helpful but are not the first step. Custom indexes should be evaluated and implemented before requesting a skinny table.
Also, skinny tables don’t support all field types and aren’t automatically updated with schema changes.

Get Cloudy Consulting needs to evaluate the completeness and consistency of contact information in Salesforce. Their sales reps often have incomplete information about their accounts and contacts. Additionally, they are not able to interpret the information in a consistent manner. Get Cloudy Consulting has identified certain "key" fields which are important to their sales reps. What are two actions Get Cloudy Consulting can take to review their data for completeness and consistency? (Choose two.)


A.

Run a report which shows the last time the key fields were updated.


B.

Run one report per key field, grouped by that field, to understand its data variability.


C.

Run a report that shows the percentage of blanks for the important fields.


D.

Run a process that can fill in default values for blank fields.





B.
  

Run one report per key field, grouped by that field, to understand its data variability.



C.
  

Run a report that shows the percentage of blanks for the important fields.



Explanation:

Option B (✔️ Measures Consistency) – Grouping by key fields (e.g., "Country" or "Lead Source") reveals inconsistent formats (e.g., "USA" vs. "U.S.A").
Example: A report grouped by Phone field shows variations like "(123) 456-7890" vs. "1234567890".

Option C (✔️ Measures Completeness) – A blank-field report (e.g., matrix or summary report) quantifies missing data for key fields (e.g., "30% of Contacts are missing Email").
Example: add a row-level formula using ISBLANK() on the key field and summarize by it to get the percentage of blanks.
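The same two signals can also be pulled with aggregate SOQL queries, shown here only as a sketch and assuming Email and LeadSource are among the key fields:

    // Completeness: how many contacts are missing the key field entirely
    SELECT COUNT(Id) FROM Contact WHERE Email = null

    // Consistency: how many distinct values exist and how records spread across them
    SELECT LeadSource, COUNT(Id) FROM Contact GROUP BY LeadSource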

Why Not the Others?

Option A (❌ Less Actionable) – "Last updated" time doesn’t indicate if data is complete or consistent.
Option D (❌ Premature) – Default values should only be applied after assessing gaps (risks masking bad data).

Universal Containers wants to implement a data-quality process to monitor the data that users are manually entering into the system through the Salesforce UI. Which approach should the architect recommend?


A.

Allow users to import their data using the Salesforce Import tools.


B.

Utilize a 3rd-party solution from the AppExchange for data uploads.


C.

Utilize an app from the AppExchange to create data-quality dashboards.


D.

Use Apex to validate the format of phone numbers and postal codes.





C.
  

Utilize an app from the AppExchange to create data-quality dashboards.



Explanation:

✅ C. Utilize an AppExchange app for data-quality dashboards
This is the best approach for monitoring data quality.
Many AppExchange apps offer:
1. Data completeness dashboards
2. Field-level data validation tracking
3. Consistency checks
4. Trend analysis over time
These tools help visualize and report on data quality issues, making them ideal for identifying and improving user-entered data through the Salesforce UI.

Why Not the Others?

❌ A. Allow users to import data using Salesforce Import tools
This doesn’t address data quality monitoring; it’s a data entry method.
It could actually increase risk of bad data if not carefully controlled.

❌ B. Utilize a 3rd-party solution for data uploads
Again, this focuses on data loading, not monitoring.
While some 3rd-party tools offer cleansing, this doesn’t directly relate to user-entered UI data monitoring.

❌ D. Use Apex to validate phone/postal code formats
Apex validation is helpful for real-time field-level enforcement, but:
It’s narrow in scope (specific fields only).
It doesn’t provide monitoring, reporting, or dashboards.
It doesn't help track broader data quality metrics.
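For contrast, option D amounts to a narrow save-time format check. The hypothetical sketch below (the field choice and phone format are assumptions) rejects bad values on one field of one object, but produces no quality metrics or dashboards:

    // Hypothetical trigger: enforces a single phone format on Contact.Phone only
    trigger ContactPhoneFormat on Contact (before insert, before update) {
        for (Contact c : Trigger.new) {
            if (c.Phone != null && !Pattern.matches('\\d{3}-\\d{3}-\\d{4}', c.Phone)) {
                c.Phone.addError('Phone must be entered as 999-999-9999.');
            }
        }
    }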

Universal Containers (UC) has around 200,000 Customers (stored in the Account object). They get 1 or 2 Orders every month from each Customer. Orders are stored in a custom object called "Order__c"; this has about 50 fields. UC is expecting growth of 10% year-over-year. What are two considerations an architect should take into account to improve the performance of SOQL queries that retrieve data from the Order__c object? (Choose 2 answers)


A.

Use SOQL queries without WHERE conditions.


B.

Work with Salesforce Support to enable Skinny Tables.


C.

Reduce the number of triggers on the Order__c object.


D.

Make the queries more selective using indexed fields.





B.
  

Work with Salesforce Support to enable Skinny Tables.



D.
  

Make the queries more selective using indexed fields.



Explanation:

✅ B. Enable Skinny Tables
Skinny Tables are a Salesforce-managed optimization that improves read/query performance on large objects by storing frequently queried fields in a smaller, more efficient table.
Ideal when you have objects with many fields (like Order__c with 50+ fields) but only need to query a subset.
You must request them through Salesforce Support.

✅ D. Use selective queries with indexed fields
Salesforce optimizes SOQL queries by using indexes.
Making queries selective means using WHERE clauses that filter on indexed and highly selective fields, reducing the number of records scanned.
This is especially critical as the data volume grows (with 200,000 customers and millions of order records).
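A sketch of what a selective query against Order__c might look like (the Account__c lookup and Status__c field are assumed custom fields, and :accountId is an Apex bind variable):

    // Lookup fields and CreatedDate are indexed by default, so both filters are selective
    SELECT Id, Name, Status__c
    FROM Order__c
    WHERE Account__c = :accountId
      AND CreatedDate = LAST_N_DAYS:30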

Why Not the Others?

❌ A. Use SOQL queries without WHERE conditions
This is the opposite of good practice. Queries without WHERE clauses are non-selective and will result in full table scans, which can hit governor limits or cause timeouts.

❌ C. Reduce number of triggers on Order__c
While too many triggers can impact DML performance, this is not directly related to SOQL query performance.
Also, it's a development hygiene concern rather than a data access optimization.

Universal Containers (UC) has a very large and complex Salesforce org with hundreds of validation rules and triggers. The triggers are responsible for system updates and data manipulation as records are created or updated by users. A majority of the automation tools within UC's org were not designed to run during a data load. UC is importing 100,000 records into Salesforce across several objects over the weekend. What should a data architect do to mitigate any unwanted results during the import?


A. Ensure validation rules, triggers and other automation tools are disabled.


B. Ensure duplicate and matching rules are defined.


C. Import the data in smaller batches over a 24-hour period.


D. Bulkify the trigger to handle import loads.





A.
  Ensure validation rules, triggers and other automation tools are disabled.

Explanation:

Option A (✔️ Critical for Bulk Loads) – Disabling validation rules, triggers, and workflows during bulk data loads prevents:
1. Unintended automation (e.g., trigger-driven updates skewing data).
2. Validation errors blocking records (e.g., required field checks).
3. Performance bottlenecks from cascading automation.
How to disable: temporarily deactivate validation rules, flows, and workflow rules, and give triggers a bypass switch (e.g., a custom setting or custom permission checked at the top of the trigger) so they can be skipped for the load user; a sketch of such a bypass follows below.
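A minimal sketch of a trigger bypass, assuming a hypothetical hierarchy custom setting Data_Load_Settings__c with a checkbox field Bypass_Triggers__c that an admin enables only for the load user or profile:

    trigger AccountTrigger on Account (before insert, before update) {
        // Hierarchy custom setting resolves to the running user's or profile's value
        Data_Load_Settings__c settings = Data_Load_Settings__c.getInstance();
        if (settings != null && settings.Bypass_Triggers__c) {
            return; // skip automation while the weekend import runs
        }
        // ... normal trigger logic ...
    }

Validation rules and flows can reference the same setting in their entry criteria, so the whole bypass is controlled from one place.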

Why Not the Others?

Option B (❌ Off-Topic) – Duplicate rules help prevent dupes but don’t address automation conflicts.
Option C (❌ Inefficient) – Smaller batches reduce errors but don’t solve automation interference.
Option D (❌ Risky) – Bulkifying triggers is a general best practice, but it doesn’t prevent unwanted automation during imports.

Universal Containers (UC) wants to store product data in Salesforce, but the standard Product object does not support the more complex hierarchical structure which is currently being used in the product master system. How can UC modify the standard Product object model to support a hierarchical data structure in order to synchronize product data from the source system to Salesforce?


A.

Create a custom lookup field on the standard Product to reference the child record in the hierarchy.


B.

Create a custom lookup field on the standard Product to reference the parent record in the hierarchy.


C.

Create a custom master-detail field on the standard Product to reference the child record in the hierarchy.


D.

Create an Apex trigger to synchronize the Product Family standard picklist field on the Product object.





B.
  

Create a custom lookup field on the standard Product to reference the parent record in the hierarchy.



Explanation:

Option B (✔️ Best Practice) – A custom lookup field on the Product2 object (e.g., Parent_Product__c) allows:
1. Hierarchical relationships (e.g., "Laptop" as the parent of "Battery" and "Charger").
2. Flexibility: Unlike master-detail, lookup relationships don’t cascade delete and allow products to exist independently.
3. Sync compatibility: Matches how most external product master systems structure hierarchies (parent-child references).
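Because the lookup is a self-relationship on Product2, the hierarchy can be read directly in SOQL during synchronization (Parent_Product__c is the hypothetical field name used above):

    // Walk up the hierarchy from each child product (child-to-parent traversal supports up to five levels)
    SELECT Id, Name, Parent_Product__r.Name, Parent_Product__r.Parent_Product__r.Name
    FROM Product2
    WHERE Parent_Product__c != null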

Why Not the Others?

Option A (❌ Backward Logic) – A child-reference lookup would require multiple fields (e.g., Child_Product_1__c, Child_Product_2__c), which is impractical.
Option C (❌ Overly Restrictive) – Master-detail fields enforce ownership/cascade deletion, which is unnecessary for product hierarchies.
Option D (❌ Irrelevant) – The Product Family picklist is for categorization, not hierarchical relationships.

Universal Containers (UC) is concerned that data is being corrupted daily either through negligence or maliciousness. They want to implement a backup strategy to help recover any corrupted data or data mistakenly changed or even deleted. What should the data architect consider when designing a field-level audit and recovery plan?


A.

Reduce data storage by purging old data.


B.

Implement an AppExchange package.


C.

Review projected data storage needs.


D.

Schedule a weekly export file.





B.
  

Implement an AppExchange package.



Explanation:

✅ B. Implement an AppExchange package

To track field-level changes and support data recovery, you need a comprehensive audit and backup solution.
Several AppExchange packages (like OwnBackup, Spanning, or Odaseva) offer:
1. Automated daily backups
2. Field-level change tracking
3. Restore capabilities (record-level and field-level)
4. Audit history beyond Salesforce’s native field history limitations
This is the most scalable, automated, and reliable approach for enterprises concerned about data corruption or loss.

Why Not the Others?

❌ A. Reduce data storage by purging old data
While managing storage is important, purging data does not help with recovery or auditing.
In fact, it can make things worse if critical data is removed before being backed up.

❌ C. Review projected data storage needs
Important for long-term planning, but it doesn’t provide any recovery or auditing capability.
It’s a capacity exercise, not a backup strategy.

❌ D. Schedule a weekly export file
Native Salesforce weekly data export provides only a basic backup.
It does not track field-level changes, deletions, or provide a quick restore mechanism.
Also, weekly frequency may be insufficient for detecting or responding to daily corruption.

Ursa Major Solar's legacy system has a quarterly accounts receivable report that compiles data from the following:
- Accounts
- Contacts
- Opportunities
- Orders
- Order Line Items
Which issue will an architect have when implementing this in Salesforce?


A.

Custom report types CANNOT contain Opportunity data.


B.

Salesforce does NOT support Orders or Order Line Items.


C.

Salesforce does NOT allow more than four objects in a single report type.


D.

A report CANNOT contain data from Accounts and Contacts.





C.
  

Salesforce does NOT allow more than four objects in a single report type.



Explanation:

Option C (✔️ True Limitation) – Salesforce report types can include a maximum of four objects (due to joins in the underlying SQL query).
Example: You could link Account → Opportunity → Order → Order Line Item, but cannot add Contact as a fifth object.

Why Not the Others?

Option A (❌ False) – Custom report types can include Opportunity data (e.g., Account + Opportunity).
Option B (❌ False) – Salesforce supports Orders (Order object) and Order Line Items (OrderItem object) as standard objects.
Option D (❌ False) – Reports can combine Account and Contact data (e.g., "Accounts with Contacts" report type).

Universal Containers (UC) plans to implement consent management for its customers to be compliant with General Data Protection Regulation (GDPR). UC has the following requirements:
UC uses Person Accounts and Contacts in Salesforce for its customers.
Data Protection and Privacy is enabled in Salesforce.
Consent should be maintained in both these objects.
UC plans to verify the consent provided by customers before contacting them through email or phone.
Which option should the data architect recommend to implement these requirements?


A.

Configure custom fields in Person Account and Contact to store consent provided by customers, and validate consent against the fields.


B.

Build a custom object to store consent information for Person Accounts and Contacts, and validate against this object before contacting customers.


C.

Use the Consent Management feature to validate the consent provided by the customer under the Person Account and Contact.


D.

Delete contact information from customers who have declined consent to be contacted.





C.
  

Use the Consent Management feature to validate the consent provided by the customer under the Person Account and Contact.



Explanation:

Option C (✔️ Best Practice) – Salesforce’s native Consent Management feature (part of Data Protection & Privacy) is designed for GDPR compliance and:
1. Centralizes consent tracking for Person Accounts and Contacts (using the Individual object).
2. Automates validation (e.g., checking consent status and effective/expiration dates before emails or calls).
3. Integrates with Marketing Cloud and Service Cloud for enforcement.
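As an illustration only (the exact consent objects and fields depend on which Data Protection and Privacy options are enabled, so verify the field names in the org), a pre-contact check can read the standard opt-out flags alongside the related Individual record:

    // HasOptedOutOfEmail and DoNotCall are standard Contact fields;
    // the Individual lookup becomes available once Data Protection and Privacy is enabled
    SELECT Id, HasOptedOutOfEmail, DoNotCall, Individual.HasOptedOutSolicit
    FROM Contact
    WHERE Id = :contactId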

Why Not the Others?

Option A (❌ Manual & Risky) – Custom fields work but lack automation (e.g., no expiration checks) and require custom code for validation.
Option B (❌ Redundant) – A custom object duplicates the Individual object’s functionality and adds maintenance overhead.
Option D (❌ Non-Compliant) – Deleting data violates GDPR’s "right to access" (records must be retained for audits).

Northern Trail Outfitters (NTO) has recently implemented Salesforce to track opportunities across all their regions. NTO sales teams across all regions have historically managed their sales process in Microsoft Excel. NTO sales teams are complaining that their data from the Excel files were not migrated as part of the implementation and NTO is now facing low Salesforce adoption. What should a data architect recommend to increase Salesforce adoption?


A. Use the Excel connector to Salesforce to sync data from individual Excel files.


B. Define a standard mapping and train sales users to import opportunity data


C. Load data into an external database and provide sales users access to the database.


D. Create a chatter group and upload all Excel files to the group.





B.
  Define a standard mapping and train sales users to import opportunity data

Explanation:

Option B (✔️ Sustainable Solution) – This approach:
1. Standardizes the process: Provides clear guidelines for mapping Excel columns to Salesforce fields (e.g., "Excel 'Deal Size' → Salesforce 'Amount'").
2. Empowers users: Training sales teams to self-import data (via Data Import Wizard or Data Loader) reduces dependency on IT.
3. Encourages adoption: Users retain control over their data while transitioning to Salesforce.

Why Not the Others?

Option A (❌ Fragile) – Excel connectors require manual file maintenance and risk data silos (e.g., outdated/local Excel files).
Option C (❌ Counterproductive) – External databases defeat the purpose of Salesforce and create new silos.
Option D (❌ Inefficient) – Uploading Excel files to Chatter does not migrate data to Salesforce objects.

NTO has decided that it is going to build a channel sales portal with the following requirements:
1. External resellers are able to authenticate to the portal with a login.
2. Lead data, opportunity data, and order data are available to authenticated users.
3. Authenticated users may need to run reports and dashboards.
4. There is no need for more than 10 custom objects or additional file storage.
Which Community Cloud license type should a data architect recommend to meet the portal requirements?


A.

Customer community.


B.

Lightning external apps starter.


C.

Customer community plus.


D.

Partner community.





D.
  

Partner community.



Explanation:

Option D (✔️ Best Fit) – Partner Community licenses are designed for external resellers and support:
1. Authentication: Secure logins for partners/resellers.
2. Access to Leads, Opportunities, Orders: Full CRUD access (critical for channel sales).
3. Reports & Dashboards: Run and customize reports (not available in lighter licenses).
4. Custom Objects: Supports up to 10 custom objects (matches requirements).

Why Not the Others?

Option A (❌ Too Limited) – Customer Community lacks Lead/Opportunity access and does not include reports and dashboards.
Option B (❌ Not a Portal) – Lightning External Apps Starter is for limited API access, not full portal UIs.
Option C (❌ Partial Fit) – Customer Community Plus adds roles, sharing, and reports and dashboards, but it still lacks the Lead and Opportunity access that resellers need; that requires a Partner Community license.

A large retail B2C customer wants to build a 360° view of its customers for its call center agents. Customer interactions are currently maintained in the following systems:
1. Salesforce CRM
2. Customer Master Data Management (MDM)
3. Contract Management system
4. Marketing solution
What should a data architect recommend to help uniquely identify customers across multiple systems?


A.

Store the Salesforce ID in all the solutions to identify the customer.


B.

Create a custom object that will serve as a cross reference for the customer id.


C.

Create a customer database and use this ID in all systems.


D.

Create a custom field as an external ID to maintain the customer ID from the MDM solution.





D.
  

Create a custom field as an external ID to maintain the customer ID from the MDM solution.



Explanation:

Option D (✔️ Best Practice) – Using the MDM’s customer ID as an external ID in Salesforce ensures:
1. Single Source of Truth: MDM is the authoritative system for customer identity.
2. Cross-System Sync: Salesforce and other systems (contracts, marketing) can reference the same ID.
3. Integration Flexibility: Enables easy matching during data loads (e.g., using upsert with the external ID).
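A minimal sketch of point 3, assuming a hypothetical external ID field MDM_Customer_Id__c on Account:

    // The upsert matches on the MDM key, so re-loading the same customer
    // updates the existing Account instead of creating a duplicate
    List<Account> customers = new List<Account>{
        new Account(Name = 'Acme Corp', MDM_Customer_Id__c = 'MDM-000123')
    };
    upsert customers Account.Fields.MDM_Customer_Id__c;

Middleware and ETL tools can use the same external ID with the REST or Bulk API upsert operations, so every system keys off the MDM identifier.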

Why Not the Others?

Option A (❌ Salesforce-Centric) – Storing Salesforce IDs in other systems creates dependency on Salesforce (MDM should own the master ID).
Option B (❌ Overhead) – A cross-reference object adds complexity and risks sync delays.
Option C (❌ Redundant) – Creating a new database for IDs contradicts the purpose of an existing MDM.

