Data-Architect Practice Test Questions

257 Questions


A company wants to document the data architecture of a Salesforce organization. What are two valid metadata types that should be included? (Choose two.)


A.

RecordType


B.

Document


C.

CustomField


D.

SecuritySettings





A.
  

RecordType



C.
  

CustomField



Explanation:

✅ A. RecordType – Defines different business processes, picklist values, and page layouts for the same object, making it crucial for understanding data structure and behavior.
✅ C. CustomField – Represents custom data fields created in Salesforce, which are fundamental to documenting the organization's unique data model.

Why Others Fail:

❌ B. Document: Although Document is a metadata type, it only stores static files and says nothing about how the org's data is structured.
❌ D. SecuritySettings: Though important for access control, security settings are more about permissions than data structure.

Due to security requirements, Universal Containers needs to capture specific user actions, such as login, logout, file attachment download, package install, etc. What is the recommended approach for defining a solution for this requirement?


A.

Use a field audit trail to capture field changes.


B.

Use a custom object and trigger to capture changes.


C.

Use Event Monitoring to capture these changes.


D.

Use a third-party AppExchange app to capture changes.





C.
  

Use Event Monitoring to capture these changes.



Explanation:

Event Monitoring is Salesforce's native solution for tracking user activity logs, including logins, logouts, file downloads, package installs, and other security-related events. It provides detailed API-based analytics without requiring custom code or third-party tools.

Why Others Fail:

A. Field Audit Trail → Only tracks field-level changes, not user actions like logins or file downloads.
B. Custom Object & Trigger → Requires manual development, may miss critical system events, and is harder to maintain.
D. Third-Party AppExchange App → Adds unnecessary cost & complexity when Salesforce already offers Event Monitoring.
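
For the Event Monitoring answer above, the activity data is exposed through the EventLogFile object and its LogFile CSV content. Below is a minimal sketch of pulling login event logs through the REST API with Python's requests library; the instance URL and access token are placeholders, and the org is assumed to have the Event Monitoring license and permissions.

```python
# Minimal sketch: pulling Event Monitoring data via the EventLogFile object.
# INSTANCE_URL and ACCESS_TOKEN are placeholders (e.g., obtained via OAuth).
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
API_VERSION = "v58.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Query yesterday's login-related log files.
soql = ("SELECT Id, EventType, LogDate, LogFileLength "
        "FROM EventLogFile WHERE EventType = 'Login' AND LogDate = YESTERDAY")
resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers=HEADERS,
    params={"q": soql},
)
resp.raise_for_status()

# Each record's LogFile field is a CSV blob served from the sObject endpoint.
for rec in resp.json()["records"]:
    log = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/sobjects/EventLogFile/{rec['Id']}/LogFile",
        headers=HEADERS,
    )
    print(rec["EventType"], rec["LogDate"], len(log.content), "bytes")
```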

DreamHouse Realty has a Salesforce deployment that manages Sales, Support, and Marketing efforts in a multi-system ERP environment. The company recently reached the limits of native reports and dashboards and needs options for providing more analytical insights. What are two approaches an Architect should recommend? (Choose two.)


A.

Weekly Snapshots


B.

Einstein Analytics


C.

Setup Audit Trails


D.

AppExchange Apps





B.
  

Einstein Analytics



D.
  

AppExchange Apps



Explanation:

B. Einstein Analytics (now CRM Analytics, formerly Tableau CRM)
Advanced Analytics: Provides AI-powered dashboards, predictive insights, and interactive data exploration beyond standard reports.
Multi-System Integration: Pulls data from Salesforce and external ERP systems into a unified analytics platform.

D. AppExchange Apps
Pre-Built Solutions: Apps like Tableau, Power BI, or Domo offer specialized reporting/analytics without custom development.
ERP Integration: Many apps connect natively to multi-system environments (e.g., SAP, Oracle).

Why Others Fail:

A. Weekly Snapshots: Only captures historical data at fixed intervals—no real-time insights or advanced analytics.
C. Setup Audit Trails: Tracks admin changes, not business data for analytical use.

A Salesforce customer has plenty of data storage. Sales Reps are complaining that searches are bringing back old records that aren't relevant any longer. Sales Managers need the data for their historical reporting. What strategy should a data architect use to ensure a better user experience for the Sales Reps?


A.

Create a Permission Set to hide old data from Sales Reps.


B.

Use Batch Apex to archive old data on a rolling nightly basis.


C.

Archive and purge old data from Salesforce on a monthly basis.


D.

Set data access to Private to hide old data from Sales Reps.





B.
  

Use Batch Apex to archive old data on a rolling nightly basis.



Explanation:

✅ B. Use Batch Apex to archive old data
This approach helps maintain historical data needed by Sales Managers while reducing clutter for Sales Reps.
Archiving involves moving older, less relevant records to:
1. A custom object
2. A different storage layer (e.g., Big Objects or external system)
Batch Apex is ideal for processing large volumes of data in the background, and running it nightly ensures data is continuously maintained.

Why Others Fail:

A. Permission Sets / D. Private Data Access: Restricting visibility doesn't reduce the volume of data in the org, and Sales Reps who own or otherwise have access to the old records would still see them in search results.
C. Monthly Purges: Deleting data risks losing historical insights Sales Managers need.
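
The recommended answer runs this logic as a scheduled Batch Apex job inside Salesforce. Purely to illustrate the same nightly query → copy-to-archive → purge flow, here is a hedged external-script sketch over the REST API; the object, the two-year cutoff, and the CSV file standing in for a Big Object or external store are all assumptions.

```python
# Hedged illustration of the nightly archive pattern (query -> copy -> purge).
# In the recommended solution this logic lives in a scheduled Batch Apex class;
# this external version only illustrates the flow. Credentials are placeholders.
import csv
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
API = f"{INSTANCE_URL}/services/data/v58.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Select the aged records (assumed rule: Opportunities closed > 730 days ago).
#    A real job would page through nextRecordsUrl or use the Bulk API for volume.
soql = ("SELECT Id, Name, CloseDate, Amount FROM Opportunity "
        "WHERE IsClosed = true AND CloseDate < LAST_N_DAYS:730")
records = requests.get(f"{API}/query", headers=HEADERS,
                       params={"q": soql}).json()["records"]

# 2. Copy them to the archive location (a CSV file stands in for a Big Object
#    or external datastore here).
fields = ["Id", "Name", "CloseDate", "Amount"]
with open("opportunity_archive.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for rec in records:
        writer.writerow({k: rec.get(k) for k in fields})

# 3. Purge the archived records in groups of up to 200 ids
#    (sObject Collections delete endpoint).
ids = [rec["Id"] for rec in records]
for i in range(0, len(ids), 200):
    requests.delete(f"{API}/composite/sobjects", headers=HEADERS,
                    params={"ids": ",".join(ids[i:i + 200])})
```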

UC has multiple SF orgs that are distributed across regional branches. Each branch stores local customer data inside its org’s Account and Contact objects. This creates a scenario where UC is unable to view customers across all orgs. UC has an initiative to create a 360-degree view of the customer, as UC would like to see Account and Contact data from all orgs in one place. What should a data architect suggest to achieve this 360-degree view of the customer?


A.

Consolidate the data from each org into a centralized datastore


B.

Use Salesforce Connect’s cross-org adapter.


C.

Build a bidirectional integration between all orgs.


D.

Use an ETL tool to migrate gap Accounts and Contacts into each org.





A.
  

Consolidate the data from each org into a centralized datastore



Explanation:

When you have customer data spread across multiple Salesforce orgs, the best way to create a unified customer view is to extract the data into a centralized data warehouse or data lake. This allows for holistic reporting, avoids the complexity of real-time cross-org integration, and supports advanced analytics. A warehouse such as Snowflake or Amazon Redshift can hold the consolidated data, with a BI tool such as Tableau layered on top to deliver the unified view.
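
A minimal sketch of the consolidation pattern is shown below, assuming two regional orgs and using a local SQLite table as a stand-in for the central warehouse; the org URLs, tokens, and field list are placeholders.

```python
# Minimal sketch: consolidating Account data from several orgs into one central
# store (SQLite stands in for a warehouse such as Snowflake or Amazon Redshift).
import sqlite3
import requests

ORGS = {  # placeholder connection details, one entry per regional org
    "emea": ("https://emea.my.salesforce.com", "TOKEN_EMEA"),
    "apac": ("https://apac.my.salesforce.com", "TOKEN_APAC"),
}

conn = sqlite3.connect("customer_360.db")
conn.execute("""CREATE TABLE IF NOT EXISTS account
                (source_org TEXT, sf_id TEXT, name TEXT, billing_country TEXT,
                 PRIMARY KEY (source_org, sf_id))""")

soql = "SELECT Id, Name, BillingCountry FROM Account"
for org, (url, token) in ORGS.items():
    resp = requests.get(f"{url}/services/data/v58.0/query",
                        headers={"Authorization": f"Bearer {token}"},
                        params={"q": soql})
    # Upsert each Account keyed by (source org, Salesforce Id).
    for rec in resp.json().get("records", []):
        conn.execute("INSERT OR REPLACE INTO account VALUES (?, ?, ?, ?)",
                     (org, rec["Id"], rec["Name"], rec.get("BillingCountry")))
conn.commit()
```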

Universal Containers is setting up an external Business Intelligence (BI) system and wants to extract 1,000,000 Contact records. What should be recommended to avoid timeouts during the export process?


A.

Use the SOAP API to export data.


B.

Utilize the Bulk API to export the data.


C.

Use GZIP compression to export the data.


D.

Schedule a Batch Apex job to export the data.





B.
  

Utilize the Bulk API to export the data.



Explanation:

The Bulk API is designed for large data volumes. It's asynchronous and processes records in batches (up to 10,000 per batch), helping avoid governor limits and timeouts common with the REST or SOAP APIs. It’s the most efficient and scalable method for exporting millions of records.
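
As a hedged illustration, the sketch below exports Contacts through a Bulk API 2.0 query job (create the job, poll until it completes, then page through the CSV results via the Sforce-Locator header); the instance URL and access token are placeholders.

```python
# Minimal sketch: exporting Contact records with a Bulk API 2.0 query job.
# The job runs asynchronously on the Salesforce side, so the client creates it,
# polls until it completes, and then downloads the CSV result pages.
import time
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
API = f"{INSTANCE_URL}/services/data/v58.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}",
           "Content-Type": "application/json"}

# 1. Create the query job.
job = requests.post(f"{API}/jobs/query", headers=HEADERS, json={
    "operation": "query",
    "query": "SELECT Id, FirstName, LastName, Email FROM Contact",
}).json()

# 2. Poll until Salesforce finishes processing the job.
while True:
    state = requests.get(f"{API}/jobs/query/{job['id']}",
                         headers=HEADERS).json()["state"]
    if state in ("JobComplete", "Failed", "Aborted"):
        break
    time.sleep(10)

# 3. Download the results page by page using the Sforce-Locator header.
locator = None
with open("contacts.csv", "wb") as out:
    while locator != "null":
        params = {"maxRecords": 50000}
        if locator:
            params["locator"] = locator
        resp = requests.get(f"{API}/jobs/query/{job['id']}/results",
                            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                            params=params)
        out.write(resp.content)
        locator = resp.headers.get("Sforce-Locator", "null")
```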

Northern Trail Outfitters needs to implement an archive solution for Salesforce data. This archive solution needs to help NTO do the following:
1. Remove outdated information not required on a day-to-day basis.
2. Improve Salesforce performance.
Which solution should be used to meet these requirements?


A.

Identify a location to store archived data and use scheduled batch jobs to migrate and purge the aged data on a nightly basis.


B.

Identify a location to store archived data, and move data to the location using a time-based workflow.


C.

Use a formula field that shows true when a record reaches a defined age and use that field to run a report and export a report into SharePoint.


D.

Create a full copy sandbox, and use it as a source for retaining archived data.





A.
  

Identify a location to store archived data and use scheduled batch jobs to migrate and purge the aged data on a nightly basis.



Explanation:

Scheduled batch jobs (A) are the proper archival method because they can systematically identify and move outdated records to a separate storage location (like BigObjects) on a nightly basis. This maintains data accessibility for reporting while improving system performance. Workflows (B) can't handle large data volumes, manual exports (C) aren't automated, and sandboxes (D) aren't designed for archival purposes.

Universal Containers (UC) has a data model as shown in the image. The Project object has a private sharing model, and it has Roll-Up summary fields to calculate the number of resources assigned to the project, total hours for the project, and the number of work items associated with the project. What should the architect consider, knowing there will be a large number of time entry records to be loaded regularly from an external system into Salesforce.com?


A.

Load all data using external IDs to link to parent records.


B.

Use workflow to calculate summary values instead of Roll-Up.


C.

Use triggers to calculate summary values instead of Roll-Up.


D.

Load all data after deferring sharing calculations.





D.
  

Load all data after deferring sharing calculations.



Explanation:

Private sharing models can trigger expensive sharing rule recalculations during data loads. By deferring sharing calculations (the Defer Sharing Calculations option in Setup, which may need to be enabled for the org), you significantly improve load performance. Once the data is loaded, you can run the sharing recalculation manually.

Universal Containers has a legacy system that captures Conferences and Venues. These Conferences can occur at any Venue. They create hundreds of thousands of Conferences per year. Historically, they have only used 20 Venues. Which two things should the data architect consider when denormalizing this data model into a single Conference object with a Venue picklist? (Choose 2 answers)


A.

Limitations on master-detail relationships.


B.

Org data storage limitations.


C.

Bulk API limitations on picklist fields.


D.

Standard list view in-line editing.





B.
  

Org data storage limitations.



D.
  

Standard list view in-line editing.



Explanation:

When converting to a picklist, consider storage limits (B) since picklists consume less space than thousands of duplicate venue records, and list view editing (D) because picklists allow faster in-line updates than lookups. Master-detail limitations (A) don't apply here, and Bulk API (C) handles picklists normally. This optimization balances usability with system performance.

A large telecommunication provider that provides internet services to both residences and businesses has the following attributes:
A customer who purchases its services for their home will be created as an Account in Salesforce.
Individuals within the same house address will be created as Contacts in Salesforce.
Businesses are created as Accounts in Salesforce.
Some of the customers have services at both their home and business.
What should a data architect recommend for a single view of these customers without creating multiple customer records?


A.

Customers are created as Contacts and related to Business and Residential Accounts using the Account Contact Relationships.


B.

Customers are created as Person Accounts and related to Business and Residential Accounts using the Account Contact relationship.


C.

Customers are created as Individual objects and related to Accounts for Business and Residence accounts.


D.

Customers are created as Accounts for the Residence Account and use Parent Account to relate the Business Account.





A.
  

Customers are created as Contacts and related to Business and Residential Accounts using the Account Contact Relationships.



Explanation:

Account Contact Relationships (ACR) allow a single Contact to be related to multiple Accounts. This is ideal for modeling scenarios where individuals are connected to both business and residential accounts, maintaining a single source of truth per person.
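
A minimal sketch of creating such an indirect relationship through the REST API is shown below; the record Ids, instance URL, and token are placeholders, and the org is assumed to have Contacts to Multiple Accounts enabled in Setup.

```python
# Minimal sketch: relating one Contact to a second Account with an
# AccountContactRelation (junction) record. All Ids and credentials are
# placeholders.
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}",
           "Content-Type": "application/json"}

# The Contact keeps its primary (residential) Account via Contact.AccountId;
# the indirect relationship to the business Account is a separate junction record.
payload = {
    "ContactId": "003xx0000000001AAA",   # placeholder Contact Id
    "AccountId": "001xx0000000002AAA",   # placeholder business Account Id
    "Roles": "Decision Maker",
    "IsActive": True,
}
resp = requests.post(
    f"{INSTANCE_URL}/services/data/v58.0/sobjects/AccountContactRelation",
    headers=HEADERS,
    json=payload,
)
print(resp.status_code, resp.json())
```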

NTO needs to extract 50 million records from a custom object every day from its Salesforce org. NTO is facing query timeout issues while extracting these records. What should a data architect recommend in order to get around the timeout issue?


A.

Use a custom auto number and formula field and use that to chunk records while extracting data.


B.

Use the REST API to extract data, as it automatically chunks records by 200.


C.

Use ETL tool for extraction of records.


D.

Ask Salesforce support to increase the query timeout value.





A.
  

Use a custom auto number and formula field and use that to chunk records while extracting data.



Explanation:

Querying 50 million records at once often leads to timeouts. Chunking on a sequential value, such as a numeric formula field based on a custom auto number (or on CreatedDate), lets each extract query return a bounded, manageable range of records, which keeps individual queries performant and avoids the timeout.
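
A hedged sketch of the chunking approach follows; the object name (Invoice__c), the numeric formula field (Record_Number__c), and the chunk size are purely illustrative assumptions.

```python
# Hedged sketch of chunked extraction. Assumes the custom object has an auto
# number mirrored into a numeric formula field (called Record_Number__c here
# purely for illustration) so bounded ranges can be queried.
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
API = f"{INSTANCE_URL}/services/data/v58.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

CHUNK_SIZE = 250_000          # assumed range size per extract query
TOTAL_RECORDS = 50_000_000    # from the scenario

for start in range(0, TOTAL_RECORDS, CHUNK_SIZE):
    soql = (
        "SELECT Id, Name FROM Invoice__c "                 # hypothetical object
        f"WHERE Record_Number__c >= {start} "
        f"AND Record_Number__c < {start + CHUNK_SIZE}"
    )
    resp = requests.get(f"{API}/query", headers=HEADERS, params={"q": soql})
    resp.raise_for_status()
    data = resp.json()
    # Each bounded query stays small enough to avoid the timeouts a single
    # 50M-row extract would hit; a real extract would page each range through
    # nextRecordsUrl or submit it as its own Bulk API query job.
    print(f"range {start}-{start + CHUNK_SIZE}: {data['totalSize']} records")
```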

NTO (Northern Trail Outfitters) has a complex Salesforce org which has been developed over the past 5 years. Internal users are complaining about multiple data issues, including incomplete and duplicate data in the org. NTO has decided to engage a data architect to analyze and define data quality standards. Which 3 key factors should a data architect consider while defining data quality standards? Choose 3 answers:


A.

Define data duplication standards and rules


B.

Define key fields in staging database for data cleansing


C.

Measure data timeliness and consistency


D.

Finalize an extract transform load (ETL) tool for data migration


E.

Measure data completeness and accuracy





A.
  

Define data duplication standards and rules



C.
  

Measure data timeliness and consistency



E.
  

Measure data completeness and accuracy



Explanation:

The architect should focus on duplication rules (A), timeliness/consistency (C), and completeness/accuracy (E) as these represent core data quality dimensions. Staging fields (B) and ETL tools (D) are implementation methods rather than quality standards. These three areas address the root causes of NTO's data issues.
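
As a small, hedged illustration of measuring completeness (one of the quality dimensions above), the sketch below profiles a sample of Contact records; the chosen key fields, the 2,000-record sample, and the credentials are assumptions.

```python
# Minimal sketch: measuring completeness for a few key Contact fields.
# A real assessment would profile all records (e.g., via a Bulk API extract).
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...!AQ..."                              # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

KEY_FIELDS = ["Email", "Phone", "MailingCountry"]  # assumed "key" fields
soql = f"SELECT {', '.join(KEY_FIELDS)} FROM Contact LIMIT 2000"
records = requests.get(
    f"{INSTANCE_URL}/services/data/v58.0/query",
    headers=HEADERS,
    params={"q": soql},
).json()["records"]

# Completeness = share of sampled records where the field is populated.
total = len(records) or 1
for field in KEY_FIELDS:
    populated = sum(1 for rec in records if rec.get(field))
    print(f"{field}: {populated / total:.1%} complete")
```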

