Salesforce-Platform-Data-Architect Practice Test Questions

257 Questions


Universal Containers has implemented Sales Cloud to manage patient and related health records. During a recent security audit of the system, it was discovered that some standard and custom fields need to be encrypted. Which solution should a data architect recommend to encrypt the existing fields?


A.

Use the Apex Crypto class to encrypt custom and standard fields.


B.

Implement classic encryption to encrypt custom and standard fields.


C.

Implement Shield Platform Encryption to encrypt custom and standard fields.


D.

Export data out of Salesforce and encrypt custom and standard fields.





C.
  

Implement Shield Platform Encryption to encrypt custom and standard fields.



Explanation:

✅ C. Implement Shield Platform Encryption
Shield Platform Encryption is designed for encrypting both standard and custom fields at rest. It works natively in Salesforce and supports compliance needs like HIPAA and GDPR.

❌ A. Apex Crypto Class
Not usable on standard fields and not integrated with the Salesforce platform security model.

❌ B. Classic Encryption
Limited to custom text fields and lacks flexibility or compatibility with modern Salesforce features.

❌ D. Export and encrypt
This approach is insecure, breaks platform trust, and is not real-time. It doesn’t protect data in Salesforce.

As part of a phased Salesforce rollout, there will be three deployments spread out over the year. The requirements have been carefully documented. Which two methods should an architect use to trace configuration changes back to the detailed requirements? Choose 2 answers


A.

Review the setup audit trail for configuration changes.


B.

Put the business purpose in the Description of each field.


C.

Maintain a data dictionary with the justification for each field.


D.

Use the Force.com IDE to save the metadata files in source control.





B.
  

Put the business purpose in the Description of each field.



C.
  

Maintain a data dictionary with the justification for each field.



Explanation:

✅ B (Field Descriptions): Documents business purpose directly in setup metadata (visible to admins).

✅ C (Data Dictionary): External tracker maps fields to requirements/justifications.

❌ A: Audit trails log changes but not reasons.

❌ D: Source control tracks code, not business context.
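
For teams that keep the data dictionary (option C) outside Salesforce, it can be bootstrapped from the field Descriptions captured under option B. Below is a minimal Python sketch, assuming you already hold an OAuth access token; it reads each field's Description from the Tooling API's FieldDefinition object and writes a CSV. The instance URL, token, and API version are placeholders.

```python
import csv
import requests

# Assumptions: INSTANCE_URL and ACCESS_TOKEN come from your own OAuth flow;
# the API version is illustrative.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "00D...sessiontoken"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def export_data_dictionary(sobject: str, out_file: str) -> None:
    """Write the API name, label, and Description of every field on one
    object to CSV, using the Tooling API's FieldDefinition object."""
    soql = (
        "SELECT QualifiedApiName, Label, Description "
        "FROM FieldDefinition "
        f"WHERE EntityDefinition.QualifiedApiName = '{sobject}'"
    )
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/v58.0/tooling/query",
        headers=HEADERS,
        params={"q": soql},
    )
    resp.raise_for_status()
    with open(out_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Field API Name", "Label", "Description"])
        for field in resp.json()["records"]:
            writer.writerow(
                [field["QualifiedApiName"], field["Label"],
                 field.get("Description") or ""]
            )

export_data_dictionary("Account", "account_data_dictionary.csv")
```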

A large automobile manufacturer has decided to use Salesforce as its CRM. It needs to maintain the following dealer types in their CRM:
Local dealers
Regional distributor
State distributor
Service dealer
The attributes are different for each of the customer types. The CRM users should be allowed to enter only attributes related to the customer types. The processes and business rules for each of the customer types could be different. How should the different dealers be maintained in Salesforce?


A.

Use Accounts for dealers, and create record types for each of the dealer types.


B.

Create dealers as Accounts, and build custom views for each of the dealer types.


C.

Use Accounts for dealers and a custom picklist field for the dealer types.


D.

Create custom objects for each dealer type and custom fields for dealer attributes.





A.
  

Use Accounts for dealers, and create record types for each of the dealer types.



Explanation:

✅ A. Use Accounts with record types for each dealer type
This allows shared functionality across dealers while enabling customization (layouts, picklists, processes) per dealer type. It supports reporting, security, and automation out of the box.

❌ B. Custom views only
This limits extensibility—can’t change page layout, validation rules, or automation per dealer type.

❌ C. Picklist for type
Insufficient flexibility. A picklist does not support layout or process differentiation like record types.

❌ D. Custom objects per dealer
Overkill. This creates redundant structures and complicates reporting and maintenance.

Which API should a data architect use when exporting 1 million records from Salesforce?


A.

Bulk API


B.

REST API


C.

Streaming API


D.

SOAP API





A.
  

Bulk API



Explanation:

✅ A. Bulk API
Designed for handling large data volumes. It processes records in batches, is optimized for throughput, and makes efficient use of API limits, making it the best tool for exporting a million or more records.

❌ B. REST API
Not optimized for large volumes. Record limits and rate limits make it inefficient for 1M+ records.

❌ C. Streaming API
Used for event notifications, not data export.

❌ D. SOAP API
Slower and subject to strict limits. Not ideal for high-volume data operations.
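
To make option A concrete, here is a minimal Python sketch of an export via a Bulk API 2.0 query job, assuming you already hold an OAuth access token. The instance URL, token, API version, and SOQL are placeholders, and paging of very large result sets (the Sforce-Locator header) is noted but not implemented.

```python
import time
import requests

# Assumptions: INSTANCE_URL and ACCESS_TOKEN come from your own OAuth flow;
# the API version and SOQL are illustrative.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "00D...sessiontoken"
JOBS_URL = f"{INSTANCE_URL}/services/data/v58.0/jobs/query"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}",
           "Content-Type": "application/json"}

# 1. Create the Bulk API 2.0 query job; Salesforce runs the export server-side.
job = requests.post(JOBS_URL, headers=HEADERS, json={
    "operation": "query",
    "query": "SELECT Id, Name, Industry FROM Account",
}).json()

# 2. Poll until the job finishes.
while True:
    state = requests.get(f"{JOBS_URL}/{job['id']}", headers=HEADERS).json()["state"]
    if state in ("JobComplete", "Failed", "Aborted"):
        break
    time.sleep(10)

# 3. Download the CSV results. (For very large result sets Salesforce pages
#    the output; follow the Sforce-Locator response header for later pages.)
if state == "JobComplete":
    results = requests.get(f"{JOBS_URL}/{job['id']}/results", headers=HEADERS)
    with open("accounts.csv", "wb") as f:
        f.write(results.content)
```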

Universal Containers has successfully migrated 50 million records into five different objects multiple times in a full copy sandbox. The Integration Engineer wants to re-run the test a month before go-live in Production. What is the recommended approach to re-run the test?


A.

Truncate all 5 objects quickly and re-run the data migration test.


B.

Refresh the full copy sandbox and re-run the data migration test.


C.

Hard delete all 5 objects’ data and re-run the data migration test.


D.

Truncate all 5 objects and hard delete before running the migration test.





B.
  

Refresh the full copy sandbox and re-run the data migration test.



Explanation:

Refreshing the sandbox:
1. Resets to a clean production copy
2. Automates data purge (avoids manual deletion limits)
3. Tests against current org state

Rejected Options:

❌ A/C/D: Manual truncation or hard deletion risks leaving partial data behind and hitting deletion limits.

Universal Containers is creating a new B2C service offering for consumers to ship goods across continents. This is in addition to their well-established B2B offering. Their current Salesforce org uses the standard Account object to track B2B customers. They are expecting to have over 50,000,000 consumers over the next five years across their 50 business regions. B2C customers will be individuals. Household data is not required to be stored. What is the recommended data model for consumer account data to be stored in Salesforce?


A.

Use the Account object with Person Accounts and a new B2C page layout.


B.

Use the Account object with a newly created Record Type for B2C customers.


C.

Create a new picklist value for B2C customers on the Account Type field.


D.

Use 50 umbrella Accounts for each region, with customers as associated Contacts.





A.
  

Use the Account object with Person Accounts and a new B2C page layout.



Explanation:

✅ A. Use Person Accounts with a B2C layout
Person Accounts are built for individual customers and are scalable to tens of millions. With appropriate indexing and partitioning, they suit B2C use cases where the "Contact is the Account".

❌ B. New Record Type on Account
Does not allow individual-centric data model. Not suitable for representing consumers as standalone entities.

❌ C. Picklist field
Too limited and doesn’t provide layout/process/data separation for B2C.

❌ D. Umbrella Accounts + Contacts
Artificial hierarchy and not scalable. This breaks Salesforce’s conceptual model and complicates sharing/reporting.
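
For reference, once Person Accounts are enabled, a consumer is created as an Account record that exposes Contact fields directly. A minimal Python sketch, assuming Person Accounts are enabled in the org; the record type Id, instance URL, and token are placeholders.

```python
import requests

# Assumptions: Person Accounts are enabled, and the record type Id,
# instance URL, and token below are placeholders.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
HEADERS = {"Authorization": "Bearer 00D...sessiontoken",
           "Content-Type": "application/json"}

# A person account is created as an Account that carries Contact fields
# (FirstName, LastName, PersonEmail) directly, via a person account record type.
resp = requests.post(
    f"{INSTANCE_URL}/services/data/v58.0/sobjects/Account",
    headers=HEADERS,
    json={
        "RecordTypeId": "012000000000AAA",  # a person account record type Id
        "FirstName": "Ada",
        "LastName": "Lovelace",
        "PersonEmail": "ada@example.com",
    },
)
print(resp.json())  # returns the new person account's Id on success
```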

Two million Opportunities need to be loaded in different batches into Salesforce using the Bulk API in parallel mode. What should an Architect consider when loading the Opportunity records?


A.

Use the Name field values to sort batches.


B.

Order batches by Auto-number field.


C.

Create indexes on Opportunity object text fields.


D.

Group batches by the AccountId field.





D.
  

Group batches by the AccountId field.



Explanation:

✅ D. Group batches by the AccountId field.
When loading with the Bulk API in parallel mode, contention on parent records (such as the Account referenced by AccountId) can cause record-locking errors. Grouping batches by AccountId keeps records that share a parent in the same batch, so parallel batches do not compete for the same Account lock, reducing contention and improving throughput.

❌ A. Use the Name field – Sorting by Name has no performance impact for bulk processing.
❌ B. Order by Auto-number field – Similar to Name, Auto-number ordering won’t reduce locking or improve performance.
❌ C. Index text fields – Indexing helps in querying, not in bulk loading.
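
A practical way to apply this is to pre-sort the extract before building batches. Below is a minimal, library-independent Python sketch that chunks records so every row sharing an AccountId lands in the same batch; the record dicts and batch size are illustrative.

```python
from itertools import groupby

def batches_grouped_by_account(records, batch_size=10_000):
    """Chunk Opportunity rows so that all rows sharing an AccountId land in
    the same batch, keeping two parallel batches from contending for a lock
    on the same parent Account row."""
    records = sorted(records, key=lambda r: r["AccountId"])
    batch = []
    for _, rows in groupby(records, key=lambda r: r["AccountId"]):
        rows = list(rows)
        # Close the current batch if this Account's rows would overflow it.
        # (An Account with more rows than batch_size still stays together;
        # such extreme parents are better loaded in serial mode.)
        if batch and len(batch) + len(rows) > batch_size:
            yield batch
            batch = []
        batch.extend(rows)
    if batch:
        yield batch

# Usage sketch: opportunities is a list of dicts destined for the Bulk API.
opportunities = [
    {"Name": "Opp 1", "AccountId": "001000000000AAA", "StageName": "Prospecting"},
    {"Name": "Opp 2", "AccountId": "001000000000BBB", "StageName": "Prospecting"},
    {"Name": "Opp 3", "AccountId": "001000000000AAA", "StageName": "Prospecting"},
]
for batch in batches_grouped_by_account(opportunities, batch_size=2):
    print(len(batch), {r["AccountId"] for r in batch})
```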

Northern Trail Outfitters (NTO) wants to capture a list of customers that have bought a particular product. The solution architect has recommended creating a custom object for products and a lookup relationship between customers and products. Products will be modeled as a custom object (NTO_Product__c), and customers are modeled as person accounts. Every NTO product may have millions of customers looking up a single product, resulting in lookup skew. What should a data architect suggest to mitigate issues related to lookup skew?


A.

Create multiple similar products and distribute the skew across those products.


B.

Change the lookup relationship to master-detail relationship.


C.

Create a custom object to maintain the relationship between products and customers.


D.

Select Clear the value of this field option while configuring the lookup relationship.





C.
  

Create a custom object to maintain the relationship between products and customers.



Explanation:

✅ Correct Answer: C. Create a custom object to maintain the relationship between products and customers.
Modeling the many-to-many relationship with a junction object (like ProductCustomer__c) breaks the direct lookup from millions of customers to one product, effectively eliminating lookup skew.

❌ A. Multiple product records – Splitting a product into duplicates just to avoid skew creates data integrity issues.

❌ B. Master-detail – Master-detail still enforces ownership, potentially leading to the same skew issue.

❌ D. “Clear the value” option – Only applies when deleting the referenced record, not for preventing skew.

Universal Containers (UC) has lead assignment rules to assign leads to owners. Leads not routed by assignment rules are assigned to a dummy user. Sales reps are complaining of high load times and issues with accessing leads assigned to the dummy user. What should a data architect recommend to solve these performance issues?


A.

Assign the dummy user to the lowest role in the role hierarchy


B.

Create multiple dummy users and assign leads to them


C.

Assign the dummy user to the highest role in the role hierarchy


D.

Periodically delete leads to reduce the number of leads





B.
  

Create multiple dummy users and assign leads to them



Explanation:

✅ Correct Answer: B. Create multiple dummy users and assign leads to them.
When too many records are owned by a single user, performance degradation (called ownership skew) can occur. Distributing leads across multiple dummy users spreads the load and improves performance.

❌ A. Lowest role – Doesn’t help with ownership-based sharing calculations.

❌ C. Highest role – Could increase visibility and sharing recalculations, worsening the issue.

❌ D. Deleting leads – Doesn’t solve root cause and may lose valuable data.
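
The redistribution itself is easy to script. Below is a minimal Python sketch of the round-robin assignment; the lead and user Ids are hypothetical, and at this volume the resulting updates belong in a Bulk API update job rather than record-by-record calls.

```python
from itertools import cycle

def distribute_unrouted_leads(lead_ids, dummy_user_ids):
    """Round-robin unrouted leads across several dummy users so that no
    single owner accumulates enough records to cause ownership skew."""
    owners = cycle(dummy_user_ids)
    return [{"Id": lead_id, "OwnerId": next(owners)} for lead_id in lead_ids]

# Usage sketch (Ids are hypothetical); push the resulting updates through a
# Bulk API update job.
updates = distribute_unrouted_leads(
    ["00Q000000000001", "00Q000000000002", "00Q000000000003"],
    ["005000000000AAA", "005000000000BBB"],
)
print(updates)
```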

Northern Trail Outfitters (NTO) wants to start a loyalty program to reward repeat customers. The program will track every item a customer has bought and grants them points for discounts. The following conditions will exist upon implementation:
Data will be used to drive marketing and product development initiatives.
NTO estimates that the program will generate 100 million rows of data monthly.
NTO will use Salesforce's Einstein Analytics and Discovery to leverage their data and make business and marketing decisions. What should the Data Architect do to store, collect, and use the reward program data?


A.

Create a custom big object in Salesforce which will be used to capture the Reward Program data for consumption by Einstein.


B.

Have Einstein connect to the point of sales system to capture the Reward Program data.


C.

Create a big object in Einstein Analytics to capture the Loyalty Program data.


D.

Create a custom object in Salesforce that will be used to capture the Reward Program data.





A.
  

Create a custom big object in Salesforce which will be used to capture the Reward Program data for consumption by Einstein.



Explanation:

✅ A. Create a custom big object in Salesforce
Big Objects are the best choice for storing high-volume, immutable data like reward transactions. They are optimized for massive scale, can be queried via SOQL against their indexed fields, and are compatible with Einstein Analytics.

❌ B. Connect Einstein directly – Not scalable or reliable for data storage and historical queries.

❌ C. Big Objects don’t exist in Einstein Analytics – They exist in the core platform, not Einstein.

❌ D. Custom object – Not scalable for 100M+ rows/month and hits storage limits.
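
One constraint of option A worth noting: big objects are queried with SOQL that filters on the fields of the custom index, in index order. Below is a minimal Python sketch in which the big object Reward_Txn__b, its fields, and the credentials are all hypothetical.

```python
import requests

# Assumptions: the big object Reward_Txn__b, its fields, and the credentials
# are hypothetical; Customer__c is the leading field of the custom index.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
HEADERS = {"Authorization": "Bearer 00D...sessiontoken"}

customer_id = "001000000000AAA"  # hypothetical person account Id
# Big-object SOQL must filter on the custom index fields, in index order.
soql = (
    "SELECT Customer__c, Product__c, Points__c "
    "FROM Reward_Txn__b "
    f"WHERE Customer__c = '{customer_id}'"
)
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v58.0/query",
    headers=HEADERS,
    params={"q": soql},
)
for row in resp.json()["records"]:
    print(row["Product__c"], row["Points__c"])
```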

Universal Containers (UC) is a business that works directly with individual consumers (B2C). They are moving from a current home-grown CRM system to Salesforce. UC has about one million consumer records. What should the architect recommend for optimal use of Salesforce functionality and also to avoid data loading issues?


A.

Create a custom object Individual_Consumer__c to load all individual consumers.


B.

Load all individual consumers as Account records and avoid using the Contact object.


C.

Load one Account record and one Contact record for each individual consumer.


D.

Create one Account and load individual consumers as Contacts linked to that one Account.





C.
  

Load one Account record and one Contact record for each individual consumer.



Explanation:

Person Accounts (Account + Contact merged):
1. Optimize B2C modeling with dedicated layouts.
2. Avoid "Contact sprawl" (Option D’s single Account with 1M Contacts creates roll-up skew).
3. Enable standard features (e.g., Opportunities).

Rejected options:

A: Custom objects lose native CRM functions.
B/D: Bloat Accounts or create hierarchy issues.

Universal Containers (UC) uses Salesforce for tracking opportunities (Opportunity). UC uses an internal ERP system for tracking deliveries and invoicing. The ERP system supports SOAP API and OData for bi-directional integration between Salesforce and the ERP system. UC has about one million opportunities. For each opportunity, UC sends 12 invoices, one per month. UC sales reps have requirements to view current invoice status and invoice amount from the opportunity page. When creating an object to model invoices, what should the architect recommend, considering performance and data storage space?


A.

Use Streaming API to get the current status from the ERP and display on the Opportunity page.


B.

Create an external object Invoice__x with a Lookup relationship with Opportunity.


C.

Create a custom object Invoice__c with a master-detail relationship with Opportunity.


D.

Create a custom object Invoice__c with a Lookup relationship with Opportunity.





B.
  

Create an external object Invoice__x with a Lookup relationship with Opportunity.



Explanation:

✅ B. Create an external object Invoice__x with a Lookup relationship to Opportunity
External objects (via Salesforce Connect) let you access invoice data in real time from the ERP without consuming Salesforce storage. With 12 invoices per opportunity, storing them all in Salesforce (12 million+ records) would be inefficient.

❌ A. Streaming API – Good for real-time updates but doesn’t store or display data.

❌ C & D. Custom object – Would consume a lot of storage and create maintenance overhead.

