Universal Containers has implemented Sales Cloud to manage patient and related health records. During a recent security audit of the system, it was discovered that some standard and custom fields need to be encrypted. Which solution should a data architect recommend to encrypt the existing fields?
A.
Use the Apex Crypto class to encrypt custom and standard fields.
B.
Implement classic encryption to encrypt custom and standard fields.
C.
Implement Shield Platform Encryption to encrypt custom and standard fields.
D.
Export data out of Salesforce and encrypt custom and standard fields.
Implement Shield Platform Encryption to encrypt custom and standard fields.
Explanation:
This question evaluates the understanding of native, scalable encryption solutions on the Salesforce platform, particularly for sensitive data like health records.
Why C is Correct: Shield Platform Encryption is Salesforce's native, managed solution for encrypting data at rest. It provides:
➡️ Comprehensive Coverage: It can encrypt a wide range of standard and custom field types (text, number, email, etc.) without changing how users or apps interact with the data (it's transparent encryption).
➡️ Security and Compliance: It is specifically designed to help meet stringent compliance requirements like HIPAA for healthcare data, which is implied by "patient and health records."
➡️ Manageability: It is administered through point-and-click setup in the Salesforce UI, making it manageable for administrators without code.
Why A is Incorrect (Apex Crypto Class): The Apex Crypto class is used for encrypting data in transit (e.g., in a callout) or for storing encrypted data in a text field temporarily. It is not a solution for encrypting data at rest in standard and custom fields across an entire org. It would require a massive, custom-coded rewrite of all data access patterns and is not a feasible or secure solution for this requirement.
Why B is Incorrect (Classic Encryption): Classic Encryption only supports a dedicated custom encrypted text field type (up to 175 characters). It cannot encrypt existing standard fields or other custom field types, so it does not meet the requirement.
Why D is Incorrect (Export data and encrypt): Exporting sensitive data like health records to an external system to encrypt it is a major security risk and compliance violation. It exposes the data during the export process and breaks the security and audit trail within Salesforce. The encryption must happen within the secure confines of the Salesforce platform.
Reference: Shield Platform Encryption is the standard and recommended answer for any question about encrypting field data at rest in Salesforce, especially for regulated industries.
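For contrast, here is a minimal Python sketch (using the third-party cryptography package) of what the do-it-yourself approach in option A would imply: the team owns key management and must wrap every read and write, and the stored ciphertext is no longer filterable or reportable. Shield Platform Encryption avoids all of this by encrypting transparently at rest.

```python
from cryptography.fernet import Fernet

# Illustrative only: roughly what option A (custom field encryption) would demand.
key = Fernet.generate_key()     # key generation, storage, and rotation are now your problem
cipher = Fernet(key)

stored_value = cipher.encrypt(b"123-45-6789")   # ciphertext is what the field would hold
plaintext = cipher.decrypt(stored_value)        # every read path must decrypt explicitly
```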
As part of a phased Salesforce rollout, there will be 3 deployments spread out over the year. The requirements have been carefully documented. Which two methods should an architect use to trace back configuration changes to the detailed requirements? Choose 2 answers
A.
Review the setup audit trail for configuration changes.
B.
Put the business purpose in the Description of each field.
C.
Maintain a data dictionary with the justification for each field.
D.
Use the Force.com IDE to save the metadata files in source control.
Put the business purpose in the Description of each field.
Maintain a data dictionary with the justification for each field.
Explanation:
This question addresses the principles of data governance, documentation, and maintaining clarity between business requirements and technical implementation over time.
Why B is Correct: Using the Description field on every object and field is a fundamental and easily accessible form of documentation. It is stored directly in the metadata, making it visible to any admin or developer working in the org's setup. This provides immediate context for what a field is for and why it exists, directly linking it to its business requirement.
Why C is Correct: A Data Dictionary is the comprehensive, single source of truth for an organization's data assets. It provides a detailed view that goes beyond the field description, including information like data owners, data sensitivity, approved values, and the specific business requirement that justified the field's creation. This is essential for tracing changes back to original requirements, especially during a long, phased project.
Why A is Incorrect (Setup Audit Trail): The Setup Audit Trail is a fantastic tool for tracking who made a change when and what the change was. However, it does not track the why. It will show that a field was created, but it cannot trace that action back to the detailed business requirement that justified it.
Why D is Incorrect (Save metadata in source control): Using source control (e.g., Git) is a development best practice for tracking changes to metadata over time and managing deployments. However, like the Setup Audit Trail, it tracks the what and the how of a change, not the business reason why the change was made. The requirement justification must be documented within the metadata itself (Description) or in a companion document (Data Dictionary).
Reference: A core responsibility of a Data Architect is to ensure data is well-documented and traceable. This is achieved by embedding documentation in the org (Descriptions) and maintaining external governance artifacts (Data Dictionary).
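As a practical illustration, a minimal Python sketch (assuming the simple-salesforce library and placeholder credentials) that seeds a data dictionary CSV from an object's describe metadata. The business-justification column is left for the team to maintain, since field-level Descriptions are not returned by describe and would need a Tooling or Metadata API export.

```python
import csv
from simple_salesforce import Salesforce   # third-party client, assumed available

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")  # placeholder credentials

# Seed a data-dictionary CSV from Account field metadata.
fields = sf.Account.describe()["fields"]
with open("data_dictionary_account.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["API Name", "Label", "Type", "Business Justification"])
    for field in fields:
        # Justification and requirement ID are filled in by the team.
        writer.writerow([field["name"], field["label"], field["type"], ""])
```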
A large automobile manufacturer has decided to use Salesforce as its CRM. It needs to maintain the following dealer types in their CRM:
Local dealers
Regional distributor
State distributor
Service dealer
The attributes are different for each of the customer types. The CRM users should be allowed to enter only attributes related to the customer types. The processes and business rules for each of the customer types could be different. How should the different dealers be maintained in Salesforce?
A.
Use Accounts for dealers, and create record types for each of the dealer types.
B.
Create dealers as Accounts, and build custom views for each of the dealer types.
C.
Use Accounts for dealers and a custom picklist field for each of the dealer types.
D.
Create custom objects for each dealer type and custom fields for dealer attributes.
Use Accounts for dealers, and create record types for each of the dealer types.
Explanation:
Option A is the best and most scalable solution. A Record Type is specifically designed for this exact scenario. It allows you to use a single standard object, like the Account object, to represent different types of records that share a common purpose (being a "dealer" or "customer"). For each record type (Local dealer, Regional distributor, etc.), you can:
➡️ Display different fields by assigning different page layouts, ensuring users only see the attributes relevant to that dealer type.
➡️ Present different picklist values for the same field (e.g., different values for a "dealer status" picklist).
➡️ Implement different business processes or stages (e.g., a different sales process for a "regional distributor" vs. a "service dealer"). This is a fundamental best practice for data architecture on the Salesforce platform, enabling a streamlined user experience while centralizing data for reporting and analysis.
Option B is incorrect. Custom views only filter and display data that already exists; they cannot enforce different page layouts or business rules for data entry, nor can they hide fields.
Option C is incorrect. A custom picklist field to differentiate dealer types would not solve the problem of showing different attributes for each type. Users would still see all fields for all dealer types on a single page layout, leading to a cluttered interface and potential data entry errors.
Option D is incorrect. Creating a separate custom object for each dealer type is a poor data model choice. While it would allow for different fields and rules, it would create data silos. This would make reporting, automation, and overall data management across all dealers extremely difficult. For example, to find all dealers regardless of type, you would have to run a separate report for each object and combine them.
References:
Salesforce Help & Training: The official documentation on Record Types provides a clear definition and use cases that perfectly match this question's requirements.
Trailhead - Data Modeling: The "Record Types" module in Trailhead's Data Modeling trails explains how record types can be used to tailor the user experience and business processes on a single object.
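For example, once record types are in place, reporting and automation can still span all dealers on the single Account object and filter by type when needed. A small sketch (simple-salesforce assumed; 'Service_Dealer' is a hypothetical record type developer name):

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# One query can target a single dealer type...
service_dealers = sf.query(
    "SELECT Id, Name FROM Account "
    "WHERE RecordType.DeveloperName = 'Service_Dealer'"
)["records"]

# ...or span every dealer type at once, which option D's separate objects cannot do.
all_dealers = sf.query_all("SELECT Id, Name, RecordType.Name FROM Account")["records"]
```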
Which API should a data architect use if exporting 1 million records from Salesforce?
A.
Bulk API
B.
REST API
C.
Streaming API
D.
SOAP API
Bulk API
Explanation:
✅ A. Bulk API
Designed for handling large data volumes. It processes records asynchronously in batches, is optimized for throughput, and minimizes API call consumption, making it the best tool for exporting a million or more records.
❌ B. REST API
REST API and SOAP API are typically used for real-time, smaller-scale transactions (e.g., creating a single record, retrieving a few records) and are not optimized for millions of records. Using them for this purpose would be slow, prone to timeouts, and would likely hit governor limits.
❌ C. Streaming API
Streaming API is designed for real-time, event-based data. It's used to receive notifications when changes are made to Salesforce data (e.g., a new record is created), not for exporting existing data in bulk.
❌ D. SOAP API
Slower and subject to strict limits. Not ideal for high-volume data operations.
References:
Salesforce Developer Documentation: The Salesforce API guide explicitly states that the Bulk API is the recommended tool for dealing with large data volumes (typically more than 2,000 records).
Trailhead - Integration Basics: The modules on Salesforce APIs and integration patterns define the specific use cases for each API, with the Bulk API clearly identified for large-scale data transfers.
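As a rough sketch of the export itself, the Bulk API 2.0 exposes query jobs over REST: create the job, poll until Salesforce finishes processing it asynchronously, then download the CSV results. The instance URL, API version, and access token below are placeholders.

```python
import time
import requests

BASE = "https://yourInstance.my.salesforce.com/services/data/v59.0"  # placeholder org
HEADERS = {"Authorization": "Bearer <access_token>",                 # placeholder token
           "Content-Type": "application/json"}

# 1. Create a Bulk API 2.0 query job for the records to export.
job = requests.post(f"{BASE}/jobs/query", headers=HEADERS, json={
    "operation": "query",
    "query": "SELECT Id, Name, Email FROM Contact",
}).json()

# 2. Poll the job until Salesforce has finished processing it.
while True:
    state = requests.get(f"{BASE}/jobs/query/{job['id']}", headers=HEADERS).json()["state"]
    if state in ("JobComplete", "Failed", "Aborted"):
        break
    time.sleep(10)

# 3. Download the results as CSV (large result sets are paged via the Sforce-Locator header).
results_csv = requests.get(f"{BASE}/jobs/query/{job['id']}/results", headers=HEADERS).text
```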
Universal Containers has successfully migrated 50 million records into five different objects multiple times in a full copy sandbox. The Integration Engineer wants to re-run the test again a month before it goes live into Production. What is the recommended approach to re-run the test?
A. Truncate all 5 objects quickly and re-run the data migration test.
B. Refresh the full copy sandbox and re-run the data migration test.
C. Hard delete all 5 objects’ data and re-run the data migration test.
D. Truncate all 5 objects and hard delete before running the migration test.
Explanation:
The recommended approach is to refresh the full copy sandbox because it's the only method that reliably returns the sandbox to a clean, production-like state for a high-stakes, pre-go-live test.
A full copy sandbox is an exact replica of your production org, including all data, users, and metadata. This makes it the only environment suitable for a final, end-to-end performance and data migration test.
Refreshing the sandbox completely wipes all existing data and metadata and replaces it with a fresh copy of production. This ensures the testing environment is clean and ready for the new data load, replicating the conditions of the production migration as closely as possible. It eliminates any potential residual data, configuration changes, or sharing settings from previous tests that could skew the results of the final dry run.
Why other options are incorrect:
A and D (Truncate/Hard Delete):
While truncating and hard deleting data can clear out records, they do not reset the sandbox's metadata or configuration. This could lead to an inconsistent state where previous test configurations or other changes could interfere with the final migration test. More importantly, truncating or hard deleting 50 million records can be a time-consuming and resource-intensive process in itself, making it an inefficient solution.
C (Hard delete):
Similar to truncation, hard deleting the data manually is not a reliable way to reset the environment. It does not reset the configuration or metadata, and it's not a scalable or efficient way to clear millions of records.
References:
Salesforce Sandbox Guide: Salesforce documentation on sandboxes clearly states that full sandboxes are intended for "performance testing, load testing, and staging" and that refreshing a sandbox is the primary way to get a clean copy of production.
Salesforce Data Migration Best Practices: Standard data migration methodologies emphasize testing in a pristine environment that mirrors production as closely as possible, which is the core benefit of refreshing a full copy sandbox before a critical migration event.
Universal Containers is creating a new B2C service offering for consumers to ship goods across continents. This is in addition to their well-established B2B offering. Their current Salesforce org uses the standard Account object to track B2B customers. They are expecting to have over 50,000,000 consumers over the next five years across their 50 business regions. B2C customers will be individuals. Household data is not required to be stored. What is the recommended data model for consumer account data to be stored in Salesforce?
A.
Use the Account object with Person Accounts and a new B2C page layout.
B.
Use the Account object with a newly created Record Type for B2C customers.
C.
Create a new picklist value for B2C customers on the Account Type field.
D.
Use 50 umbrella Accounts for each region, with customers as associated Contacts.
Use the Account object with Person Accounts and a new B2C page layout.
Explanation:
Option A is the standard and recommended Salesforce solution for B2C data. Person Accounts are a special type of account designed for B2C use cases where each customer is an individual, not a company. They merge the Account and Contact objects into a single record to represent an individual person. Since UC now serves B2C consumers alongside its established B2B business, Person Accounts provide a native, out-of-the-box data model that is scalable for the expected volume of 50 million consumers. A new page layout specific to B2C will ensure users see the correct fields and information for these individual customers.
Option B is incorrect. Using a standard Account record with a record type would still require a separate Contact record for each person. This duplicates data and creates a clunky, inefficient data model for a B2C business that doesn't need to track multiple contacts per account. It is a very poor fit for this business case.
Option C is incorrect. Creating a picklist value on the Account Type field doesn't change the underlying data model. You would still have to create separate Contact records, which is inefficient for B2C.
Option D is incorrect. This approach creates severe account data skew. Storing roughly 1 million Contacts under a single umbrella Account per region is a poor data model that leads to sharing recalculation and record locking problems, reporting limitations, and a lack of data governance. It would also make it very difficult to find and manage individual customer data.
References:
Salesforce Help & Training: The official documentation on Person Accounts provides a thorough explanation of their purpose and how they are the correct solution for B2C use cases.
Trailhead - Build a B2C Solution: Trailhead modules on B2C solutions and data models explicitly promote the use of Person Accounts as the best practice for consumer-centric businesses.
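A small sketch of what a Person Account looks like through the API (simple-salesforce assumed; Person Accounts must already be enabled in the org, and all field values are placeholders): the record is created on Account but uses person fields such as LastName and PersonEmail instead of a company Name.

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# Find a Person Account record type (IsPersonType distinguishes it from business accounts).
person_rt = sf.query(
    "SELECT Id FROM RecordType "
    "WHERE SobjectType = 'Account' AND IsPersonType = true LIMIT 1"
)["records"][0]["Id"]

# A B2C consumer is one record: person fields live directly on the Account.
sf.Account.create({
    "RecordTypeId": person_rt,
    "FirstName": "Ana",
    "LastName": "Rivera",
    "PersonEmail": "ana.rivera@example.com",
})
```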
Two million Opportunities need to be loaded in different batches into Salesforce using the Bulk API in parallel mode. What should an Architect consider when loading the Opportunity records?
A.
Use the Name field values to sort batches.
B.
Order batches by Auto-number field.
C.
Create indexes on Opportunity object text fields.
D.
Group batches by the AccountId field.
Group batches by the AccountId field.
Explanation:
Option A. Use the Name field values to sort batches.
→ Sorting by Name does not impact how the Bulk API processes batches.
→ The issue in parallel loads isn’t “order” — it’s locking when multiple records in different batches share the same parent. Sorting by Name doesn’t ensure related records end up in the same batch.
→ This option ignores the real performance issue: parent/child row lock contention.
Option B. Order batches by Auto-number field.
→ Auto-number fields are unique identifiers, so ordering by them just creates a sequential load.
→ Sequential loading helps in serial mode, but in parallel mode the batches are independent, so ordering doesn’t reduce contention.
→ In fact, if two auto-numbered Opportunities belong to the same Account and end up in different batches, you still risk row lock conflicts.
Option C. Create indexes on Opportunity object text fields.
→ Indexes are used to improve query performance (SOQL, reports, filters).
→ They do not help with data loading contention, which is the bottleneck here.
→ This is a red herring: indexing can help after the data is loaded, but not while inserting millions of rows in parallel.
Option D. Group batches by the AccountId field. ✅
✔️ Bulk API parallel mode processes batches simultaneously. If records from multiple batches share the same AccountId, they all try to lock the same parent row → deadlocks or failures.
✔️ By grouping Opportunities by AccountId in the same batch, you ensure all child records of an Account are loaded together, avoiding parent-level contention.
✔️ This is a known best practice for large-volume parallel loads.
📖 Reference: Salesforce Bulk API Best Practices
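A minimal Python sketch of the batching idea in option D, assuming the Opportunity rows have already been prepared as dictionaries with an AccountId key: children of the same Account are kept together so parallel batches never contend for the same parent row.

```python
from collections import defaultdict

BATCH_SIZE = 10_000  # Bulk API maximum records per batch

def batches_grouped_by_account(opportunity_rows):
    """Chunk rows into batches while keeping each Account's children together.

    Assumes no single Account has more Opportunities than BATCH_SIZE.
    """
    by_account = defaultdict(list)
    for row in opportunity_rows:
        by_account[row["AccountId"]].append(row)

    batches, current = [], []
    for rows in by_account.values():
        if current and len(current) + len(rows) > BATCH_SIZE:
            batches.append(current)
            current = []
        current.extend(rows)   # all children of one Account land in the same batch
    if current:
        batches.append(current)
    return batches
```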
Northern Trail Outfitters (NTO) wants to capture a list of customers that have bought a particular product. The solution architect has recommended creating a custom object for products and a lookup relationship between its customers and its products. Products will be modeled as a custom object (NTO_Product__c) and customers are modeled as Person Accounts. Every NTO product may have millions of customers looking up a single product, resulting in lookup skew. What should a data architect suggest to mitigate issues related to lookup skew?
A.
Create multiple similar products and distribute the skew across those products.
B.
Change the lookup relationship to master-detail relationship.
C.
Create a custom object to maintain the relationship between products and customers.
D.
Select Clear the value of this field option while configuring the lookup relationship.
Create a custom object to maintain the relationship between products and customers.
Explanation:
Option A. Create multiple similar products and distribute the skew across those products.❌
⇒ Artificially splitting products creates data integrity issues: one real product ends up represented as several fake records.
⇒ Reporting, data management, and user experience suffer because you have to reconcile which “fake” product a customer belongs to.
⇒ This is not scalable, and it breaks the single source of truth principle.
Option B. Change the lookup relationship to master-detail relationship.❌
⇒ A master-detail does not remove the skew problem; it still centralizes millions of children under one parent.
⇒ Worse: deleting a product would cascade delete all customers — a destructive and incorrect business model.
⇒ Master-detail also places additional restrictions (e.g., the child inherits ownership and sharing from the parent). This makes performance and flexibility worse.
Option C. Create a custom object to maintain the relationship between products and customers. ✅
✔️ This is the standard junction object approach.
✔️ Instead of storing all customers directly against the product, you create an intermediate object like ProductCustomer__c.
✔️ Each record in this junction represents one relationship (Customer ↔ Product).
✔️ This avoids a single parent having millions of children and allows indexing, selective queries, and cleaner scalability.
Option D. Select "Clear the value of this field" option while configuring the lookup relationship. ❌
⇒ This setting only defines what happens if the parent record (Product) is deleted: should the child’s lookup be cleared?
⇒ It does nothing to mitigate lookup skew during day-to-day operations.
⇒ It addresses a deletion behavior, not the performance impact of millions of lookups pointing to one record.
📖 Reference: Salesforce Help – Avoiding Lookup Skew
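For illustration, with a junction object in place (object and field names below are hypothetical, e.g. Product_Customer__c linking Customer__c and Product__c), the customer list for a product is retrieved through the junction rather than through a skewed lookup on the product record itself:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# Customers of one product, read through the junction object.
purchases = sf.query(
    "SELECT Customer__c, Customer__r.Name, Purchased_On__c "
    "FROM Product_Customer__c "
    "WHERE Product__r.Name = 'Trail Tent' "
    "LIMIT 200"
)["records"]
```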
Universal Containers (UC) has lead assignment rules to assign leads to owners. Leads not routed by assignment rules are assigned to a dummy user. Sales reps are complaining of high load times and issues with accessing leads assigned to the dummy user. What should a data architect recommend to solve these performance issues?
A. Assign dummy user last role in role hierarchy
B. Create multiple dummy user and assign leads to them
C. Assign dummy user to highest role in role hierarchy
D. Periodically delete leads to reduce number of leads
Explanation:
Option A. Assign dummy user last role in role hierarchy. 🔴
→ Role hierarchy defines record visibility for users higher up the chain.
→ Changing the dummy user’s role in the hierarchy doesn’t fix ownership skew (millions of records under one user).
→ The performance issue comes from skew, not from role visibility.
Option B. Create multiple dummy users and assign leads to them. ✅
➡️ Ownership skew occurs when a single user owns more than ~10,000 records. Performance degrades for operations like login, sharing recalculations, and reporting.
➡️ By distributing ownership across multiple users, no single user becomes a bottleneck.
➡️ This keeps the system more responsive and avoids sharing recalculation overheads tied to one owner.
Option C. Assign dummy user to highest role in role hierarchy. 🔴
→ Putting the dummy user at the top increases visibility of their records, which can actually worsen performance (more users see more records).
→ It doesn’t reduce the skew problem.
Option D. Periodically delete leads to reduce number of leads. 🔴
→ Deleting valid data is rarely a sustainable solution.
→ Leads have business value (marketing, reporting, conversion), so deletion risks data loss.
→ Even if leads are deleted, as volume grows again, skew will return.
📖 Reference: Salesforce Help – Ownership Skew
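A small sketch of the idea behind option B, assuming a set of placeholder integration users already exists and the unrouted leads have been queried (simple-salesforce assumed; all Ids are fake): spreading ownership round-robin keeps any single owner well under the skew threshold.

```python
from itertools import cycle
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# Hypothetical placeholder users that will share ownership of unrouted leads.
placeholder_owner_ids = ["005xx0000000001AAA", "005xx0000000002AAA", "005xx0000000003AAA"]
owner_cycle = cycle(placeholder_owner_ids)

# Leads currently piled onto the single dummy user (hypothetical owner Id).
unrouted = sf.query_all(
    "SELECT Id FROM Lead WHERE OwnerId = '005xx0000000000AAA'"
)["records"]

# Round-robin the leads across the placeholder owners to avoid ownership skew.
updates = [{"Id": lead["Id"], "OwnerId": next(owner_cycle)} for lead in unrouted]
sf.bulk.Lead.update(updates)
```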
Northern Trail Outfitters (NTO) wants to start a loyalty program to reward repeat customers. The program will track every item a customer has bought and grants them points for discounts. The following conditions will exist upon implementation:
Data will be used to drive marketing and product development initiatives.
NTO estimates that the program will generate 100 million rows of data monthly.
NTO will use Salesforce's Einstein Analytics and Discovery to leverage their data and make business and marketing decisions. What should the Data Architect do to store, collect, and use the reward program data?
A.
Create a custom big object in Salesforce which will be used to capture the Reward Program data for consumption by Einstein.
B.
Have Einstein connect to the point of sales system to capture the Reward Program data.
C.
Create a big object in Einstein Analytics to capture the Loyalty Program data.
D.
Create a custom object in Salesforce that will be used to capture the Reward Program data.
Create a custom big object in Salesforce which will be used to capture the Reward Program data for consumption by Einstein.
Explanation:
This question tests the understanding of handling high-volume data and integrating it with Einstein Analytics (now Tableau CRM).
✅ Why A is Correct: The scenario states the program will generate 100 million rows of data monthly (1.2 billion rows per year). This volume is far beyond what standard or custom objects handle well; they consume licensed data storage and degrade in performance at this scale. A Big Object is specifically designed for this use case: it stores billions of records, delivers consistent query performance at scale through its defined index, and can be accessed directly by Einstein Analytics for analysis. Storing the data in a Salesforce Big Object keeps it on the platform, making it natively accessible for Einstein Analytics.
❌ Why B is Incorrect: While Einstein Analytics can connect to external systems, this approach bypasses the Salesforce platform. This makes the data unavailable for other Salesforce features (like reporting, flows, or other automations that might be needed for the loyalty program itself) and adds complexity to the integration and security model. The requirement is to "store, collect, and use" the data, implying a need for a centralized, scalable repository on the Salesforce platform.
❌ Why C is Incorrect: There is no such thing as a "big object in Einstein Analytics." Einstein Analytics is an analytics service that consumes data from various sources (like Salesforce objects, Big Objects, or external systems); it is not a primary data storage system itself. The data must be stored elsewhere first.
❌ Why D is Incorrect: A standard custom object is not suitable for this volume of data. Loading 100 million records per month would quickly consume all data storage limits and lead to severe performance issues during data insertion, updates, and queries. Custom objects are not architected for this scale.
Reference: Salesforce Help & Training documentation on "Big Objects" and "Einstein Analytics Data Integration." The key concept is matching the data volume characteristic (massive, append-only) with the correct Salesforce architectural component (Big Objects).
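For illustration, reward events stored in a custom big object (hypothetical API name Reward_Event__b with hypothetical fields) can be queried with SOQL that filters on the big object's index fields, and the same dataset can then be surfaced to Einstein Analytics.

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# SOQL against a big object may only filter on its index fields, in index order.
events = sf.query(
    "SELECT Purchase_Date__c, Points__c "
    "FROM Reward_Event__b "
    "WHERE Customer__c = '001xx0000000001AAA'"   # hypothetical index field and Id
)["records"]
```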
Universal Containers (UC) is a business that works directly with individual consumers (B2C). They are moving from a current home-grown CRM system to Salesforce. UC has about one million consumer records. What should the architect recommend for optimal use of Salesforce functionality and also to avoid data loading issues?
A.
Create a custom object Individual_Consumer__c to load all individual consumers.
B.
Load all individual consumers as Account records and avoid using the Contact object.
C.
Load one Account record and one Contact record for each individual consumer.
D.
Create one Account and load individual consumers as Contacts linked to that one Account.
Load one Account record and one Contact record for each individual consumer.
Explanation:
This question tests knowledge of the standard Salesforce Data Model for Business-to-Consumer (B2C) scenarios.
🟢 Why C is Correct: The Salesforce platform's "Person Accounts" feature is the standard and optimal way to handle B2C data. It effectively creates a single record that represents both an Account (the business side) and a Contact (the person side). When enabled, this allows you to load data where each individual consumer is a single "Person Account" record. This leverages built-in Salesforce functionality (like standard page layouts, reports, and related lists) and is explicitly designed for this business model. It avoids the data loading complexity of trying to manage two separate objects (Account and Contact) for a single entity.
🔴 Why A is Incorrect: Creating a custom object for this purpose is an anti-pattern. It would prevent UC from using any of the standard Salesforce Sales and Service functionality built around the standard Account and Contact objects (e.g., Opportunities, Cases, Campaigns, Reports). It would essentially require rebuilding core CRM functionality from scratch.
🔴 Why B is Incorrect: Loading consumers only as Account records is not a supported model. Many standard Salesforce features, especially those related to messaging and activities (like Email, Tasks, Events), require an associated Contact. This approach would cripple the functionality of the platform.
🔴 Why D is Incorrect: This is known as the "bucket Account" model. While technically possible, it is strongly discouraged. It provides a poor user experience (all contacts are under one account, making them hard to find and report on), does not leverage the intended B2C functionality of the platform, and can lead to record ownership and sharing rule complications. Salesforce provides Person Accounts specifically to avoid this outdated practice.
🔧 Reference: Salesforce Data Model documentation, specifically the sections on "Person Accounts." The Platform Data Architect should always recommend using standard, supported features before considering custom or non-standard models.
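As a quick way to see why option D breaks down, an aggregate SOQL check (simple-salesforce assumed) surfaces any Account carrying an abnormal number of Contacts, which is exactly the account data skew the bucket-account model creates:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# Accounts holding more than 10,000 Contacts are candidates for account data skew.
skewed = sf.query(
    "SELECT AccountId, COUNT(Id) cnt "
    "FROM Contact "
    "GROUP BY AccountId "
    "HAVING COUNT(Id) > 10000"
)["records"]
```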
Universal Containers (UC) uses Salesforce for tracking opportunities (Opportunity). UC uses an internal ERP system for tracking deliveries and invoicing. The ERP system supports SOAP API and OData for bi-directional integration between Salesforce and the ERP system. UC has about one million opportunities. For each opportunity, UC sends 12 invoices, one per month. UC sales reps have requirements to view current invoice status and invoice amount from the opportunity page. When creating an object to model invoices, what should the architect recommend, considering performance and data storage space?
A.
Use Streaming API to get the current status from the ERP and display on the Opportunity page.
B.
Create an external object Invoice__x with a Lookup relationship with Opportunity.
C.
Create a custom object Invoice__c with a master-detail relationship with Opportunity.
D.
Create a custom object Invoice__c with a Lookup relationship with Opportunity.
Create an external object Invoice__x with a Lookup relationship with Opportunity.
Explanation:
This question tests the ability to choose the right integration pattern and data storage type (external vs. internal) based on volume, system of record, and reporting requirements.
✅ Why B is Correct:
The key factors are data volume and system of record.
1. Volume: With 1 million opportunities and 12 invoices each, the total invoice volume is 12 million records. While this is manageable for a custom object, it consumes significant data storage, which is a licensed cost.
2. System of Record: The ERP system is the official system for invoices ("tracks deliveries and invoicing"). The requirement is only to view invoice data in Salesforce, not to create or edit it. An External Object is perfect for this. It creates a virtual representation of the ERP data within Salesforce without storing any data itself (saving storage costs). It supports OData, which is a standard protocol for exposing data. Users can view the data in a related list on the Opportunity page as if it were a native object, fulfilling the requirement perfectly.
❌ Why A is Incorrect:
Streaming API is for near-real-time notifications, not for displaying large sets of related data in a UI. It could push a notification that an invoice status changed, but it would not create a queryable list of all 12 invoices for an opportunity that a sales rep can easily scroll through. It solves the wrong part of the problem.
❌ Why C and D are Incorrect:
Both options recommend creating a custom object (a local Salesforce table that consumes data storage). While a lookup (D) is better than a master-detail (C) in this case to avoid cascading deletions, both are suboptimal. Storing a copy of 12 million records from an external system of record leads to data redundancy, requires complex ETL processes to keep the data in sync, and unnecessarily consumes expensive Salesforce data storage. External objects are the modern, cost-effective, and architecturally sound solution for this "view-only" requirement.
🔧 Reference:
Salesforce Integration Patterns and documentation on "External Objects" and "OData Connector." The principle is to use external objects when the data is owned and mastered in an external system and only needs to be read within Salesforce.
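As a final sketch, once the OData source is mapped through Salesforce Connect, the virtual Invoice__x rows are queried like any other object (the relationship field name and Opportunity Id below are hypothetical); no invoice data is copied into Salesforce storage.

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com", password="example",
                security_token="example")   # placeholder credentials

# Virtual rows served live from the ERP via the OData adapter; nothing is stored in Salesforce.
invoices = sf.query(
    "SELECT DisplayUrl, Status__c, Amount__c "
    "FROM Invoice__x "
    "WHERE Opportunity__c = '006xx0000000001AAA'"   # hypothetical lookup field and Id
)["records"]
```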