Data-Architect Practice Test Questions

257 Questions


NTO uses Salesforce to manage relationships and track sales opportunities. It has 10 million customers and 100 million opportunities. The CEO has been complaining that dashboards take 10 minutes to run and sometimes fail to load, throwing a time-out error. Which three options should help improve the dashboard performance? Choose 3 answers:


A.

Use selective queries to reduce the amount of data being returned.


B.

De-normalize the data by reducing the number of joins.


C.

Remove widgets from the dashboard to reduce the number of graphics loaded.


D.

Run the dashboard for the CEO and send it via email.


E.

Reduce the amount of data queried by archiving unused opportunity records.





A.
  

Use selective queries to reduce the amount of data being returned.



B.
  

De-normalize the data by reducing the number of joins.



E.
  

Reduce the amount of data queried by archiving unused opportunity records.



Explanation:

Option A (✔️ Query Optimization) – Selective queries use indexed fields (e.g., CreatedDate, AccountId) to avoid full table scans:
Example:
SELECT Id FROM Opportunity WHERE AccountId = '001xx00000123ABC' AND CloseDate = THIS_QUARTER
Avoid non-selective filters (e.g., Status = 'Open' if 90% of records match).

Option B (✔️ Reduce Joins) – De-normalize data to minimize complex joins across 100M+ records:
Flatten data (e.g., store AccountName directly on Opportunity to avoid Account joins).
Use formula fields or roll-up summaries (e.g., DLRS) for aggregated values.

Option E (✔️ Data Archival) – Archive old/unused opportunities (e.g., closed >5 years ago) to:
Reduce query volume (e.g., exclude archived records from dashboards).
Use Big Objects or external databases for historical data.
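A minimal sketch of that archival step, as a Batch Apex job. It assumes a hypothetical Opportunity_Archive__b Big Object whose custom fields (Opportunity_Id__c, Account_Id__c, Amount__c, Closed_On__c) are illustrative, not a standard schema:

global class OpportunityArchiveBatch implements Database.Batchable<SObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Selective filter on the indexed CloseDate field
        return Database.getQueryLocator(
            'SELECT Id, AccountId, Amount, CloseDate FROM Opportunity ' +
            'WHERE IsClosed = true AND CloseDate < LAST_N_YEARS:5');
    }

    global void execute(Database.BatchableContext bc, List<Opportunity> scope) {
        List<Opportunity_Archive__b> rows = new List<Opportunity_Archive__b>();
        for (Opportunity opp : scope) {
            rows.add(new Opportunity_Archive__b(
                Opportunity_Id__c = opp.Id,
                Account_Id__c     = opp.AccountId,
                Amount__c         = opp.Amount,
                Closed_On__c      = Datetime.newInstance(opp.CloseDate, Time.newInstance(0, 0, 0, 0))));
        }
        // Big Object writes use insertImmediate; the copied Opportunities would be
        // deleted by a separate cleanup job, since big-object and standard DML
        // should not be mixed in the same transaction.
        Database.insertImmediate(rows);
    }

    global void finish(Database.BatchableContext bc) {}
}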

Why Not the Others?

Option C (❌ UI Fix, Not Root Cause) – Fewer widgets may slightly improve load time but won’t fix query timeouts.
Option D (❌ Workaround, Not Solution) – Email solves the CEO’s frustration but ignores systemic performance issues.

All accounts and opportunities are created in Salesforce. Salesforce is integrated with three systems:
• An ERP system feeds order data into Salesforce and updates both Account and Opportunity records.
• An accounting system feeds invoice data into Salesforce and updates both Account and Opportunity records.
• A commission system feeds commission data into Salesforce and updates both Account and Opportunity records.
How should the architect determine which of these systems is the system of record?


A.

Account and opportunity data originates in Salesforce, and therefore Salesforce is the system of record.


B.

Whatever system updates the attribute or object should be the system of record for that field or object.


C.

Whatever integration data flow runs last will, by default, determine which system is the system of record.


D.

Data flows should be reviewed with the business users to determine the system of record per object or field.





D.
  

Data flows should be reviewed with the business users to determine the system of record per object or field.



Explanation:

✅ D. Review data flows with business users to determine the system of record per object or field

The system of record (SOR) is the authoritative source for a specific piece of data.

Business context is essential in deciding the SOR—it’s not just about where the data originates or which integration runs last.
Collaborating with business users helps identify:
1. Who owns the data
2. Which system has the most accurate or trusted version
3. What the operational workflows require
Often, different systems may be the SOR for different fields within the same object (e.g., billing address vs. sales territory on an Account).

Why Not the Others?

❌ A. Salesforce is the system of record because data originates there
Just because a record is created in Salesforce doesn’t mean Salesforce is the SOR for all its fields.
Fields may be updated or owned by ERP, accounting, or commission systems after creation.

❌ B. The system that updates a field is the system of record
The update source is not always authoritative—the field could be overwritten accidentally or reflect stale data.
You need intentional data governance, not just technical update logic.

❌ C. The last system to update determines the SOR
This is a technical coincidence, not a governance decision.
It can lead to data conflicts or overwrites if multiple systems update without coordination.

Get Cloud Consulting needs to integrate two different systems with customer records into the Salesforce Account object. So that no duplicate records are created in Salesforce, Master Data Management will be used. An Architect needs to determine which system is the system of record on a field level. What should the Architect do to achieve this goal?


A.

Master Data Management systems determine system of record, and the Architect doesn't have to think about what data is controlled by what system.


B.

Key stakeholders should review any fields that share the same purpose between systems to see how they will be used in Salesforce.


C.

The database schema for each external system should be reviewed, and fields with different names should always be separate fields in Salesforce.


D.

Any field that is an input field in either external system will be overwritten by the last record integrated and can never have a system of record.





B.
  

Key stakeholders should review any fields that share the same purpose between systems to see how they will be used in Salesforce.



Explanation:

Option B (✔️ Best Practice) – Stakeholder alignment ensures:
1. Field-Level Ownership: Clarifies which system "owns" specific fields (e.g., "Billing Address" from System A vs. "Shipping Address" from System B).
2. Business Rules: Matches field usage to operational needs (e.g., System A’s "Customer Tier" is used for reporting, System B’s for billing).
3. MDM Integration: MDM systems enforce these rules but require human-driven decisions first.

Why Not the Others?

Option A (❌ Hands-Off Risk) – MDM systems execute rules but can’t define them without stakeholder input.
Option C (❌ Technical Overfocus) – Schema reviews are useful, but field names ≠ ownership. Business context matters more.
Option D (❌ Chaotic) – Letting the "last sync win" guarantees conflicts and data corruption.

Universal Containers is integrating a new Opportunity engagement system with Salesforce. According to their Master Data Management strategy, Salesforce is the system of record for Account, Contact, and Opportunity data. However, there does seem to be valuable Opportunity data in the new system that potentially conflicts with what is stored in Salesforce. What is the recommended course of action to appropriately integrate this new system?


A.

The MDM strategy defines Salesforce as the system of record, so Salesforce Opportunity values prevail in all conflicts.


B.

A policy should be adopted so that the system whose record was most recently updated should prevail in conflicts.


C.

The Opportunity engagement system should become the system of record for Opportunity records.


D.

Stakeholders should be brought together to discuss the appropriate data strategy moving forward.





D.
  

Stakeholders should be brought together to discuss the appropriate data strategy moving forward.



Explanation:

Option D (✔️ Best Practice) – Stakeholder alignment is critical because:
1. MDM Strategy May Need Refinement: If the new system has valuable data, the "Salesforce as system of record" rule might require exceptions (e.g., certain Opportunity fields).
2. Conflict Resolution Rules: Business teams must define which fields prioritize Salesforce vs. the new system (e.g., "Salesforce owns Stage, but the new system owns Contract Terms").
3. Governance: Ensures compliance and avoids ad-hoc fixes.

Why Not the Others?

Option A (❌ Rigid) – Blindly favoring Salesforce ignores potentially critical data in the new system.
Option B (❌ Arbitrary) – "Last update wins" risks losing authoritative data (e.g., Salesforce may have older but more accurate values).
Option C (❌ Violates MDM Strategy) – Overriding the MDM strategy without review creates inconsistency.

Universal Containers is planning out their archiving and purging plans going forward for their custom objects Topic__c and Comment__c. Several options are being considered, including analytics snapshots, offsite storage, scheduled purges, etc. Which three questions should be considered when designing an appropriate archiving strategy?


A.

How many fields are defined on the custom objects that need to be archived?


B.

Which profiles and users currently have access to these custom object records?


C.

If reporting is necessary, can the information be aggregated into fewer, summary records?


D.

Will the data being archived need to be reported on or accessed in any way in the future?


E.

Are there any regulatory restrictions that will influence the archiving and purging plans?





C.
  

If reporting is necessary, can the information be aggregated into fewer, summary records?



D.
  

Will the data being archived need to be reported on or accessed in any way in the future?



E.
  

Are there any regulatory restrictions that will influence the archiving and purging plans?



Explanation:

✅ C. Can the data be summarized?
If the data is only needed for reporting purposes, it may not be necessary to store the entire dataset.
Instead, summary records or analytics snapshots could be retained for long-term trend reporting, reducing storage while retaining business value.

✅ D. Will the archived data need to be accessed or reported on?
This determines how and where the archived data should be stored:
If frequent access is required: consider archiving within Salesforce or via Salesforce Connect.
If rarely accessed: consider off-platform archiving (e.g., external database or data lake).

✅ E. Are there regulatory restrictions?
Compliance requirements (e.g., GDPR, HIPAA, SOX) may dictate:
How long data must be retained
Where it must be stored
When it must be deleted
These rules are essential to shape the retention and deletion policies in the strategy.

Why Not the Others?

❌ A. How many fields are defined on the custom objects?
While this may affect storage size, it is not a critical factor in determining the overall archiving strategy.
Archiving strategy is more concerned with data volume, access patterns, and regulatory rules.

❌ B. Which profiles and users have access?
User access might influence security controls for archived data but is not central to defining an archiving and purging plan.
It becomes relevant after the archive location and method are chosen.

Universal Containers has 30 million case records. The Case object has 80 fields. Agents are reporting slow-running reports in the Salesforce org. Which solution should a data architect recommend to improve reporting performance?


A.

Create a custom object to store aggregate data and run reports.


B.

Contact Salesforce support to enable skinny table for cases.


C.

Move data off of the platform and run reporting outside Salesforce, and give access to reports.


D.

Build reports using custom Lightning components.





A.
  

Create a custom object to store aggregate data and run reports.



Explanation:

✅ A. Create a custom object to store aggregate data

With 30 million Case records and 80 fields, querying and reporting on the full dataset in real time can be slow and inefficient.
Creating a custom reporting or summary object that stores pre-aggregated metrics (e.g., cases per product, cases by status, weekly case volumes) allows:
1. Faster report execution
2. Reduced load on the Case object
3. Better user experience for agents needing quick insights
These summary objects can be updated on a scheduled basis (e.g., nightly via batch jobs or dataflows).
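A minimal sketch of such a nightly rollup, assuming a hypothetical Case_Summary__c object (its Status__c, Case_Count__c, and Snapshot_Date__c fields are illustrative):

global class CaseSummaryJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        List<Case_Summary__c> summaries = new List<Case_Summary__c>();
        // Aggregate in the database so only summary rows come back
        for (AggregateResult ar : [
                SELECT Status, COUNT(Id) total
                FROM Case
                WHERE CreatedDate = LAST_N_DAYS:7
                GROUP BY Status]) {
            summaries.add(new Case_Summary__c(
                Status__c        = (String) ar.get('Status'),
                Case_Count__c    = (Integer) ar.get('total'),
                Snapshot_Date__c = Date.today()));
        }
        insert summaries;
    }
}
// Scheduled nightly, e.g.: System.schedule('Case Summary', '0 0 2 * * ?', new CaseSummaryJob());

A rollup across all 30 million rows would be chunked through Batch Apex instead, but the summary object it writes stays the same.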

Why Not the Others?

❌ B. Enable Skinny Table
Skinny tables help improve query performance, but:
They are managed by Salesforce Support
They are limited in flexibility (e.g., no formula, lookup, or long text fields)
They don't solve aggregation/reporting needs effectively
They're more suited to record retrieval, not summary-level reports.

❌ C. Move data off-platform
Off-platform reporting may work but comes with significant complexity:
ETL processes
Sync challenges
Licensing and access control issues
This is a heavier architectural solution not ideal for frontline users like agents who need native access.

❌ D. Custom Lightning components for reports
Custom components may enhance UI presentation, but they do not solve the root performance issue with reporting on massive data volumes.
They still depend on underlying SOQL and report engine performance.

Universal Containers’ system administrators have been complaining that they are not able to make changes to user records, including moving users to new territories, without getting “unable to lock row” errors. This is causing the system admins to spend hours updating user records every day. What should the data architect do to prevent the error?


A.

Reduce number of users updated concurrently.


B.

Enable granular locking.


C.

Analyze Splunk query to spot offending records.


D.

Increase CPU for the Salesforce org.





B.
  

Enable granular locking.



Explanation:

Correct Answer (B): Granular locking lets Salesforce lock only the portions of the group membership structures (role and territory hierarchies) that are actually being modified, rather than the entire table. Concurrent user updates then rarely contend for the same lock, which prevents the "unable to lock row" errors without forcing admins to serialize their work.

Why Others Fail:

A: Reducing concurrent updates delays processes but doesn’t resolve the root locking issue.
C: Splunk identifies errors but doesn’t prevent them during record updates.
D: CPU boosts don’t address row-locking conflicts in Salesforce transactions.

Northern Trail Outfitters (NTO) wants to implement backup and restore for Salesforce data. Currently, it has a data backup process that runs weekly and backs up all Salesforce data to an enterprise data warehouse (EDW). NTO wants to move to daily backups and provide restore capability to avoid any data loss in case of an outage. What should a data architect recommend for a daily backup and restore solution?


A.

Use AppExchange package for backup and restore.


B.

Use ETL for backup and restore from EDW.


C.

Use Bulk API to extract data on a daily basis to the EDW and REST API for restore.


D.

Change weekly backup process to daily backup, and implement a custom restore solution.





A.
  

Use AppExchange package for backup and restore.



Explanation:

Use an AppExchange package like OwnBackup or Gearset for automated daily backups and restore capabilities, ensuring compliance, minimal manual effort, and point-in-time recovery directly within Salesforce.

Why Others Fail:

B: ETL backups lack native restore features and require complex manual processes for recovery.
C: Bulk API extracts data but doesn’t provide streamlined, user-friendly restore functionality.
D: Custom solutions are costly, time-consuming, and prone to errors compared to pre-built tools.

How can an architect find information about who is creating, changing, or deleting certain fields within the past two months?


A. Remove "customize application" permissions from everyone else.


B. Export the metadata and search it for the fields in question.


C. Create a field history report for the fields in question.


D. Export the setup audit trail and find the fields in question.





D.
  Export the setup audit trail and find the fields in question.

Explanation:

Export the Setup Audit Trail to track all metadata changes (create/edit/delete) by user and timestamp, filtering for specific fields over the past two months.
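Besides the CSV download in Setup, the audit trail is exposed to SOQL through the SetupAuditTrail object (the API retains roughly the last 180 days, which comfortably covers two months). A minimal query sketch:

SELECT Action, Section, Display, CreatedBy.Name, CreatedDate
FROM SetupAuditTrail
WHERE CreatedDate = LAST_N_DAYS:60
ORDER BY CreatedDate DESC

Scanning the returned Display values for the field names in question isolates the relevant create/change/delete entries.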

Why Others Fail:

❌ A: Removing permissions disrupts workflows but doesn’t provide historical change data.
❌ B: Metadata exports show current state, not who made changes or when.
❌ C: Field History tracks changes to record data, not schema changes.

Every year, Ursa Major Solar has more than 1 million orders. Each order contains an average of 10 line items. The Chief Executive Officer (CEO) needs the Sales Reps to see how much money each customer generates year-over-year. However, data storage is running low in Salesforce. Which approach for data archiving is appropriate for this scenario?


A. 1. Annually export and delete order line items.
2. Store them in a zip file in case the data is needed later.


B. 1. Annually aggregate order amount data to store in a custom object.
2. Delete those orders and order line items.


C. 1. Annually export and delete orders and order line items.
2. Store them in a zip file in case the data is needed later.


D. 1. Annually delete orders and order line items.
2. Ensure the customer has order information in another system.





B.
  1. Annually aggregate order amount data to store in a custom object.
2. Delete those orders and order line items.

Explanation:

To manage storage and still meet the CEO’s reporting needs, aggregate order revenue per customer annually into a custom object. This retains key business insights like year-over-year revenue without storing every detailed order or line item. Then, safely delete the detailed records to free up storage in Salesforce.
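As a rough sketch, the per-customer, per-year rollup could come from a single aggregate query run before the detail rows are deleted (the 'Activated' status value is an assumption about the org's Order statuses):

SELECT AccountId, CALENDAR_YEAR(EffectiveDate) orderYear, SUM(TotalAmount) revenue
FROM Order
WHERE Status = 'Activated' AND EffectiveDate < THIS_YEAR
GROUP BY AccountId, CALENDAR_YEAR(EffectiveDate)

Each returned row becomes one record in the summary custom object (say, a hypothetical Customer_Revenue__c with Account__c, Year__c, and Total_Amount__c fields), preserving the year-over-year revenue view the CEO needs.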

Universal Containers (UC) is launching an RFP to acquire a new accounting product available on AppExchange. UC is expecting to issue 5 million invoices per year, with each invoice containing an average of 10 line items. What should UC's Data Architect recommend to ensure scalability?


A.

Ensure invoice line items simply reference existing Opportunity line items.


B.

Ensure the accounting product vendor includes Wave Analytics in their offering.


C.

Ensure the accounting product vendor provides a sound data archiving strategy.


D.

Ensure the accounting product runs 100% natively on the Salesforce platform.





C.
  

Ensure the accounting product vendor provides a sound data archiving strategy.



Explanation:

The Data Architect should prioritize scalability and data volume management by recommending that the accounting product vendor provides a sound data archiving strategy (Option C). With 5 million invoices annually (50 million line items), UC risks hitting Salesforce storage limits and performance degradation without a plan to archive or offload historical data. A robust archiving strategy—such as automated purges, Big Objects, or external storage integration—ensures the system remains responsive while retaining compliance access to older records. This approach addresses the core challenge of volume without compromising functionality.

Why Others Fail:

Referencing Opportunity Line Items (Option A):
While this reduces redundancy, it doesn’t solve the sheer volume of invoice line items. Performance would still suffer as the database grows exponentially.

Wave Analytics (Option B):
Analytics tools are useful for reporting but irrelevant to transactional scalability. They don’t mitigate storage or processing loads from millions of records.

100% Native Platform (Option D):
Native tools simplify integration but lack built-in solutions for massive data volumes. Archiving is still necessary to avoid platform limits.

Universal Containers (UC) is building a Service Cloud call center application and has a multi-system support solution. UC would like to ensure that all systems have access to the same customer information. What solution should a data architect recommend?


A.

Make Salesforce the system of record for all data.


B.

Implement a master data management (MDM) strategy for customer data.


C.

Load customer data in all systems.


D.

Let each system be an owner of data it generates.





B.
  

Implement a master data management (MDM) strategy for customer data.



Explanation:

B. Implement a master data management (MDM) strategy for customer data.
Why? MDM creates a single, trusted source of customer data shared across all systems, eliminating duplicates and inconsistencies.
Best for: Multi-system environments where data must stay synchronized (like UC’s call center).

Why Others Fail:

A. Make Salesforce the system of record
Forces all systems to depend on Salesforce, which may not suit systems needing autonomy (e.g., legacy tools).

C. Load customer data in all systems
Causes data redundancy, sync delays, and inconsistencies (e.g., updates in one system won’t reflect elsewhere).

D. Let each system own its data
Leads to fragmented, conflicting data (e.g., different contact info in different systems).

