NTO uses Salesforce to manage relationships and track sales opportunities. It has 10 million customers and 100 million opportunities. The CEO has been complaining that the dashboard takes 10 minutes to run and sometimes fails to load, throwing a timeout error. Which three options should help improve the dashboard performance? Choose 3 answers:
A.
Use selective queries to reduce the amount of data being returned.
B.
De-normalize the data by reducing the number of joins.
C.
Remove widgets from the dashboard to reduce the number of graphics loaded.
D.
Run the dashboard for CEO and send it via email.
E.
Reduce the amount of data queried by archiving unused opportunity records.
Use selective queries to reduce the amount of data being returned.
De-normalize the data by reducing the number of joins.
Reduce the amount of data queried by archiving unused opportunity records.
Explanation:
Option A (✔️ Query Optimization) – Selective queries use indexed fields (e.g., CreatedDate, AccountId) to avoid full table scans:
Example:
SELECT Id FROM Opportunity WHERE AccountId = '001xx00000123ABC' AND CloseDate = THIS_QUARTER
Avoid non-selective filters (e.g., IsClosed = false when 90% of records match).
Option B (✔️ Reduce Joins) – De-normalize data to minimize complex joins across 100M+ records:
Flatten data (e.g., store AccountName directly on Opportunity to avoid Account joins).
Use formula fields or roll-up summaries (e.g., DLRS) for aggregated values.
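As a hedged illustration of the flattening approach above (not part of the original answer), here is a minimal before-save trigger sketch; Account_Name__c is a hypothetical custom text field on Opportunity that lets dashboard filters and groupings avoid the Account join:
trigger OpportunityDenormalize on Opportunity (before insert, before update) {
    // Collect the parent Account Ids for the records in this transaction
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }
    // Query the parent names once, then copy them onto the child records
    Map<Id, Account> parents = new Map<Id, Account>(
        [SELECT Id, Name FROM Account WHERE Id IN :accountIds]);
    for (Opportunity opp : Trigger.new) {
        if (parents.containsKey(opp.AccountId)) {
            opp.Account_Name__c = parents.get(opp.AccountId).Name; // hypothetical field
        }
    }
}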
Option E (✔️ Data Archival) – Archive old/unused opportunities (e.g., closed >5 years ago) to:
Reduce query volume (e.g., exclude archived records from dashboards).
Use Big Objects or external databases for historical data.
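A hedged sketch of the archival step, assuming a hypothetical custom big object Opportunity_Archive__b with Opportunity_Id__c, Account_Id__c, Amount__c, and Close_Date__c fields (none of these are named in the question); Batch Apex copies old closed opportunities into the big object so a follow-up job can delete the originals and reclaim storage:
global class ArchiveOldOpportunities implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Only opportunities closed more than five years ago are archival candidates
        return Database.getQueryLocator(
            'SELECT Id, AccountId, Amount, CloseDate FROM Opportunity ' +
            'WHERE IsClosed = true AND CloseDate < LAST_N_YEARS:5');
    }
    global void execute(Database.BatchableContext bc, List<Opportunity> scope) {
        List<Opportunity_Archive__b> copies = new List<Opportunity_Archive__b>();
        for (Opportunity opp : scope) {
            copies.add(new Opportunity_Archive__b(
                Opportunity_Id__c = opp.Id,
                Account_Id__c = opp.AccountId,
                Amount__c = opp.Amount,
                Close_Date__c = opp.CloseDate));
        }
        // Big object writes are immediate and are not rolled back with the transaction
        Database.insertImmediate(copies);
        // Delete the source Opportunity records in a separate follow-up job
        // once the archived copies have been verified
    }
    global void finish(Database.BatchableContext bc) {}
}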
Why Not the Others?
Option C (❌ UI Fix, Not Root Cause) – Fewer widgets may slightly improve load time but won’t fix query timeouts.
Option D (❌ Workaround, Not Solution) – Email solves the CEO’s frustration but ignores systemic performance issues.
All accounts and opportunities are created in Salesforce. Salesforce is integrated with three systems:
• An ERP system feeds order data into Salesforce and updates both Account and Opportunity records.
• An accounting system feeds invoice data into Salesforce and updates both Account and Opportunity records.
• A commission system feeds commission data into Salesforce and updates both Account and Opportunity records.
How should the architect determine which of these systems is the system of record?
A.
Account and opportunity data originates in Salesforce, and therefore Salesforce is the system of record.
B.
Whatever system updates the attribute or object should be the system of record for that field or object.
C.
Whatever integration data flow runs last will, by default, determine which system is the system of record.
D.
Data flows should be reviewed with the business users to determine the system of record per object or field.
Data flows should be reviewed with the business users to determine the system of record per object or field.
Explanation:
✅ D. Review data flows with business users to determine the system of record per object or field
The system of record (SOR) is the authoritative source for a specific piece of data.
Business context is essential in deciding the SOR—it’s not just about where the data originates or which integration runs last.
Collaborating with business users helps identify:
1. Who owns the data
2. Which system has the most accurate or trusted version
3. What the operational workflows require
Often, different systems may be the SOR for different fields within the same object (e.g., billing address vs. sales territory on an Account).
Why Not the Others?
❌ A. Salesforce is the system of record because data originates there
Just because a record is created in Salesforce doesn’t mean Salesforce is the SOR for all its fields.
Fields may be updated or owned by ERP, accounting, or commission systems after creation.
❌ B. The system that updates a field is the system of record
The update source is not always authoritative—the field could be overwritten accidentally or reflect stale data.
You need intentional data governance, not just technical update logic.
❌ C. The last system to update determines the SOR
This is a technical coincidence, not a governance decision.
It can lead to data conflicts or overwrites if multiple systems update without coordination.
Get Cloud Consulting needs to integrate two different systems with customer records into the Salesforce Account object. So that no duplicate records are created in Salesforce, Master Data Management will be used. An Architect needs to determine which system is the system of record on a field level. What should the Architect do to achieve this goal?
A.
Master Data Management systems determine system of record, and the Architect doesn't have to think about what data is controlled by what system.
B.
Key stakeholders should review any fields that share the same purpose between systems to see how they will be used in Salesforce.
C.
The database schema for each external system should be reviewed, and fields with different names should always be separate fields in Salesforce.
D.
Any field that is an input field in either external system will be overwritten by the last record integrated and can never have a system of record.
Key stakeholders should review any fields that share the same purpose between systems to see how they will be used in Salesforce.
Explanation:
Option B (✔️ Best Practice) – Stakeholder alignment ensures:
1. Field-Level Ownership: Clarifies which system "owns" specific fields (e.g., "Billing Address" from System A vs. "Shipping Address" from System B).
2. Business Rules: Matches field usage to operational needs (e.g., System A’s "Customer Tier" is used for reporting, System B’s for billing).
3. MDM Integration: MDM systems enforce these rules but require human-driven decisions first.
Why Not the Others?
Option A (❌ Hands-Off Risk) – MDM systems execute rules but can’t define them without stakeholder input.
Option C (❌ Technical Overfocus) – Schema reviews are useful, but field names ≠ ownership. Business context matters more.
Option D (❌ Chaotic) – Letting the "last sync win" guarantees conflicts and data corruption.
Universal Containers is integrating a new Opportunity engagement system with Salesforce. According to their Master Data Management strategy, Salesforce is the system of record for Account, Contact, and Opportunity data. However, there does seem to be valuable Opportunity data in the new system that potentially conflicts with what is stored in Salesforce. What is the recommended course of action to appropriately integrate this new system?
A.
The MDM strategy defines Salesforce as the system of record, so Salesforce Opportunity values prevail in all conflicts.
B.
A policy should be adopted so that the system whose record was most recently updated should prevail in conflicts.
C.
The Opportunity engagement system should become the system of record for Opportunity records.
D.
Stakeholders should be brought together to discuss the appropriate data strategy moving forward.
Stakeholders should be brought together to discuss the appropriate data strategy moving forward.
Explanation:
Option D (✔️ Best Practice) – Stakeholder alignment is critical because:
1. MDM Strategy May Need Refinement: If the new system has valuable data, the "Salesforce as system of record" rule might require exceptions (e.g., certain Opportunity fields).
2. Conflict Resolution Rules: Business teams must define which fields prioritize Salesforce vs. the new system (e.g., "Salesforce owns Stage, but the new system owns Contract Terms").
3. Governance: Ensures compliance and avoids ad-hoc fixes.
Why Not the Others?
Option A (❌ Rigid) – Blindly favoring Salesforce ignores potentially critical data in the new system.
Option B (❌ Arbitrary) – "Last update wins" risks losing authoritative data (e.g., Salesforce may have older but more accurate values).
Option C (❌ Violates MDM Strategy) – Overriding the MDM strategy without review creates inconsistency.
Universal Containers is planning out their archiving and purging plans going forward for their custom objects Topic__c and Comment__c. Several options are being considered, including analytics snapshots, offsite storage, scheduled purges, etc. Which three questions should be considered when designing an appropriate archiving strategy?
A.
How many fields are defined on the custom objects that need to be archived?
B.
Which profiles and users currently have access to these custom object records?
C.
If reporting is necessary, can the information be aggregated into fewer, summary records?
D.
Will the data being archived need to be reported on or accessed in any way in the future?
E.
Are there any regulatory restrictions that will influence the archiving and purging plans?
If reporting is necessary, can the information be aggregated into fewer, summary records?
Will the data being archived need to be reported on or accessed in any way in the future?
Are there any regulatory restrictions that will influence the archiving and purging plans?
Explanation:
✅ C. Can the data be summarized?
If the data is only needed for reporting purposes, it may not be necessary to store the entire dataset.
Instead, summary records or analytics snapshots could be retained for long-term trend reporting, reducing storage while retaining business value.
✅ D. Will the archived data need to be accessed or reported on?
This determines how and where the archived data should be stored:
If frequent access is required: consider archiving within Salesforce or via Salesforce Connect.
If rarely accessed: consider off-platform archiving (e.g., external database or data lake).
✅ E. Are there regulatory restrictions?
Compliance requirements (e.g., GDPR, HIPAA, SOX) may dictate:
How long data must be retained
Where it must be stored
When it must be deleted
These rules are essential to shape the retention and deletion policies in the strategy.
Why Not the Others?
❌ A. How many fields are defined on the custom objects?
While this may affect storage size, it is not a critical factor in determining the overall archiving strategy.
Archiving strategy is more concerned with data volume, access patterns, and regulatory rules.
❌ B. Which profiles and users have access?
User access might influence security controls for archived data but is not central to defining an archiving and purging plan.
It becomes relevant after the archive location and method are chosen.
Universal Containers has 30 million case records. The Case object has 80 fields. Agents are reporting performance issues when running reports in the Salesforce org. Which solution should a data architect recommend to improve reporting performance?
A.
Create a custom object to store aggregate data and run reports.
B.
Contact Salesforce support to enable skinny table for cases.
C.
Move data off of the platform and run reporting outside Salesforce, and give access to reports.
D.
Build reports using custom Lightning components.
Create a custom object to store aggregate data and run reports.
Explanation:
✅ A. Create a custom object to store aggregate data
With 30 million Case records and 80 fields, querying and reporting on the full dataset in real time can be slow and inefficient.
Creating a custom reporting or summary object that stores pre-aggregated metrics (e.g., cases per product, cases by status, weekly case volumes) allows:
1. Faster report execution
2. Reduced load on the Case object
3. Better user experience for agents needing quick insights
These summary objects can be updated on a scheduled basis (e.g., nightly via batch jobs or dataflows).
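A minimal sketch of such a nightly roll-up, assuming a hypothetical Case_Summary__c custom object with Status__c and Record_Count__c fields (not named in the question); a stateful batch walks the 30 million cases in chunks and writes a handful of summary rows for agents to report on:
global class CaseSummaryBatch implements Database.Batchable<SObject>, Database.Stateful {
    private Map<String, Integer> countsByStatus = new Map<String, Integer>();

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Status FROM Case');
    }
    global void execute(Database.BatchableContext bc, List<Case> scope) {
        // Accumulate counts per status across all batch chunks
        for (Case c : scope) {
            Integer current = countsByStatus.containsKey(c.Status) ? countsByStatus.get(c.Status) : 0;
            countsByStatus.put(c.Status, current + 1);
        }
    }
    global void finish(Database.BatchableContext bc) {
        List<Case_Summary__c> summaries = new List<Case_Summary__c>();
        for (String status : countsByStatus.keySet()) {
            summaries.add(new Case_Summary__c(
                Status__c = status,
                Record_Count__c = countsByStatus.get(status)));
        }
        insert summaries; // agents report on these few rows instead of 30M cases
    }
}
In practice the job would be started nightly from a Schedulable class via Database.executeBatch, and the previous night's summary rows would be replaced or upserted.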
Why Not the Others?
❌ B. Enable Skinny Table
Skinny tables help improve query performance, but:
They are managed by Salesforce Support
They are limited in flexibility (e.g., no formula, lookup, or long text fields)
They don't solve aggregation/reporting needs effectively
They're more suited to record retrieval, not summary-level reports.
❌ C. Move data off-platform
Off-platform reporting may work but comes with significant complexity:
ETL processes
Sync challenges
Licensing and access control issues
This is a heavier architectural solution not ideal for frontline users like agents who need native access.
❌ D. Custom Lightning components for reports
Custom components may enhance UI presentation, but they do not solve the root performance issue with reporting on massive data volumes.
They still depend on underlying SOQL and report engine performance.
Universal Containers' system administrators have been complaining that they are not able to make changes to user records, including moving users to new territories, without getting "unable to lock row" errors. This is causing the system admins to spend hours updating user records every day. What should the data architect do to prevent the error?
A. Reduce number of users updated concurrently.
B. Enable granular locking.
C. Analyze Splunk query to spot offending records.
D. Increase CPU for the Salesforce org.
Explanation:
“Unable to lock row” happens when multiple processes or users attempt to update the same record (or related records) at the same time. In Territory Management and Account Sharing, this is common because Salesforce by default locks entire account hierarchies or user groups during updates.
Granular locking changes the locking behavior: instead of locking entire hierarchies, Salesforce locks smaller record groups. This reduces contention and makes operations like moving users between territories or updating large sets of accounts less likely to fail. It’s specifically designed to address “row lock” issues in environments with large data volumes and complex sharing.
Why not the others?
A. Reduce number of users updated concurrently: Might lower the chance of conflicts, but it doesn’t eliminate the core issue. It also slows down business processes.
C. Analyze Splunk query: Monitoring tools may show errors but won’t fix the underlying Salesforce locking mechanism.
D. Increase CPU for the Salesforce org: Salesforce is a multi-tenant platform. Customers cannot allocate CPU; scaling is managed by Salesforce itself.
Reference:
Salesforce Help: Granular Locking Overview
Northern Trail Outfitters (NTO) wants to implement backup and restore for Salesforce data. Currently, it has a data backup process that runs weekly and backs up all Salesforce data to an enterprise data warehouse (EDW). NTO wants to move to daily backups and provide restore capability to avoid any data loss in case of an outage. What should a data architect recommend for a daily backup and restore solution?
A. Use AppExchange package for backup and restore.
B. Use ETL for backup and restore from EDW.
C. Use Bulk API to extract data on daily basis to EDW and REST API for restore.
D. Change weekly backup process to daily backup, and implement a custom restore solution.
Explanation:
While Salesforce offers multiple options for exporting data, restore is the tricky part. Simply extracting data into an EDW or flat files isn’t enough, because restoring requires handling parent-child relationships, metadata dependencies, and maintaining referential integrity.
AppExchange backup and restore solutions (like OwnBackup, Spanning, or Odaseva) are purpose-built for Salesforce. They provide:
→ Automated daily backups
→ Point-in-time recovery
→ Metadata and relationship-aware restores
→ Sandbox seeding and compliance features
This makes them the most reliable and least risky approach for backup and restore.
Why not the others?
B. ETL to/from EDW: Backups are possible, but restore is complex and error-prone. You’d need custom scripts to rebuild relationships.
C. Bulk API + REST API: Reinventing the wheel. Too much maintenance and still risky for restore accuracy.
D. Daily backup + custom restore: Same issue — costly, error-prone, and lacks the resilience of proven tools.
Reference:
Salesforce Help: Backup and Restore Solutions
AppExchange Backup & Restore Solutions
How can an architect find information about who is creating, changing, or deleting certain fields within the past two months?
A. Remove "customize application" permissions from everyone else.
B. Export the metadata and search it for the fields in question.
C. Create a field history report for the fields in question.
D. Export the setup audit trail and find the fields in question.
Explanation:
The Setup Audit Trail is Salesforce’s way of tracking administrative changes — such as creating, modifying, or deleting fields. It captures who made the change, what was changed, and when the change happened. The audit trail stores history for the last 6 months, which covers the request to look back 2 months. You can view the trail in Salesforce or export it as CSV for more detailed analysis.
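As a hedged side note, the same trail is exposed to SOQL through the SetupAuditTrail object, so the two-month window can also be pulled programmatically (standard fields such as Action, Section, Display, and CreatedDate are assumed here; the Display column carries the human-readable description that names the field):
// Anonymous Apex sketch: list setup changes from the past 60 days
List<SetupAuditTrail> changes = [
    SELECT Action, Section, Display, CreatedDate, CreatedBy.Name
    FROM SetupAuditTrail
    WHERE CreatedDate = LAST_N_DAYS:60
    ORDER BY CreatedDate DESC
];
for (SetupAuditTrail change : changes) {
    System.debug(String.valueOf(change.CreatedDate) + ' ' + change.CreatedBy.Name + ': ' + change.Display);
}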
🔴 Why not the others?
A. Remove "customize application": Prevents future changes but doesn’t provide history of past changes.
B. Export metadata: Shows the current state of fields, not the change history.
C. Field history report: Tracks data changes inside records, not configuration or metadata changes like field creation/deletion.
Reference:
Salesforce Help: Monitor Setup Changes with the Audit Trail
Every year, Ursa Major Solar has more than 1 million orders. Each order contains an average of 10 line items. The Chief Executive Officer (CEO) needs the Sales Reps to see how much money each customer generates year-over-year. However, data storage is running low in Salesforce. Which approach for data archiving is appropriate for this scenario?
A. 1. Annually export and delete order line items.
2. Store them in a zip file in case the data is needed later.
B. 1. Annually aggregate order amount data to store in a custom object.
2. Delete those orders and order line items.
C. 1. Annually export and delete orders and order line items.
2. Store them in a zip file in case the data is needed later.
D. 1. Annually delete orders and order line items.
2. Ensure the customer has order information in another system.
Explanation:
🟢 Option B is the correct and most efficient approach. This solution directly addresses both the CEO's requirement and the data storage issue.
✔️ "Annually aggregate order amount data to store in a custom object" directly meets the CEO's need for year-over-year customer revenue tracking. This new, smaller record holds the summarized data (total amount for all orders in a given year for a specific customer), which is all that's needed for the report.
✔️ "Delete those orders and order line items" is the key archiving step that frees up significant data storage. The original 1 million orders and 10 million line items per year are no longer needed for daily operations, and their summarized data is stored in the new custom object, which takes up a fraction of the storage space. This is a classic summary archiving strategy.
🔴 Options A and C are incorrect. While exporting and deleting data addresses the storage problem, simply storing the raw data in a zip file outside Salesforce doesn't meet the CEO's requirement for sales reps to "see how much money each customer generates year-over-year" within the Salesforce platform. The data would not be accessible for reporting.
🔴 Option D is incorrect. Deleting the records without retaining a summary in Salesforce or ensuring the sales reps have access to the information they need would fail to meet the business requirement. This approach focuses solely on the storage problem at the expense of functionality.
Universal Containers (UC) is launching an RFP to acquire a new accounting product available on AppExchange. UC is expecting to issue 5 million invoices per year, with each invoice containing an average of 10 line items. What should UC's Data Architect recommend to ensure scalability?
A. Ensure invoice line items simply reference existing Opportunity line items.
B. Ensure the accounting product vendor includes Wave Analytics in their offering.
C. Ensure the accounting product vendor provides a sound data archiving strategy.
D. Ensure the accounting product runs 100% natively on the Salesforce platform.
Explanation:
Why is C correct?
✅ Option C is the most critical recommendation for ensuring long-term scalability. With an expected 5 million invoices and 50 million invoice line items per year, the data volume will quickly exceed Salesforce's storage limits and degrade performance. A data architect's primary responsibility is to manage this volume. A sound data archiving strategy is a fundamental part of the product's architecture that a vendor must have to handle this volume.
Why are the other options incorrect?
❌ Option A is incorrect. Referencing Opportunity Line Items is not relevant to the new accounting product's scalability. While it might be part of the product's data model, it doesn't solve the problem of managing the massive volume of new invoice and invoice line item records.
❌ Option B is incorrect. While Wave Analytics (now CRM Analytics) is a great tool for analyzing large datasets, it doesn't solve the underlying problem of data storage and platform performance. It is a reporting and analytics tool, not a data management or archiving solution.
❌ Option D is incorrect. Running 100% natively on the Salesforce platform is a common requirement for AppExchange products, but it doesn't inherently guarantee scalability for high-volume data. A native app can still cause performance and storage issues if it doesn't have a built-in archiving strategy to manage its growth. The data volume described is the key concern.
Universal Containers (UC) is building a Service Cloud call center application and has a multi-system support solution. UC would like to ensure that all systems have access to the same customer information. What solution should a data architect recommend?
A.
Make Salesforce the system of record for all data.
B.
Implement a master data management (MDM) strategy for customer data.
C.
Load customer data in all systems.
D.
Let each system be an owner of data it generates.
Implement a master data management (MDM) strategy for customer data.
Explanation:
✅ Option B is the correct solution. The problem describes a common scenario in large enterprises: multiple systems (often called a multi-system landscape) needing consistent and accurate customer information. Master Data Management (MDM) is the discipline and set of tools used to create a single, authoritative source of master data (in this case, customer data). An MDM solution would ensure that all connected systems are accessing a "golden record" of the customer, preventing data inconsistencies and ensuring everyone has the "same customer information."
❌ Option A is often part of an MDM strategy, but it's not the complete solution. Simply making Salesforce the system of record (SoR) doesn't solve the problem of propagating that data consistently to other systems or resolving data conflicts if other systems also create customer records. An MDM strategy would define the rules for this synchronization and data governance.
❌ Option C is incorrect. Loading customer data into all systems without a central management strategy would lead to massive data inconsistencies, as each system would likely have its own version of the customer data, leading to a fragmented and unreliable view.
❌ Option D is incorrect. Letting each system be the "owner of data it generates" is a recipe for data silos and inconsistencies. This is the very problem that an MDM strategy is designed to solve. It would lead to a fragmented customer view, where different departments or systems have conflicting information about the same customer.