Universal Containers (UC) has built a custom application on Salesforce to help track shipments around the world. A majority of the shipping records are stored on premise in an external data source. UC needs shipment details to be exposed to the custom application, and the data needs to be accessible in real time. The external data source is not OData enabled, and UC does not own a middleware tool. Which Salesforce Connect procedure should a data architect use to ensure UC's requirements are met?
A. Write an Apex class that makes a REST callout to the external API.
B. Develop a process that calls an invocable web service method.
C. Migrate the data to Heroku and register Postgres as a data source.
D. Write a custom adapter with the Apex Connector Framework.
Explanation:
Salesforce Connect lets Salesforce integrate external systems in real time without duplicating data. By default, it relies on OData-enabled sources, but when a source isn’t OData-enabled, architects can use the Apex Connector Framework to build a custom adapter. This adapter defines how Salesforce queries and displays data from the external system. It ensures shipment details are always up to date, accessible in Salesforce as external objects, and doesn’t require middleware.
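For illustration, a minimal custom adapter might look like the sketch below. This is only one possible design: the class names, the "Shipments" table, its columns, and the sample row are assumptions for this example, and the HTTP call to the on-premises API is only indicated in a comment (the endpoint would be specific to UC's system).

```apex
// Minimal Apex Connector Framework sketch: a provider plus a connection that
// exposes a "Shipments" external object. All names and columns are illustrative.
global class ShipmentDataSourceProvider extends DataSource.Provider {
    override global List<DataSource.AuthenticationCapability> getAuthenticationCapabilities() {
        return new List<DataSource.AuthenticationCapability> {
            DataSource.AuthenticationCapability.ANONYMOUS
        };
    }
    override global List<DataSource.Capability> getCapabilities() {
        return new List<DataSource.Capability> { DataSource.Capability.ROW_QUERY };
    }
    override global DataSource.Connection getConnection(DataSource.ConnectionParams params) {
        return new ShipmentDataSourceConnection();
    }
}

global class ShipmentDataSourceConnection extends DataSource.Connection {
    // Describes the external table that Salesforce turns into an external object (Shipments__x).
    override global List<DataSource.Table> sync() {
        List<DataSource.Column> columns = new List<DataSource.Column> {
            DataSource.Column.text('ExternalId', 255),
            DataSource.Column.url('DisplayUrl'),
            DataSource.Column.text('Name', 255),
            DataSource.Column.text('Status', 80)
        };
        return new List<DataSource.Table> { DataSource.Table.get('Shipments', 'Name', columns) };
    }
    // Runs whenever Salesforce queries the external object (list views, SOQL, related lists).
    override global DataSource.TableResult query(DataSource.QueryContext context) {
        // A real adapter would make an HTTP callout to the on-premises API here and map
        // the response into rows keyed by the column names declared in sync().
        List<Map<String, Object>> rows = new List<Map<String, Object>>();
        rows.add(new Map<String, Object> {
            'ExternalId' => 'SHP-0001',
            'DisplayUrl' => 'https://onprem.example.com/shipments/SHP-0001',
            'Name'       => 'Container 0001',
            'Status'     => 'In Transit'
        });
        return DataSource.TableResult.get(context, DataSource.QueryUtils.process(context, rows));
    }
}
```
In a real org the two classes would live in separate class files; the provider is then selected as the type of a new External Data Source, and "Validate and Sync" generates the external object.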
🚫 Why not the others?
A. REST callout with Apex:
A callout only retrieves data on demand but doesn’t provide the seamless external object experience that Salesforce Connect gives. Reps wouldn’t see data in list views, reports, or related lists without custom UI.
B. Invocable web service method:
Invocable methods are meant for process automation. They can’t expose external data in real time as Salesforce records, so they don’t solve UC’s reporting and integration needs.
C. Heroku Postgres as source:
Migrating to Heroku adds cost and complexity. UC needs direct integration with the on-prem system, not another data store in the middle.
📚 Reference:
Salesforce Developer Guide: Apex Connector Framework
Salesforce Help: Salesforce Connect Overview
Universal Containers (UC) has accumulated data over the years and has never deleted data from its Salesforce org. UC is now exceeding the storage allocations in the org and is looking for options to delete unused data from the org. Which three recommendations should a data architect make in order to reduce the number of records in the org? (Choose 3 answers)
A. Use hard delete in Bulk API to permanently delete records from Salesforce.
B. Use hard delete in batch Apex to permanently delete records from Salesforce.
C. Identify records in objects that have not been modified or used in the last 3 years.
D. Use Rest API to permanently delete records from the Salesforce org.
E. Archive the records in an enterprise data warehouse (EDW) before deleting them from Salesforce.
Explanation:
This question addresses a common scenario of managing org storage limits through a responsible data lifecycle strategy, emphasizing compliance and best practices.
🟢 Why A is Correct:
The Bulk API is the most efficient and recommended tool for performing large-scale data deletion operations in Salesforce. It is specifically designed for processing high volumes of records. Using the "hard delete" option in the Bulk API ensures the records are permanently deleted without being moved to the Recycle Bin, which is crucial for actually reclaiming storage space.
🟢 Why C is Correct:
The first and most critical step is to identify which data is truly eligible for deletion. You cannot arbitrarily delete data; you must follow a defined data retention policy. Identifying records that have not been modified or used in the last 3 years (or another business-agreed timeframe) is a standard, defensible criterion for archiving and deletion. This ensures business-critical data is not accidentally removed.
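For example, the identification step can start with a simple SOQL filter on LastModifiedDate. The sketch below assumes Case as the object and a 3-year cutoff; the real criteria should come from the agreed retention policy.

```apex
// Sketch: count candidate records untouched for roughly 3 years.
// The object (Case) and the cutoff are assumptions driven by the retention policy.
Datetime cutoff = Datetime.now().addYears(-3);
Integer staleCount = [
    SELECT COUNT()
    FROM Case
    WHERE LastModifiedDate < :cutoff
];
System.debug('Candidates for archive/delete: ' + staleCount);
```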
🟢 Why E is Correct:
Before performing any permanent ("hard") delete operation in Salesforce, it is a mandatory best practice for compliance and audit purposes to archive a copy of the data in a secure, long-term storage solution like an Enterprise Data Warehouse (EDW). This creates a permanent record that can be referenced if needed for legal, financial, or historical reasons after the data is removed from the operational Salesforce system.
🔴 Why B is Incorrect (hard delete in batch Apex):
While Batch Apex can delete records, it is a more complex and resource-intensive solution on the Salesforce platform compared to using the external, optimized Bulk API. The Bulk API is the simpler, more standard, and more efficient tool for this specific task of bulk deletion. Batch Apex should be used for more complex logic where data needs to be processed before deletion.
🔴 Why D is Incorrect (REST API):
The standard REST API is not designed for the high-volume deletion of millions of records. It is better suited for real-time, transactional interactions (e.g., creating or updating a few records from a mobile app). Using it for a mass deletion job would be extremely slow and would likely hit API rate limits.
Reference:
Data lifecycle management and storage management are key responsibilities of a Data Architect. The process should always be:
1) Identify data against a policy (C)
2) Archive it (E)
3) Use the most efficient tool to delete it (A)
North Trail Outfitters (NTO) operates a majority of its business from a central Salesforce org. NTO also owns several secondary orgs that the service, finance, and marketing teams work out of. At the moment, there is no integration between the central and secondary orgs, leading to data-visibility issues. Moving forward, NTO has identified that a hub-and-spoke model is the proper architecture to manage its data, where the central org is the hub and the secondary orgs are the spokes. Which tool should a data architect use to orchestrate data between the hub org and the spoke orgs?
A. A middleware solution that extracts and distributes data across both the hub and spokes.
B. Develop custom APIs to poll the hub org for change data and push into the spoke orgs.
C. Develop custom APIs to poll the spoke orgs for change data and push it into the hub org.
D. A backup and archive solution that extracts and restores data across orgs.
Explanation:
This question tests the understanding of integration patterns, specifically the implementation of a hub-and-spoke model for multi-org architectures.
Why A is Correct: A middleware solution (like MuleSoft, Informatica, Jitterbit, etc.) is the ideal tool to orchestrate a complex hub-and-spoke integration pattern. It acts as a central "broker" that can:
⇒ Extract data from the hub (central org) based on events or a schedule.
⇒ Apply transformation logic to make the data suitable for each spoke org.
⇒ Load (distribute) the transformed data to the appropriate spoke orgs.
⇒ Handle errors, retries, and logging across the entire integration flow.
This provides a scalable, maintainable, and monitored integration architecture.
Why B is Incorrect (Custom APIs to poll hub and push to spokes): While technically possible, building a custom API solution is a "build" approach that creates significant long-term maintenance overhead. The custom code would need to handle all aspects of error handling, security, transformation, and routing. A pre-built middleware platform is a "buy" approach that provides these capabilities out-of-the-box and is the recommended best practice.
Why C is Incorrect (Custom APIs to poll spokes and push to hub): This suggests the spokes are the source of truth, which contradicts the defined hub-and-spoke model where the central org is the hub (master). Data should flow from the hub out to the spokes, not be collected from the spokes into the hub in this manner.
Why D is Incorrect (Backup and archive solution): Tools like Data Loader or backup services are for one-time data migration or point-in-time recovery. They are not for ongoing, operational, near-real-time data orchestration between live systems. They lack the transformation, routing, and real-time triggering capabilities needed for this scenario.
Reference: The use of middleware for complex, multi-point integrations is a standard industry pattern. It abstracts the complexity away from individual systems and centralizes the management of data flows.
Get Cloudy Consulting monitors 15,000 servers, and these servers automatically record their status every 10 minutes. Because of company policy, these status reports must be maintained for 5 years. Managers at Get Cloudy Consulting need access to up to one week's worth of these status reports with all of their details. An Architect is recommending what data should be integrated into Salesforce and for how long it should be stored in Salesforce. Which two limits should the Architect be aware of? (Choose two.)
A. Data storage limits
B. Workflow rule limits
C. API Request limits
D. Webservice callout limits
Explanation:
This question requires evaluating a high-volume data scenario and identifying the most constraining platform limits that would be impacted.
✅ Why A is Correct (Data storage limits): This is the most obvious and critical limit.
✔️ Calculation: 15,000 servers * 6 status reports per hour * 24 hours * 7 days = 15,120,000 records per week.
Storing even a single week's worth of this data would consume a massive amount of Salesforce data storage. Storing 5 years' worth (as required by policy) is not feasible in Salesforce. The architect must be aware that this volume of data would quickly exhaust the org's storage allocation.
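✔️ Storage estimate (assuming the standard ~2 KB of data storage that most Salesforce records consume): 15,120,000 records/week × 2 KB ≈ 30 GB per week, and 5 years ≈ 3.9 billion records ≈ 7.9 TB, which is orders of magnitude beyond a typical org's storage allocation.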
✅ Why C is Correct (API Request limits): To get the detailed status reports into Salesforce, a high-volume integration would be required. Every attempt to insert 15+ million records per week would consume an enormous number of API requests. Salesforce orgs have 24-hour rolling limits on API calls (which vary by edition). This integration would likely hit those limits, preventing other integrations and tools from functioning.
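✔️ Rough math (assuming up to 200 records per request via the REST sObject Collections resource): 2,160,000 records per day ÷ 200 ≈ 10,800 API calls per day for this feed alone, all of which count against the org's 24-hour API allocation alongside every other integration.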
❌ Why B is Incorrect (Workflow rule limits): Workflow rules are a legacy automation tool and have limits on the number of active rules per object. However, this is not the primary constraint. The sheer volume of data and the API calls needed to bring it in are far more limiting factors.
❌ Why D is Incorrect (Webservice callout limits): Callout limits govern outbound calls from Apex code to external systems. This scenario describes inbound data being sent into Salesforce. Therefore, callout limits are not relevant. The relevant inbound limit is the API Request limit.
Reference: A Data Architect must be able to perform rough "napkin math" to assess the feasibility of storing and integrating large data volumes. This scenario is a classic case where the data should be stored in an external system (like a data lake), and only aggregated summaries or recent exceptions should be integrated into Salesforce.
UC has one SF org (Org A) and recently acquired a secondary company with its own Salesforce org (Org B). UC has decided to keep the orgs running separately but would like to bidirectionally share opportunities between the orgs in near-real time. Which 3 options should a data architect recommend to share data between Org A and Org B? (Choose 3 answers)
A. Leverage Heroku Connect and Heroku Postgres to bidirectionally sync Opportunities.
B. Install a 3rd party AppExchange tool to handle the data sharing
C. Develop an Apex class that pushes opportunity data between orgs daily via the Apex schedule
D. Leverage middleware tools to bidirectionally send Opportunity data across orgs.
E. Use Salesforce Connect and the cross-org adapter to visualize Opportunities into external objects
Explanation:
This question assesses knowledge of the various integration patterns and tools available for bi-directional, near-real-time synchronization between two separate Salesforce orgs.
🟢 Why A is Correct (Heroku Connect):
Heroku Connect is a powerful tool specifically designed for bi-directional sync between a Salesforce org and a Heroku Postgres database. The pattern would be: both Org A and Org B sync bi-directionally with the same Heroku Postgres database, which acts as the integration hub. This provides a robust and scalable solution for near-real-time data sharing.
🟢 Why B is Correct (3rd party AppExchange tool):
The AppExchange offers numerous pre-built applications (e.g., from Informatica, Dell Boomi, MuleSoft, or specialized tools like OwnBackup Replicate) that are designed specifically for org-to-org synchronization. These are excellent "buy" options that can accelerate implementation and reduce maintenance overhead.
🟢 Why D is Correct (Middleware tools):
As with the previous question, using a middleware tool (like MuleSoft, Jitterbit, Informatica) is a standard and highly recommended approach. The middleware would listen for changes in both orgs (via streaming API or polling) and synchronize them bi-directionally, handling transformation and conflict resolution. This provides maximum control and flexibility.
🔴 Why C is Incorrect (Apex class scheduled daily):
A daily scheduled Apex job does not meet the requirement for "near-real time" synchronization. Daily is batch-oriented and would result in significant data latency (up to 24 hours old). This solution also places the processing burden on the Salesforce platform and is more fragile and limit-prone than using a dedicated integration tool.
🔴 Why E is Incorrect (Salesforce Connect cross-org adapter):
Salesforce Connect with the cross-org adapter is a data virtualization solution: it lets users view (or "visualize") records from one org in another via external objects, without copying the data. It does not replicate or synchronize records between the orgs, so it cannot keep Opportunity data in sync in both directions. Therefore, it does not meet the requirement to "bidirectionally share opportunities."
Reference:
The key is to distinguish between read-only solutions (E), batch solutions (C), and true bi-directional near-real-time solutions (A, B, D). Heroku Connect is a particularly important platform tool to know for this use case.
Universal Containers (UC) has an open sharing model for its Salesforce users to allow all its Salesforce internal users to edit all contacts, regardless of who owns the contact. However, UC management wants to allow only the owner of a contact record to delete that contact. If a user does not own the contact, then the user should not be allowed to delete the record. How should the architect approach the project so that the requirements are met?
A. Create a "before delete" trigger to check if the current user is not the owner.
B. Set the Sharing settings as Public Read Only for the Contact object.
C. Set the profile of the users to remove delete permission from the Contact object.
D. Create a validation rule on the Contact object to check if the current user is not the owner.
Explanation:
UC requires full edit access to all Contact records for internal users (via an open sharing model, such as Public Read/Write OWD), but deletion must be restricted to the record owner only. This demands granular control over delete operations without impacting edit or read permissions. Let’s evaluate each option:
Option A: Create a "before delete" trigger to check if the current user is not the owner.
This is the optimal approach. A before delete trigger on the Contact object can use UserInfo.getUserId() to compare the current user's ID against the OwnerId field of the records in Trigger.old. If they do not match, the trigger can call addError() to block the deletion and display a custom message (e.g., "You can only delete Contacts you own"). This enforces the owner-only deletion rule at the record level without altering sharing settings, profiles, or other permissions, preserving the open edit model.
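A minimal sketch of this trigger is shown below; the trigger name and error message are illustrative.

```apex
// Sketch: block deletion of Contacts the current user does not own.
trigger ContactDeleteRestriction on Contact (before delete) {
    Id currentUserId = UserInfo.getUserId();
    for (Contact c : Trigger.old) {
        if (c.OwnerId != currentUserId) {
            // addError() cancels the delete for this record and shows the message.
            c.addError('You can only delete Contacts you own.');
        }
    }
}
```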
Option B: Set the Sharing settings as Public Read Only for the Contact object.
Changing the Organization-Wide Default (OWD) to Public Read Only would prevent all users from editing Contacts (let alone deleting them), directly conflicting with the requirement for full edit access. This option does not allow selective deletion control.
Option C: Set the profile of the users to remove delete permission from the Contact object.
Removing the "Delete" object permission from user profiles would disable deletion for all Contacts across the board, not just for non-owners. This violates the requirement that owners can still delete their own records and does not provide record-level granularity.
Option D: Create a validation rule on the Contact object to check if the current user is not the owner.
Validation rules execute on insert, update, or upsert (via "before save" context) but do not fire on delete operations. They cannot prevent deletions, making this option ineffective for the requirement.
Why Option A is Optimal:
A before delete trigger provides precise, record-specific enforcement of the owner-only deletion rule while maintaining the open sharing model for reads and edits. This is a standard Salesforce best practice for custom delete restrictions, as declarative tools (e.g., sharing, profiles, validation rules) lack the flexibility for this scenario.
References:
Salesforce Documentation: Apex Triggers - Before Delete Context
Salesforce Developer Blog: Restricting Record Deletion Based on Ownership
Salesforce Architect Guide: Sharing and Permissions
Universal Containers wishes to maintain Lead data from Leads even after they are deleted and cleared from the Recycle Bin. What approach should be implemented to achieve this solution?
A. Use a Lead standard report and filter on the IsDeleted standard field.
B. Use a Converted Lead report to display data on Leads that have been deleted.
C. Query Salesforce with the queryAll API method or using the ALL ROWS SOQL keywords.
D. Send data to a Data Warehouse and mark Leads as deleted in that system.
Explanation:
UC needs a permanent retention strategy for Lead data beyond Salesforce's standard Recycle Bin (which holds deleted records for 15 days before permanent deletion). Once cleared from the Recycle Bin, data is irretrievable via standard Salesforce mechanisms without support intervention (which is limited and not scalable). Let’s analyze each option:
Option A: Use a Lead standard report and filter on the IsDeleted standard field.
The IsDeleted field can be used in SOQL reports to view soft-deleted records (those in the Recycle Bin), but it does not access data after permanent deletion from the Recycle Bin. Once permanently deleted, the records are no longer queryable, making this insufficient for long-term maintenance.
Option B: Use a Converted Lead report to display data on Leads that have been deleted.
Converted Lead reports focus on Leads that have been converted to Accounts/Opportunities/Contacts, not deleted ones. They do not provide visibility into deleted (let alone permanently deleted) Leads and are unrelated to the requirement.
Option C: Query Salesforce with the queryAll API method or using the ALL ROWS SOQL keywords.
queryAll() in the API or ALL ROWS in SOQL retrieves soft-deleted records (in the Recycle Bin) by including IsDeleted = true. However, this does not work for records permanently deleted from the Recycle Bin, as they are no longer stored in Salesforce's database. This approach fails for data cleared from the Recycle Bin.
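For reference, this is what the ALL ROWS pattern looks like; the sketch also illustrates why it only helps while records are still in the Recycle Bin.

```apex
// Sketch: ALL ROWS includes soft-deleted rows, but only while they remain in the
// Recycle Bin. Rows that have been permanently deleted are not returned at all.
List<Lead> recycledLeads = [
    SELECT Id, Name, IsDeleted
    FROM Lead
    WHERE IsDeleted = true
    ALL ROWS
];
```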
Option D: Send data to a Data Warehouse and mark Leads as deleted in that system.
This is the best solution. Implement an integration (e.g., via ETL tools, Change Data Capture, or Outbound Messages) to replicate Lead data to an external Data Warehouse (e.g., Snowflake, AWS Redshift) in real-time or near-real-time. When a Lead is deleted in Salesforce, update the warehouse record with a "deleted" flag or soft-delete marker. This ensures permanent retention and accessibility even after Recycle Bin clearance, allowing UC to maintain historical data for compliance, reporting, or auditing without relying on Salesforce's limited recovery options.
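One way to feed the "deleted" marker to the warehouse is to publish an event when a Lead is deleted and let the integration layer relay it. The sketch below assumes a custom platform event named Lead_Deleted__e with Lead_Id__c and Deleted_At__c fields (hypothetical names); Change Data Capture on Lead is an alternative that requires no code.

```apex
// Sketch only: Lead_Deleted__e, Lead_Id__c, and Deleted_At__c are assumed custom
// platform event definitions. A middleware subscriber would mark the matching
// warehouse row as deleted.
trigger LeadDeleteCapture on Lead (after delete) {
    List<Lead_Deleted__e> events = new List<Lead_Deleted__e>();
    for (Lead l : Trigger.old) {
        events.add(new Lead_Deleted__e(
            Lead_Id__c    = l.Id,
            Deleted_At__c = System.now()
        ));
    }
    EventBus.publish(events);
}
```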
Why Option D is Optimal:
Salesforce does not natively support indefinite retention post-Recycle Bin clearance, so external archiving to a Data Warehouse provides a scalable, auditable solution. It aligns with data governance best practices for large-scale orgs, enabling queries on historical Lead data indefinitely while marking deletions appropriately.
References:
Salesforce Documentation: Recycle Bin and Data Recovery
Salesforce Developer Guide: SOQL ALL ROWS Limitations (confirms no access post-permanent deletion)
Salesforce Architect Guide: Data Archiving and Retention
Universal Containers (UC) is implementing Salesforce and will be using Salesforce to track customer complaints, provide white papers on products, and provide subscription based support. Which license type will UC users need to fulfill UC's requirements?
A. Sales Cloud License
B. Lightning Platform Starter License
C. Service Cloud License
D. Salesforce License
Explanation:
UC's use cases center on customer service: tracking complaints (case management), providing white papers (knowledge base for self-service or agent support), and subscription-based support (entitlements and service contracts). The appropriate license must support these service-oriented features. Let’s evaluate each option:
❌ Option A: Sales Cloud License
Sales Cloud focuses on sales processes (e.g., leads, opportunities, forecasting), not service features like case management or knowledge articles for complaints and support. It lacks built-in tools for subscription support or complaint tracking.
❌ Option B: Lightning Platform Starter License
This is a low-cost license for custom app development on the Lightning Platform, with limited access to standard objects like Cases or Knowledge. It does not include service-specific features for complaints, white papers, or subscription support, making it unsuitable.
✅ Option C: Service Cloud License
This is the correct choice. Service Cloud provides comprehensive tools for customer service, including Case object for tracking complaints, Knowledge base for white papers and product documentation, and Entitlements/Contracts for subscription-based support. It enables agents to handle support requests efficiently, aligning perfectly with UC's requirements.
❌ Option D: Salesforce License
"Salesforce License" is not a standard edition; it may refer to the legacy full CRM license (now superseded by Sales or Service Cloud). It does not specifically address service functionalities like complaint tracking or knowledge management.
✅ Why Option C is Optimal:
Service Cloud is designed for customer support scenarios, offering native features for cases (complaints), knowledge articles (white papers), and service entitlements (subscriptions). This ensures UC can implement a unified service platform without needing multiple licenses.
References:
Salesforce Documentation: Service Cloud Overview
Salesforce Help: Case Management and Knowledge
Universal Containers (UC) requires 2 years of customer-related cases to be available in Salesforce for operational reporting. Any cases older than 2 years and up to 7 years old need to be available on demand to the Service agents. UC creates 5 million cases per year. Which 2 data archiving strategies should a data architect recommend? (Choose 2 options)
A. Use custom objects for cases older than 2 years and use a nightly batch to move them.
B. Sync cases older than 2 years to an external database, and provide Service agents access to the database.
C. Use Big Objects for cases older than 2 years, and use a nightly batch to move them.
D. Use Heroku and external objects to display cases older than 2 years, and Bulk API to hard delete from Salesforce.
Explanation:
✅ C. Use Big objects for cases older than 2 years, and use nightly batch to move them.
Big Objects are designed to handle massive amounts of data that do not need to be accessed frequently, which makes them ideal for storing historical data like cases older than 2 years. They support standard querying via SOQL with some limitations and are cost-effective for long-term storage. A nightly batch job ensures that eligible data is moved regularly.
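A nightly move could look roughly like the batch sketch below. Case_Archive__b and its fields are assumed custom big object definitions, so the field mapping and index design would need to match the real object; the deletion of the archived Cases is deliberately left to a separate, verified step.

```apex
// Sketch: copy closed Cases older than 2 years into an assumed custom big object
// (Case_Archive__b). Deleting the archived Cases would run as a separate step
// (for example a follow-up batch or Bulk API hard delete) once the copy is verified.
global class CaseArchiveBatch implements Database.Batchable<SObject>, Schedulable {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        Datetime cutoff = Datetime.now().addYears(-2);
        return Database.getQueryLocator([
            SELECT Id, CaseNumber, Subject, Status, ClosedDate
            FROM Case
            WHERE IsClosed = true AND ClosedDate < :cutoff
        ]);
    }
    global void execute(Database.BatchableContext bc, List<Case> scope) {
        List<Case_Archive__b> archives = new List<Case_Archive__b>();
        for (Case c : scope) {
            archives.add(new Case_Archive__b(
                Case_Number__c = c.CaseNumber,
                Subject__c     = c.Subject,
                Status__c      = c.Status,
                Closed_Date__c = c.ClosedDate
            ));
        }
        // Big object records are written outside the normal transaction.
        Database.insertImmediate(archives);
    }
    global void finish(Database.BatchableContext bc) {}
    // Lets the same class be scheduled nightly via System.schedule(...).
    global void execute(SchedulableContext sc) {
        Database.executeBatch(new CaseArchiveBatch(), 200);
    }
}
```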
✅ D. Use Heroku and external objects to display cases older than 2 years and bulk API to hard delete from Salesforce.
Heroku with external objects (via Salesforce Connect) is a good strategy for providing on-demand access to historical data stored outside Salesforce. This approach keeps Salesforce data volumes within limits and preserves performance, and the Bulk API can be used to hard delete old records once they have been archived externally.
❌ A. Use custom objects for cases older than 2 years and use nightly batch to move them.
This increases storage usage in Salesforce and does not significantly reduce org size. It also lacks the querying performance benefits of Big Objects or external systems.
❌ B. Sync cases older than 2 years to an external database, and provide access to Service agents to the database
While viable in concept, this lacks seamless integration within the Salesforce UI. Service agents would need to leave Salesforce to access case data, which hurts productivity.
📚 Reference:
Salesforce Help: Big Objects Overview
Salesforce Architect Guide: Data Archiving Strategies
Universal Containers has two systems: Salesforce and an on-premises ERP system. An architect has been tasked with copying Opportunity records to the ERP once they reach the Closed/Won stage. The Opportunity record in the ERP system will be read-only for all fields copied in from Salesforce. What is the optimal real-time approach that achieves this solution?
A. Implement a Master Data Management system to determine system of record.
B. Implement a workflow rule that sends Opportunity data through Outbound Messaging.
C. Have the ERP poll Salesforce nightly and bring in the desired Opportunities.
D. Implement an hourly integration to send Salesforce Opportunities to the ERP system.
Explanation:
✅ B. Implement a workflow rule that sends Opportunity data through Outbound Messaging.
Outbound Messaging is a native point-and-click feature that supports real-time integration (or near real-time) without requiring Apex code. It’s ideal for one-way data transfers like copying Closed/Won Opportunities to a read-only ERP system.
❌ A. Implement a Master Data Management system
MDM is overkill for this use case. It adds unnecessary complexity when Salesforce is clearly the system of record for Opportunities.
❌ C. Have the ERP poll Salesforce nightly
Polling is not real-time and is resource inefficient. It can also miss near-term updates or cause synchronization delays.
❌ D. Implement an hourly integration
An hourly schedule is not considered "real-time". Outbound Messaging provides immediate updates, which is the core requirement here.
Reference:
Salesforce Help: Outbound Messaging Overview
Salesforce Integration Patterns and Practices
Universal Containers (UC) has a custom discount request object set as a detail object with a custom product object as the master. There is a requirement to allow the creation of generic discount requests without the custom product object as its master record. What solution should an Architect recommend to UC?
A. Mandate the selection of a custom product for each discount request.
B. Create a placeholder product record for the generic discount request.
C. Remove the master-detail relationship and keep the objects separate.
D. Change the master-detail relationship to a lookup relationship.
Explanation:
✅ D. Change the master-detail relationship to a lookup relationship.
Master-detail relationships require the detail record to have a parent. To allow creation of standalone discount requests, a lookup relationship is appropriate. It allows flexibility—linking to a custom product when applicable and remaining unlinked otherwise.
❌ A. Mandate the selection of a custom product
This violates the requirement for "generic" discount requests which must exist without a product.
❌ B. Create a placeholder product record
Workarounds like a fake “Generic Product” create messy data, skew reports, and complicate governance. Not sustainable.
❌ C. Remove the master-detail relationship and keep the objects separate
Removing the relationship entirely would break reporting and data integrity when a discount request does need to be tied to a product. Lookup gives optionality without losing relational structure.
📚 Reference:
Salesforce Help: Relationship Considerations
Salesforce Object Relationships Overview
Universal Containers wants to develop a dashboard in Salesforce that will allow Sales Managers to do data exploration using their mobile device (i.e., drill down into sales-related data) and have the possibility of adding ad-hoc filters while on the move. What is a recommended solution for building data exploration dashboards in Salesforce?
A. Create a Dashboard in an external reporting tool, export data to the tool, and add a link to the dashboard in Salesforce.
B. Create a Dashboard in an external reporting tool, export data to the tool, and embed the dashboard in Salesforce using the Canvas toolkit.
C. Create a standard Salesforce Dashboard and connect it to reports with the appropriate filters.
D. Create a Dashboard using Analytics Cloud that will allow the user to create ad-hoc lenses and drill down.
Explanation:
This question tests the knowledge of the right analytics tool for the job, specifically focusing on advanced features like ad-hoc exploration and mobile usability.
Why D is Correct: Salesforce Analytics Cloud (Tableau CRM) is specifically designed for this purpose. It provides:
✔️ Advanced Data Exploration: Users can create ad-hoc "lenses" to explore data dynamically without a pre-built report.
✔️ Powerful Drill-Down: It allows users to start from a high-level dashboard and interactively drill down into the underlying details by clicking on data points.
✔️ Mobile-First Design: Analytics Cloud dashboards are built to be fully functional and interactive on mobile devices, perfectly matching the requirement for managers "on the move."
✔️ Smart Filtering: It supports adding and changing filters on the fly for true exploratory analysis.
Why A & B are Incorrect (External Dashboard): While external tools are powerful, they introduce complexity.
→ Exporting data to an external system creates a data silo, adds latency, and requires managing a separate security model.
→ Embedding an external dashboard (e.g., via Canvas) often results in a clunky user experience, especially on mobile, and may not provide the seamless, native integration required for the best mobile exploration.
Why C is Incorrect (Standard Salesforce Dashboard): Standard Salesforce dashboards are excellent for monitoring predefined KPIs based on standard reports. However, they are not designed for ad-hoc data exploration.
→ Users cannot create new filters or change the grouping of data on the fly from a mobile device; they can only interact with the filters that were pre-configured.
→ The drill-down capabilities are much more limited compared to Analytics Cloud.
Reference: The key differentiator is "ad-hoc filters while on the move." This is the core value proposition of Salesforce Analytics Cloud/Tableau CRM over standard reporting.