Data-Cloud-Consultant Practice Test Questions

161 Questions


Which statement about Data Cloud's Web and Mobile Application Connector is true?


A. A standard schema containing event, profile, and transaction data is created at the time the connector is configured.


B. The Tenant Specific Endpoint is auto-generated in Data Cloud when setting up the connector.


C. Any data streams associated with the connector will be automatically deleted upon deleting the app from Data Cloud Setup.


D. The connector schema can be updated to delete an existing field.





B.
  The Tenant Specific Endpoint is auto-generated in Data Cloud when setting up the connector.

Explanation:
The Web and Mobile Application Connector in Salesforce Data Cloud enables ingestion of engagement and profile data from websites or apps via SDKs. During setup, it auto-generates a unique Tenant Specific Endpoint—a secure URL for data transmission. This endpoint is essential for SDK initialization and ensures tenant isolation. Unlike schema creation, which requires user-uploaded JSON, or deletions requiring manual steps, this auto-generation simplifies secure connectivity without manual URL configuration.

Correct Option:

B. The Tenant Specific Endpoint is auto-generated in Data Cloud when setting up the connector.
This endpoint is automatically created upon configuring the connector in Data Cloud Setup under Websites & Mobile Apps. It serves as the ingestion URL (e.g., https://yourtenant-specific-endpoint.salesforce.com), used by SDKs to send events. This process ensures secure, isolated data flow and is displayed immediately on the app details page for copy-paste into app code, streamlining integration without custom endpoint management.
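For illustration, here is a minimal Python sketch of sending an engagement event to a tenant specific endpoint. The URL path, payload shape, and event names are assumptions for this sketch, not the documented contract; in practice the web or mobile SDK constructs and sends these requests for you.

```python
import requests

# Tenant specific endpoint copied from the app details page in
# Data Cloud Setup > Websites & Mobile Apps (illustrative value only).
TENANT_ENDPOINT = "https://yourtenant-specific-endpoint.salesforce.com"

# Hypothetical engagement event; the real payload shape is defined by
# the JSON schema uploaded when the connector was configured.
event = {
    "deviceId": "device-abc-123",
    "eventType": "addToCart",
    "dateTime": "2025-01-15T12:00:00Z",
    "attributes": {"productSku": "TENT-01", "quantity": 1},
}

# The "/web/events" path is an assumption, not the documented contract.
response = requests.post(f"{TENANT_ENDPOINT}/web/events", json=event, timeout=10)
response.raise_for_status()
```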

Incorrect Options:

A. A standard schema containing event, profile, and transaction data is created at the time the connector is configured.
No automatic schema creation occurs; users must upload a custom JSON schema file defining event types, fields, and categories during setup. Data Cloud provides templates for common use cases like e-commerce, but the schema is user-defined to match app data structures, ensuring flexibility for engagement, profile, or transaction events.
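To make the user-defined schema concrete, here is a hypothetical example of what an uploaded schema file might contain, written out in Python. The keys and structure are illustrative assumptions, not the documented schema format.

```python
import json

# Illustrative connector schema defining one engagement event type.
# The exact keys the Web and Mobile Application Connector expects may
# differ; treat this shape as an assumption.
schema = {
    "events": [
        {
            "name": "addToCart",
            "category": "Engagement",
            "fields": [
                {"name": "productSku", "type": "string"},
                {"name": "quantity", "type": "number"},
                {"name": "eventDateTime", "type": "datetime"},
            ],
        }
    ]
}

with open("connector_schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```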

C. Any data streams associated with the connector will be automatically deleted upon deleting the app from Data Cloud Setup.
Deleting the app requires first manually deleting associated data streams, as Data Cloud prompts a warning to prevent data loss. Streams are independent objects for data mapping and ingestion; automatic deletion isn't supported to avoid unintended disruptions to ongoing data flows.

D. The connector schema can be updated to delete an existing field.
Schema updates are additive only—you can add events or fields but must retain all existing ones to maintain data consistency and avoid breaking active data streams. Deleting fields requires recreating the connector with a new schema, as Data Cloud enforces immutability for stability in production environments.

Reference:
Salesforce Developer Documentation: Tenant Specific Endpoint; Connect a Website or Mobile App; Delete a Website or Mobile Connector App.

Where is value suggestion for attributes used in segmentation enabled when creating the DMO?


A. Data Mapping


B. Data Transformation


C. Segment Setup


D. Data Stream Setup





A.
  Data Mapping

Explanation:
Value suggestions in segmentation help users quickly select common or expected attribute values when building segments. These suggestions come from the data mapped into Data Cloud’s Data Model Objects (DMOs). The feature is enabled during Data Mapping, where the system analyzes mapped attributes and their values. By configuring this correctly at the DMO creation stage, segmentation benefits from intelligent value recommendations that accelerate segment building.

Correct Option:

A. Data Mapping:
Value suggestions are enabled during Data Mapping because this is where attributes from ingested data streams are mapped to DMO fields. Value suggestion is turned on per attribute at this stage, and Data Cloud then surfaces actual values from the mapped data as suggestions in the segmentation canvas. This ensures users receive relevant value recommendations based on real data, enhancing accuracy and efficiency when building segments.

Incorrect Options:

B. Data Transformation:
Data Transformation handles cleansing, restructuring, and normalization of data before mapping. It does not control how segmentation value suggestions are generated. Although transformations affect the data quality, the enabling of value suggestions happens only after attributes are mapped into the DMO.

C. Segment Setup:
Segment Setup defines segmentation logic and activation capabilities but does not influence whether value suggestions are enabled. Suggestions must already be prepared from mapped attributes; they are not activated at the segmentation stage.

D. Data Stream Setup:
Data Stream Setup is used to configure ingestion sources, schedules, and categories. It does not enable value suggestion functionality. Value suggestions depend on attribute mapping to DMOs, which occurs after stream setup.

Reference:
Salesforce Data Cloud — Data Mapping & Segmentation Attribute Suggestion Documentation

Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand. Which capability best supports NTO's desire to separate its data by brand?


A. Data sources for each brand


B. Data model objects for each brand


C. Data spaces for each brand


D. Data streams for each brand





C.
  Data spaces for each brand

Explanation:
NTO's requirement is for logical data separation and security between its two brands within a single Data Cloud org. This calls for logical partitioning within one tenant, not just separate ingestion paths or data models. The capability must enforce that data, segments, and insights for one brand are inaccessible from the context of the other, while still allowing centralized platform management.

Correct Option:

C. Data spaces for each brand:
This is correct. Data Spaces are specifically designed for this multi-brand or multi-business unit use case. They provide logical partitioning within a single Data Cloud org, creating separate, secure environments. Each brand (Outdoor Clothing, Gourmet Food) would have its own Data Space, ensuring complete data isolation, security, and dedicated business processes.

Incorrect Options:

A. Data sources for each brand:
Using separate data sources only manages how data is ingested. Once ingested, the data would reside in a common data lake and would not be automatically isolated by brand, failing the security requirement.

B. Data model objects for each brand:
Creating separate model objects (e.g., Gourmet_Customer__dlm, Clothing_Customer__dlm) organizes the schema but does not enforce data security or prevent users with access to one object from seeing the other. It is a structural choice, not an isolation capability.

D. Data streams for each brand:
Data Streams are connectors for bringing data in from external storage. Like data sources, they are an ingestion concern and do not provide any data security or logical separation once the data lands in the platform.

Reference:
Salesforce Help - "What Are Data Spaces?"

A user wants to be able to create a multi-dimensional metric to identify unified individual lifetime value (LTV). Which sequence of data model object (DMO) joins is necessary within the calculated Insight to enable this calculation?


A. Unified Individual > Unified Link Individual > Sales Order


B. Unified Individual > Individual > Sales Order


C. Sales Order > Individual > Unified Individual


D. Sales Order > Unified Individual





A.
  Unified Individual > Unified Link Individual > Sales Order

Explanation:
Lifetime Value (LTV) is calculated per unified customer (Unified Individual), but the actual revenue comes from the Sales Order DMO. To aggregate order value at the unified customer level, the Calculated Insight must join Unified Individual → Unified Link Individual → Sales Order. The Unified Link Individual table is the required bridge that connects each Unified Individual to all its source Individuals (from different data sources), and those Individuals are directly linked to their Sales Orders. Without this bridge, multi-source revenue cannot be correctly attributed to the unified profile.

Correct Option:

A. Unified Individual > Unified Link Individual > Sales Order:
This is the only path that correctly rolls up revenue from potentially multiple source systems to the single Unified Individual. Unified Link Individual acts as the many-to-many resolution table linking one Unified Individual to all its constituent Individuals. Each Individual then links to its Sales Order records, enabling accurate, de-duplicated LTV calculations across all data sources.
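As a sketch, the join path might look like the following Calculated Insight query. The DMO and field API names below (ssot__UnifiedLinkIndividual__dlm, ssot__GrandTotalAmount__c, and so on) are assumptions; verify them against your org's data model before use.

```python
# Illustrative Calculated Insight SQL for unified LTV. Object and field
# API names are assumptions -- check your org's actual data model.
LTV_QUERY = """
SELECT
    ui.ssot__Id__c AS customer_id__c,
    SUM(so.ssot__GrandTotalAmount__c) AS ltv__c
FROM ssot__UnifiedIndividual__dlm ui
JOIN ssot__UnifiedLinkIndividual__dlm uli
    ON ui.ssot__Id__c = uli.ssot__UnifiedRecordId__c
JOIN ssot__SalesOrder__dlm so
    ON uli.ssot__SourceRecordId__c = so.ssot__SoldToCustomerId__c
GROUP BY ui.ssot__Id__c
"""
print(LTV_QUERY)
```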

Incorrect Options:

B. Unified Individual > Individual > Sales Order:
There is no direct relationship from Unified Individual to the source Individual DMO. Skipping Unified Link Individual prevents proper resolution when a unified profile contains records from multiple source systems, leading to incomplete or duplicated revenue.

C. Sales Order > Individual > Unified Individual:
While technically possible to start from Sales Order, this direction is not recommended for LTV because it can create fan-out duplication if the same Individual belongs to multiple Unified Individuals during processing. Starting from Unified Individual ensures one-row-per-customer context.

D. Sales Order > Unified Individual:
No direct relationship exists between Sales Order and Unified Individual. Sales Orders are always linked to the source Individual (or Party), not directly to the resolved Unified Individual, making this join impossible in the data model.

Reference:
Salesforce Help: “Calculated Insights – Required Joins for Cross-Source Metrics” and Data Model Reference diagram showing Unified Link Individual as the mandatory bridge for any aggregation from source transactional DMOs (Sales Order, Engagement, etc.) to Unified Individual.

During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that Consent API will solve this business need. Which two considerations should the consultant inform the customer about? (Choose 2 answers)


A. Data deletion requests are reprocessed at 30, 60, and 90 days.


B. Data deletion requests are processed within 1 hour.


C. Data deletion requests are submitted for Individual profiles.


D. Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds.





B.
  Data deletion requests are processed within 1 hour.

C.
  Data deletion requests are submitted for Individual profiles.

Explanation:
Right-to-be-forgotten requests require the permanent deletion of an individual’s data from systems where personal information is stored. Salesforce’s Consent API supports these deletion requests within Data Cloud. It processes data deletion quickly and operates at the Individual profile level. Consultants must understand the timing and the scope of deletion to ensure customers comply with privacy regulations and manage expectations about how Data Cloud handles and propagates deletion.

Correct Options:

B. Data deletion requests are processed within 1 hour:
Deletion requests submitted through the Consent API are generally processed within about an hour. This timely processing supports regulatory compliance, ensuring personal data is removed from Data Cloud quickly. The processing window is automated and reliable, enabling businesses to confidently respond to privacy requests without extensive manual intervention.

C. Data deletion requests are submitted for Individual profiles:
In Data Cloud, deletions occur at the Individual level—the unified profile containing attributes gathered from multiple data sources. This ensures all associated personal data segments, identity-resolved attributes, and connected objects tied to that Individual are removed. Submitting deletion requests at this level ensures comprehensive compliance with right-to-be-forgotten regulations.
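As a hedged sketch, a right-to-be-forgotten request via the Consent API might look like the following. The endpoint path, API version, and parameters are assumptions for illustration; consult the Consent API reference for the exact contract.

```python
import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"  # illustrative
ACCESS_TOKEN = "<oauth-access-token>"               # illustrative

# The path and parameters below are assumptions for this sketch, not
# the documented Consent API contract. Note the request targets an
# Individual profile, per the correct option above.
response = requests.post(
    f"{INSTANCE_URL}/services/data/v59.0/consent/dcdelete",
    params={"ids": "individual-id-001"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```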

Incorrect Options:

A. Data deletion requests are reprocessed at 30, 60, and 90 days:
This is incorrect because deletion requests are not reprocessed on a 30/60/90-day schedule. The deletion workflow is triggered promptly, typically completing within an hour, and does not require or include periodic reprocessing cycles.

D. Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds:
This is incorrect. Deletion requests processed in Data Cloud do not automatically propagate into other Salesforce clouds (e.g., Sales Cloud, Service Cloud, Marketing Cloud). Each system requires its own deletion mechanism, and the Consent API does not cascade deletions across clouds.

Reference:
Salesforce Data Cloud — Consent API & Data Deletion (Right to be Forgotten) Documentation

Which data model subject area defines the revenue or quantity for an opportunity by product family?


A. Engagement


B. Sales Order


C. Product


D. Party





B.
  Sales Order

Explanation:
In the Salesforce Data Model, a subject area groups related objects for business analysis. The question describes a scenario of tracking the financial outcome (revenue/quantity) of a sales opportunity, broken down by the type of product sold (product family). This is a direct representation of a sales order or transaction line item, not just the product itself or the general engagement.

Correct Option:

B. Sales Order:
This is correct. The Sales Order subject area is centered on the transaction and its line items. It contains objects like Order and OrderItem (or Opportunity and OpportunityLineItem in the core Salesforce model), which are precisely where you define the revenue amount and quantity for each product family associated with an opportunity.
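As a toy illustration of the rollup these line-item objects represent, consider hypothetical line items aggregated by product family:

```python
from collections import defaultdict

# Hypothetical opportunity line items: product family, quantity, revenue.
line_items = [
    {"family": "Tents",    "qty": 2, "revenue": 800.00},
    {"family": "Footwear", "qty": 1, "revenue": 150.00},
    {"family": "Tents",    "qty": 1, "revenue": 400.00},
]

# Roll up revenue and quantity per product family, mirroring what the
# Sales Order subject area's line-item objects capture.
totals = defaultdict(lambda: {"qty": 0, "revenue": 0.0})
for item in line_items:
    totals[item["family"]]["qty"] += item["qty"]
    totals[item["family"]]["revenue"] += item["revenue"]

print(dict(totals))
# {'Tents': {'qty': 3, 'revenue': 1200.0}, 'Footwear': {'qty': 1, 'revenue': 150.0}}
```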

Incorrect Options:

A. Engagement:
This subject area focuses on customer interactions and touchpoints, such as email opens, service cases, or web visits. It does not contain the transactional data for revenue and quantity by product.

C. Product:
This subject area defines the master data about what is being sold—the product definitions, hierarchies, and categories (like Product Family). However, it does not store the transactional metrics of how much was sold for a specific opportunity.

D. Party:
This subject area deals with the individuals and organizations involved in business, such as Customers, Contacts, and Employees (the "who"). It does not contain the transactional line-item details of a sale.

Reference:
Salesforce Data Model Guide - "Sales Order Subject Area"

A customer has outlined requirements to trigger a journey for an abandoned browse behavior. Based on the requirements, the consultant determines they will use streaming insights to trigger a data action to Journey Builder every hour. How should the consultant configure the solution to ensure the data action is triggered at the cadence required?


A. Set the activation schedule to hourly.


B. Configure the data to be ingested in hourly batches.


C. Set the journey entry schedule to run every hour.


D. Set the insights aggregation time window to 1 hour.





D.
  Set the insights aggregation time window to 1 hour.

Explanation:
In Salesforce Data Cloud, streaming insights process real-time engagement data like abandoned browse behavior to detect patterns within a defined rolling time window. For hourly triggers to Journey Builder via data actions, the aggregation time window must be set to 1 hour, ensuring insights recompute and evaluate rules every hour. This controls the cadence of data action execution, enabling timely journey entry without relying on ingestion batches or unrelated schedules, thus aligning with the customer's requirement for efficient, event-driven orchestration.

Correct Option:

D. Set the insights aggregation time window to 1 hour:
Streaming insights use a configurable rolling window (from 1 minute up to 24 hours) to aggregate streaming data such as web and mobile events. Setting it to 1 hour causes the insight to refresh hourly, re-evaluating conditions (e.g., abandonment criteria) and triggering the associated data actions to Journey Builder when they are met. This directly governs the trigger frequency, supports real-time behaviors, and integrates seamlessly with Marketing Cloud for automated journeys without custom coding.
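A hedged sketch of what such a streaming insight definition might look like follows; the object and field names and the exact windowing syntax are assumptions used to illustrate the 1-hour aggregation window.

```python
# Illustrative streaming insight SQL with a 1-hour aggregation window.
# Object/field names and windowing syntax are assumptions -- verify
# against the Streaming Insights SQL reference for your org.
ABANDONED_BROWSE_QUERY = """
SELECT
    e.ssot__IndividualId__c AS individual_id__c,
    COUNT(*) AS product_views__c
FROM Website_Engagement__dlm e
WHERE e.event_type__c = 'productView'
GROUP BY
    window(e.event_datetime__c, '1 HOUR'),
    e.ssot__IndividualId__c
"""
print(ABANDONED_BROWSE_QUERY)
```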

Incorrect Options:

A. Set the activation schedule to hourly:
Activations publish segment data to targets like Marketing Cloud at scheduled intervals, but they do not trigger data actions based on streaming insights. Data actions are event-driven via insight rules, not activation schedules, so this would not achieve the required hourly cadence for journey triggers.

B. Configure the data to be ingested in hourly batches:
Ingestion batching applies to bulk data streams, not streaming sources like Web/Mobile SDKs for real-time events. Streaming data is continuous, and batching would delay processing, contradicting the near-real-time needs of abandoned behavior detection and hourly action triggers.

C. Set the journey entry schedule to run every hour:
Journey Builder entry sources (e.g., API events from data actions) are typically event-based, not scheduled. Scheduling the entry would poll for data hourly, adding unnecessary latency and inefficiency compared to insight-driven triggers, and it doesn't leverage Data Cloud's streaming capabilities.

Reference:
Salesforce Help: “Streaming Insights Overview” – Details aggregation windows and data action triggers for real-time orchestration to Journey Builder.

A new user of Data Cloud only needs to be able to review individual rows of ingested data and validate that it has been modeled successfully to its linked data model object. The user will also need to make changes if required. What is the minimum permission set needed to accommodate this use case?


A. Data Cloud for Marketing Specialist


B. Data Cloud Admin


C. Data Cloud for Marketing Data Aware Specialist


D. Data Cloud User





C.
  Data Cloud for Marketing Data Aware Specialist

Explanation:
A user who needs to review ingested data, inspect individual rows, validate mapping to Data Model Objects (DMOs), and make adjustments requires permissions beyond simple viewing but not full administrative access. The Data Cloud for Marketing Data Aware Specialist permission set is designed for users who work hands-on with data streams, mappings, and validation tasks. It grants visibility into modeled data and allows making changes without giving full admin privileges.

Correct Option:

C. Data Cloud for Marketing Data Aware Specialist:
This permission set is tailored for users involved in data operations. It enables reviewing data stream records, checking how fields map to DMOs, monitoring ingest quality, and making adjustments to mappings when needed. It provides more granular data-level access than basic Data Cloud User but avoids broader admin powers, making it the minimum set that fulfills the stated requirements effectively.

Incorrect Options:

A. Data Cloud for Marketing Specialist:
This permission set focuses primarily on segmentation, activation, and marketing use cases rather than deep data inspection. It does not offer sufficient permissions to review row-level ingested data or modify data modeling configurations. Therefore, it cannot meet the user's data validation and adjustment needs.

B. Data Cloud Admin:
Although this permission set covers all capabilities, including data ingestion, modeling, identity resolution, and activation, it is far more powerful than required. Granting full administrative access would violate the principle of least privilege and is unnecessary for a user whose responsibilities are limited to validating and adjusting modeled data.

D. Data Cloud User:
This permission set allows basic access to Data Cloud features but does not provide the ability to inspect individual ingested rows or make changes to mappings or modeling. It is insufficient for users performing detailed data validation or operational tasks.

Reference:
Salesforce Data Cloud — Permission Set Overview: Data Aware Specialist, Admin, and Marketing Roles

Northern Trail Outfitters wants to implement Data Cloud and has several use cases in mind. Which two use cases are considered a good fit for Data Cloud? Choose 2 answers


A. To ingest and unify data from various sources to reconcile customer identity


B. To create and orchestrate cross-channel marketing messages


C. To use harmonized data to more accurately understand the customer and business impact


D. To eliminate the need for separate business intelligence and IT data management tools





A.
  To ingest and unify data from various sources to reconcile customer identity

C.
  To use harmonized data to more accurately understand the customer and business impact

Explanation:
Data Cloud's primary strengths are data unification, identity resolution, and creating a single, actionable customer profile. It is designed to ingest data from multiple sources, create a "golden record" for each customer, and make that unified data available for analysis and activation across the Salesforce Platform. Use cases that leverage these core functionalities are its best fit.

Correct Option:

A. To ingest and unify data from various sources to reconcile customer identity:
This is a foundational use case for Data Cloud. Its core engine is built to ingest data from diverse sources (e.g., CRM, e-commerce, loyalty platforms) and use identity resolution rules to merge duplicate records, creating a single, trusted customer view.

C. To use harmonized data to more accurately understand the customer and business impact:
Once data is unified, Data Cloud enables powerful analysis through Calculated Insights and segments. This allows businesses to gain a holistic understanding of customer behavior, value, and the overall impact of business initiatives, which is a primary goal of the platform.

Incorrect Options:

B. To create and orchestrate cross-channel marketing messages:
While Data Cloud feeds this use case by providing the unified audience segments, the actual orchestration of messages is the primary function of Marketing Cloud Engagement or Journeys. Data Cloud is the data foundation that enables targeting, not the execution engine for the campaigns themselves.

D. To eliminate the need for separate business intelligence and IT data management tools:
This is incorrect and overstates Data Cloud's role. It is not designed to replace specialized data warehouses (like Snowflake), ETL tools (like Informatica), or enterprise BI platforms (like Tableau). Instead, it complements them by serving as a real-time customer data platform that feeds these systems with unified profiles.

Reference:
Salesforce Architect - "Data Cloud Use Cases"

A company stores customer data in Marketing Cloud and uses the Marketing Cloud Connector to ingest data into Data Cloud. Where does a request for data deletion or right to be forgotten get submitted?


A. In Data Cloud settings


B. On the individual data profile in Data Cloud


C. In Marketing Cloud settings


D. through Consent API





C.
  In Marketing Cloud settings

Explanation:
When using the Marketing Cloud Connector to ingest customer data into Salesforce Data Cloud, data deletion requests (e.g., right to be forgotten under GDPR/CCPA) must be managed at the source to ensure comprehensive compliance. Data Cloud does not natively support direct deletions for ingested data from connectors; instead, deletions are handled in the originating system (Marketing Cloud). This propagates deletions to Data Cloud via the connector's synchronization, preventing data resurrection on subsequent syncs and maintaining a single point of control for privacy requests.

Correct Option:

C. In Marketing Cloud settings:
Marketing Cloud provides dedicated privacy management tools, including the "Contact Deletion" feature under Setup > Privacy Management, where users can submit bulk or individual right-to-be-forgotten requests. For connector-ingested data, deleting contacts in Marketing Cloud triggers automatic removal from Data Cloud during the next sync cycle (typically hourly or as configured). This ensures end-to-end compliance without manual intervention in Data Cloud, as the connector respects source deletions to avoid re-ingestion of deleted records.

Incorrect Options:

A. In Data Cloud settings:
Data Cloud's global settings (e.g., under Setup > Data Cloud Settings) handle ingestion configurations, permissions, and general compliance toggles but do not process individual deletion requests. Bulk operations like DMO deletions are possible for managed data, but for connector sources like Marketing Cloud, changes must originate there to sync properly.

B. On the individual data profile in Data Cloud:
Individual profiles in Data Cloud allow viewing unified data and basic actions like exporting, but no "delete" or "forget" button exists for privacy requests. Attempting manual edits or suppressions here would be overwritten by connector syncs, making it ineffective and non-compliant for source-managed data.

D. through Consent API:
The Consent API manages opt-in/opt-out preferences and granular consent revocation, but it does not handle full data deletion or right-to-be-forgotten requests. It's designed for ongoing consent signals, not erasure, and would not trigger removal from Marketing Cloud or synced Data Cloud profiles.

Reference:
Salesforce Help: “Delete Contacts in Marketing Cloud for Data Cloud Compliance” – Explains source-system deletion propagation via connectors.

A Data Cloud consultant recently discovered that their identity resolution process is matching individuals that share email addresses or phone numbers, but are not actually the same individual. What should the consultant do to address this issue?


A. Modify the existing ruleset with stricter matching criteria, run the ruleset and review the updated results, then adjust as needed until the individuals are matching correctly.


B. Create and run a new ruleset with fewer matching rules, compare the two rulesets to review and verify the results, and then migrate to the new ruleset once approved.


C. Create and run a new ruleset with stricter matching criteria, compare the two rulesets to review and verify the results, and then migrate to the new ruleset once approved.


D. Modify the existing ruleset with stricter matching criteria, compare the two rulesets to review and verify the results, and then migrate to the new ruleset once approved.





C.
  Create and run a new ruleset with stricter matching criteria, compare the two rulesets to review and verify the results, and then migrate to the new ruleset once approved.

Explanation:
When identity resolution incorrectly matches individuals, it indicates that the current ruleset is too broad or permissive. The correct approach is to create a new ruleset with stricter match criteria, run it in parallel, and compare results to the existing ruleset. This controlled approach prevents disruption to production data and allows validation before fully switching. Once verified, the consultant can migrate to the improved ruleset.

Correct Option:

C. Create and run a new ruleset with stricter matching criteria, compare the two rulesets to review and verify the results, and then migrate to the new ruleset once approved:
This is the recommended best practice because identity resolution should not be adjusted directly in production. Creating a new ruleset allows the team to test stricter criteria—such as requiring additional attributes beyond email or phone—without impacting current unified profiles. Comparing ruleset outputs ensures accuracy before fully deploying the updated logic.
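To make "stricter criteria" concrete, here is a hypothetical, simplified representation of a loose ruleset versus a stricter one. Data Cloud configures match rules in Setup, not through this structure; the shape below is purely illustrative.

```python
# Hypothetical, simplified match-rule definitions (illustrative only;
# Data Cloud match rules are configured in Setup, not as dicts).
loose_ruleset = [
    {"rule": "email-only", "criteria": [{"field": "Email", "method": "Exact"}]},
]

strict_ruleset = [
    {
        "rule": "email-plus-name",
        "criteria": [
            {"field": "Email",     "method": "Exact"},
            {"field": "LastName",  "method": "Exact"},
            {"field": "FirstName", "method": "Fuzzy"},  # tolerate nicknames
        ],
    },
]

# Under the strict ruleset, two records sharing only an email address
# no longer merge, addressing the over-matching described above.
```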

Incorrect Options:

A. Modify the existing ruleset with stricter matching criteria, run the ruleset and review results, then adjust as needed:
This is risky because modifying the active ruleset immediately impacts live unified profiles. If the new conditions are incorrect or too strict, it may cause accidental fragmentation of identities or unintended updates. Testing should occur in a duplicate ruleset, not the active one.

B. Create and run a new ruleset with fewer matching rules, compare results, then migrate once approved:
This option worsens the problem. Fewer matching rules usually increases false-positive matches. The issue already stems from overly broad matching logic, so reducing rules would lead to even more incorrect identity merges.

D. Modify the existing ruleset with stricter matching criteria, compare results, then migrate:
Like option A, this incorrectly changes the active ruleset. You cannot compare results if you overwrite the original logic. Without a secondary ruleset for testing, there is no safe way to evaluate improvement.

Reference:
Salesforce Data Cloud — Identity Resolution Best Practices: Testing New Rulesets & Controlled Migration

Which functionality does Data Cloud offer to improve customer support interactions when a customer is working with an agent?


A. Predictive troubleshooting


B. Enhanced reporting tools


C. Real-time data integration


D. Automated customer service replies





C.
  Real-time data integration

Explanation:
Salesforce Data Cloud enhances customer support by unifying data across touchpoints, enabling agents to access a comprehensive, real-time 360-degree view of the customer. This functionality—Real-time data integration—pulls in live behavioral, transactional, and engagement data into Service Cloud consoles, allowing agents to personalize interactions instantly (e.g., viewing recent purchases or sentiment). Unlike predictive or automated features, it focuses on seamless data flow, reducing resolution times and improving satisfaction without requiring separate reporting or reply tools.

Correct Option:

C. Real-time data integration:
Data Cloud's core strength is ingesting and unifying data from multiple sources (CRM, external apps, streaming events) in near real-time, then surfacing it via APIs or connectors to Service Cloud. This provides agents with up-to-the-minute context during calls or chats, such as live purchase history or open issues, enabling proactive support. Integration is configured via Data Streams and Identity Resolution, ensuring a single customer profile for faster, informed resolutions.

Incorrect Options:

A. Predictive troubleshooting:
While Einstein AI in Data Cloud offers predictive scoring (e.g., churn risk), it doesn't directly provide "troubleshooting" for agents; that's more aligned with Einstein Case Classification in Service Cloud. Data Cloud focuses on data unification, not built-in diagnostic predictions for support workflows.

B. Enhanced reporting tools:
Reporting is handled via Tableau CRM or standard dashboards in Salesforce, with Data Cloud providing data sources for them. However, it doesn't offer "enhanced" tools specifically for support agents; real-time interaction benefits come from live data access, not retrospective reports.

D. Automated customer service replies:
Automation like AI-generated responses is a feature of Einstein Bots or Flow Builder in Service Cloud, not Data Cloud. Data Cloud supplies the underlying customer data to power these automations but doesn't create or manage replies itself.

Reference:
Salesforce Help: “Integrate Data Cloud with Service Cloud for Agent Productivity” – Covers real-time data sharing for support scenarios.

