A company is planning on sending orders from Salesforce to a fulfillment system. The integration architect has been asked to plan for the integration. Which two questions should the integration architect consider?
Choose 2 answers
A.
Can the fulfillment system create new addresses within the Order Create service?
B.
Can the fulfillment system make a callback into Salesforce?
C.
Can the fulfillment system implement a contract-first Outbound Messaging interface?
D.
Is the product catalog data identical at all times in both systems?
Can the fulfillment system make a callback into Salesforce?
Is the product catalog data identical at all times in both systems?
Explanation
When planning a Salesforce-to-fulfillment system integration for sending orders, the Integration Architect must focus on data synchronization, system capabilities, and interaction patterns. The two most critical questions are B and D, as they directly impact integration design, reliability, and data consistency.
B. Can the fulfillment system make a callback into Salesforce?
Why this is correct:
Many fulfillment workflows require bidirectional communication.
Example: After receiving an order, the fulfillment system may need to update order status (e.g., “Shipped”, “Backordered”) or send tracking numbers back to Salesforce.
If callbacks are supported, the architect can design asynchronous updates using:
REST/SOAP APIs from fulfillment → Salesforce
Platform Events or Outbound Messages (if Salesforce initiates)
Apex Callouts + Named Credentials
If not supported, the solution must rely on polling, batch sync, or middleware (e.g., MuleSoft, Boomi), increasing complexity and latency.
This question determines whether real-time status sync is feasible — a common business requirement.
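If the fulfillment system can call back into Salesforce, the callback is often just a REST update of the order record. Below is a minimal Python sketch of that pattern; the instance URL, OAuth access token, Order record Id, and the Fulfillment_Status__c / Tracking_Number__c custom fields are placeholders, not values from the question.

```python
import requests

# Placeholders: the instance URL, token, record Id, and custom fields are
# illustrative only and would come from the real org and data model.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "00D...token_obtained_via_an_oauth_flow"
ORDER_ID = "801xx0000000001AAA"


def update_order_status(status: str, tracking_number: str) -> None:
    """Callback from the fulfillment system: patch the Order record in Salesforce."""
    url = f"{INSTANCE_URL}/services/data/v59.0/sobjects/Order/{ORDER_ID}"
    payload = {
        "Fulfillment_Status__c": status,        # hypothetical custom field
        "Tracking_Number__c": tracking_number,  # hypothetical custom field
    }
    resp = requests.patch(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()  # the REST API returns 204 No Content on success


update_order_status("Shipped", "1Z999AA10123456784")
```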
D. Is the product catalog data identical at all times in both systems?
Why this is correct:
Orders reference Product SKUs, Prices, Descriptions, Taxes, etc.
If catalog data diverges (e.g., price changes in Salesforce but not in fulfillment), it leads to:
Rejected orders
Pricing disputes
Reconciliation issues
The architect must clarify:
Which system is the source of truth for products?
Is real-time sync needed (via API, Platform Events, CDC)?
Or is nightly batch sync acceptable?
This drives decisions on:
Master Data Management (MDM)
Data mapping and transformation
Error handling for mismatches
Mismatched catalog data is one of the top causes of integration failures in order-to-fulfillment scenarios.
Why the other options are incorrect:
A. Can the fulfillment system create new addresses within the Order Create service?
This is a secondary detail, not a planning priority.
Address creation is typically handled in Salesforce (source of truth for customer data).
Even if supported, it’s a feature, not a core architectural decision.
This comes up during API contract design, not initial planning.
C. Can the fulfillment system implement a contract-first Outbound Messaging interface?
Outbound Messaging is a Salesforce-specific push mechanism using SOAP.
It requires the external system to host a public SOAP endpoint — rare in modern APIs.
Most fulfillment systems expect REST, not SOAP.
Contract-first applies to WSDL, but Outbound Messaging is Salesforce-initiated, not a mutual contract.
Better alternatives: Platform Events, Apex REST callouts, or middleware.
This is a tactical implementation question, not a strategic planning one.
References:
Trailhead – Integration Architect:
Plan Your Integration → “Ask: What data needs to flow in which direction?”
https://trailhead.salesforce.com/content/learn/modules/integration-architect-planning
Salesforce Integration Patterns:
Remote Process Invocation – Request and Reply vs. Fire and Forget
https://architect.salesforce.com/design/decision-guides/remote-process-invocation
Data Synchronization Best Practices:
https://help.salesforce.com/s/articleView?id=sf.integration_data_synchronization.htm
A developer has been tasked by the integration architect to build a solution based on the Streaming API. The developer has done some research and found there are different implementations of events in Salesforce (Push Topic Events, Change Data Capture, Generic Streaming, Platform Events), but is unsure how to proceed with the implementation. The developer asks the system architect for some guidance. What should the architect consider when making the recommendation?
A.
Push Topic Event can define a custom payload.
B.
Change Data Capture does not have record access support.
C.
Change Data Capture can be published from Apex.
D.
Apex triggers can subscribe to Generic Events.
Change Data Capture can be published from Apex.
Explanation
The question centers on guiding a developer on the correct use of Streaming API events. The key differentiator among the options is which feature is true and impactful for making an architectural decision.
Let's evaluate each option:
A. Push Topic Event can define a custom payload.
This is incorrect. PushTopics are based on a SOQL query, and the payload is the result of that query. You cannot define a fully custom, free-form payload with a PushTopic. Platform Events are designed for that purpose.
B. Change Data Capture does not have record access support.
This is misleading and generally incorrect. Change Data Capture events honor the sharing and field-level security of the subscribing user. The event payload will only contain fields and records that the user is permitted to see. Therefore, it does have record access support.
C. Change Data Capture can be published from Apex.
This is correct. While Change Data Capture is primarily an automatic service that publishes events on standard object record changes (create, update, delete, undelete), you can also publish Change Data Capture-like events for standard objects programmatically using the EventBus.publish method in Apex. This is a powerful feature that allows for simulating or forcing change events, which is crucial for testing and certain replication scenarios.
D. Apex triggers can subscribe to Generic Events.
This is incorrect. Apex triggers cannot subscribe to Generic Events; Generic Event subscribers are external CometD clients or other Streaming API listeners. Apex triggers can only be written on platform event objects (e.g., My_Event__e) and on Change Data Capture change event objects; there is no trigger mechanism for a generic streaming channel, because a Generic Event has no sObject to attach a trigger to.
Therefore, the architect should recommend based on the accurate and powerful feature that Change Data Capture events can be published programmatically, which is a critical piece of information for implementation and testing.
Key Concept
The key concept is understanding the capabilities, use cases, and limitations of the different Streaming API event types (PushTopics, Generic Events, Platform Events, and Change Data Capture). An Integration Architect must be able to select the right event-based mechanism based on requirements like event source (data change vs. business event), payload flexibility, and how the event is published and consumed.
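To make the "how the event is published" distinction concrete, here is a hedged sketch of publishing a platform event from an external client through the standard sObject REST endpoint (the same event could also be published from Apex with EventBus.publish). The Order_Shipped__e event, its Tracking_Number__c field, the instance URL, and the token are assumptions for illustration only.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...token_obtained_via_an_oauth_flow"   # placeholder

# Publishing a platform event is an insert on the event's sObject endpoint.
# Order_Shipped__e and its custom field are hypothetical and would need to
# be defined in Setup before this call could succeed.
resp = requests.post(
    f"{INSTANCE_URL}/services/data/v59.0/sobjects/Order_Shipped__e/",
    json={"Tracking_Number__c": "1Z999AA10123456784"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # publish result, e.g. {"id": "...", "success": true, "errors": []}
```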
Reference
This distinction is covered in the Salesforce documentation on "Choose an Event Type for Your Use Case." Specifically, the documentation for Change Data Capture states that while it's automatic, you can "publish change events for standard objects" using Apex. This is a defining characteristic that differentiates it from other automated data-centric events and is essential knowledge for an architect designing a solution.
A customer imports data from an external system into Salesforce using Bulk API. These jobs have batch sizes of 2000 and are run in parallel mode. The batch fails frequently with the error "Max CPU time exceeded". A smaller batch size will fix this error. Which two options should be considered when using a smaller batch size? Choose 2 answers
A.
Smaller batch size may cause record-locking errors.
B.
Smaller batch size may increase time required to execute bulk jobs.
C.
Smaller batch size may exceed the concurrent API request limits.
D.
Smaller batch size can trigger "Too many concurrent batches" error.
Smaller batch size may increase time required to execute bulk jobs.
Smaller batch size can trigger "Too many concurrent batches" error.
Explanation:
The job is failing with the error "Max CPU time exceeded", which often occurs when processing too many records per batch triggers complex automation (triggers, flows, validation rules, rollups, etc.).
Reducing the batch size helps distribute processing and avoid exceeding CPU limits—but it introduces other trade-offs.
Below is the impact analysis of using a smaller batch size:
✅ Correct Options
B. Smaller batch size may increase time required to execute bulk jobs.
With more batches required to process the same number of records, the total execution time increases.
More batches = more overhead for setup, API calls, commit operations.
D. Smaller batch size can trigger "Too many concurrent batches" error.
Bulk API allows a maximum of 100 batches queued or processing at once.
Reducing the batch size increases the batch count, which can exceed this limit and cause the error.
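To make the batch-count trade-off concrete, here is a hedged Python sketch that opens a Bulk API 1.0 job and splits the same CSV rows into batches of a configurable size; halving the batch size doubles the number of batch submissions, which is what pushes a job toward the queued/concurrent batch limits. The instance URL, session token, and object name are placeholders.

```python
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
SESSION_ID = "00D...session_or_oauth_token"           # placeholder
API_VERSION = "59.0"
JOB_HEADERS = {"X-SFDC-Session": SESSION_ID, "Content-Type": "application/xml; charset=UTF-8"}


def create_job(object_name: str) -> str:
    """Open a Bulk API 1.0 parallel insert job and return its Id."""
    body = f"""<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>{object_name}</object>
  <concurrencyMode>Parallel</concurrencyMode>
  <contentType>CSV</contentType>
</jobInfo>"""
    resp = requests.post(f"{INSTANCE}/services/async/{API_VERSION}/job",
                         data=body, headers=JOB_HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.text.split("<id>")[1].split("</id>")[0]  # crude XML parse for brevity


def add_batches(job_id: str, header: str, rows: list, batch_size: int) -> int:
    """Add CSV batches of `batch_size` rows; smaller sizes mean more batches."""
    batch_headers = {"X-SFDC-Session": SESSION_ID, "Content-Type": "text/csv; charset=UTF-8"}
    count = 0
    for i in range(0, len(rows), batch_size):
        csv_chunk = "\n".join([header] + rows[i:i + batch_size])
        requests.post(
            f"{INSTANCE}/services/async/{API_VERSION}/job/{job_id}/batch",
            data=csv_chunk.encode("utf-8"), headers=batch_headers, timeout=120,
        ).raise_for_status()
        count += 1
    return count

# Example: the same 100,000 rows become 50 batches at size 2000, but 200 batches at size 500.
```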
❌ Incorrect Options
A. Smaller batch size may cause record-locking errors.
Actually, the opposite is true: smaller batch sizes reduce record-locking issues because fewer records are processed together.
C. Smaller batch size may exceed the concurrent API request limits.
Bulk API batches are counted against their own Bulk API allocations (batches per rolling 24-hour period), not the concurrent API request limit that governs long-running synchronous calls.
A smaller batch size therefore does not meaningfully affect API request limits.
✅ Final Answer:
B and D
Reference:
Salesforce Bulk API Limits & Best Practices
Introduction to Bulk API 2.0 and Bulk API
Record Locking & CPU Limits Considerations
Northern Trail Outfitters (NTO) has recently changed their Corporate Security Guidelines. The guidelines require that all cloud applications pass through a secure firewall before accessing on-premise resources. NTO is evaluating middleware solutions to integrate cloud applications with on-premise resources and services. What are two considerations an Integration Architect should evaluate before choosing a middleware solution?
Choose 2 answers
A.
The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.
B.
An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
C.
The middleware solution enforces the OAuth security protocol.
D.
The middleware solution is able to interface directly with databases via an ODBC connection string.
The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.
An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
Explanation
The core requirement is to pass all cloud application traffic through a secure firewall before accessing on-premise resources. This is a classic perimeter security and network topology challenge that must be addressed by the middleware infrastructure.
B. An API gateway component is deployable behind a Demilitarized Zone (DMZ) or perimeter network.
Perimeter Security:
The DMZ (Demilitarized Zone) is the standard network segment placed between the internal, trusted network and the external, untrusted network (the internet/cloud). To satisfy the requirement of passing traffic through a secure firewall, the API Gateway (a core component of modern integration/middleware) that receives external requests must be strategically placed behind the external firewall in the DMZ. This allows for strict control, logging, and inspection of all inbound traffic before it ever reaches the internal resources.
A. The middleware solution is capable of establishing a secure API gateway between cloud applications and on-premise resources.
Centralized Control and Security:
The API Gateway is the component that enforces security policies, handles throttling, performs message transformation, and ensures a secure connection (like TLS/SSL) between the cloud application (Salesforce) and the on-premise services. The middleware solution must inherently include or support a robust API Gateway to meet the secure access requirement.
❌ Why the Other Options are Incorrect
C. The middleware solution enforces the OAuth security protocol.
Too Specific:
While OAuth is a great, common security protocol, the requirement only states secure firewall access. Many other secure methods like mutual TLS (mTLS), JWT validation, or Basic Auth over HTTPS might be used depending on the endpoint. OAuth is a capability the gateway should have, but the fundamental architectural evaluation must focus on the network placement (DMZ) and component (API Gateway).
D. The middleware solution is able to interface directly with databases via an ODBC connection string.
Architectural Anti-Pattern:
A best practice is to never expose databases directly to integration middleware. Integration should be done via services and APIs (e.g., REST, SOAP) that enforce business logic, security, and transactionality. Directly connecting to an on-premise database via ODBC or JDBC bypasses the security layer and is highly discouraged.
📚 Reference
This relates to the Integration Security and Network Topology topics of the Integration Architect exam:
Key Concept:
Hybrid Integration Architecture. This requires an integration component (often called an Agent, Runtime, or Gateway) to be deployed on-premise, typically within a DMZ, to act as a secure bridge between the cloud and the internal network.
DMZ:
The role of the Demilitarized Zone in protecting the private network while allowing controlled access to services from an untrusted network.
Which WSDL should an architect consider when creating an integration that might be used for more than one Salesforce organization and different metadata?
A.
Corporate WSDL
B.
Partner WSDL
C.
SOAP API WSDL
D.
Enterprise WSDL
Partner WSDL
Explanation
The key requirement in the question is an integration that "might be used for more than one salesforce organization and different metadata." This directly points to the need for a generic, dynamic WSDL that is not tied to the specific configuration (custom objects or fields) of a single Salesforce org.
Let's evaluate each option:
A. Corporate WSDL:
This is incorrect. The Corporate WSDL (also known as the Enterprise WSDL) is strongly-typed and specific to a single Salesforce organization. It includes all of that org's custom objects, fields, and settings in its structure. If the metadata changes, the WSDL must be re-generated and the client code recompiled. This makes it unsuitable for use across multiple, different orgs.
B. Partner WSDL:
This is correct. The Partner WSDL is a single, generic, loosely-typed WSDL that works for any Salesforce organization. It represents sObjects and fields as generic types (e.g., sObject and XmlElement), allowing the client application to discover the metadata of any org at runtime. This makes it the ideal choice for ISVs building packaged applications or for companies building a single integration to connect to multiple Salesforce orgs with different configurations.
C. SOAP API WSDL:
This is a distractor. "SOAP API WSDL" is a generic term that describes the API itself. In practice, when you generate a WSDL for the SOAP API in Setup, you are explicitly choosing between the Partner WSDL and the Enterprise WSDL. This option is not specific enough.
D. Enterprise WSDL:
This is incorrect and is simply another name for the Corporate WSDL (Option A). It has the same limitation of being tightly coupled to a single org's metadata.
Key Concept
The key concept is understanding the critical architectural choice between a strongly-typed WSDL (Enterprise) and a loosely-typed WSDL (Partner).
Enterprise WSDL:
Used for stable, point-to-point integrations with a single, known Salesforce org. It provides the benefit of compile-time type checking.
Partner WSDL:
Used for dynamic, multi-tenant integrations that must work across multiple Salesforce orgs with varying metadata. It requires more complex client-side code to handle the generic sObjects but offers ultimate flexibility.
Reference
This is a foundational topic for Salesforce integrations. The official Salesforce documentation, specifically the "Generate the Enterprise WSDL" and "Generate the Partner WSDL" pages, clearly distinguishes these two types. The Partner WSDL is explicitly described as the correct choice for "an independent software vendor (ISV) who is creating a client application for multiple organizations" because it is not affected by organization-specific metadata.
A company's security assessment noted vulnerabilities in the unmanaged packages in their Salesforce orgs, notably secrets that are easily accessible and in plain text, such as usernames, passwords, and OAuth tokens used in callouts from Salesforce. Which two persistence mechanisms should an integration architect require to be used to ensure that secrets are protected from deliberate or inadvertent exposure?
Choose 2 answers
A.
Encrypted Custom Fields
B.
Named Credentials
C.
Protected Custom Metadata Types
D.
Protected Custom Settings
Named Credentials
Protected Custom Metadata Types
Explanation:
During a security review, the company discovered hard-coded secrets (usernames, passwords, OAuth tokens) in unmanaged package components.
To prevent exposure of credentials in code, configuration, or metadata, Salesforce recommends secure storage mechanisms that encrypt or restrict visibility of secrets.
The solution should ensure:
No plain text credentials in org metadata
Restricted visibility to administrators only
Secure handling of authentication for outbound callouts
✅ Correct Options
B. Named Credentials
Best practice for protecting secrets used in callouts
Securely stores OAuth tokens, passwords, and authentication endpoints
Credentials are never exposed in plain text to developers or subscribers
Supports OAuth 2.0, AWS IAM, and External Credential Framework
Simplifies callouts: no need to handle tokens manually in Apex
C. Protected Custom Metadata Types
Data marked as protected is hidden from subscribers of unmanaged or managed packages
Only visible in the packaging org
Secure choice when deploying credentials via a managed package configuration
Can store configuration securely without exposing sensitive fields
❌ Incorrect Options
A. Encrypted Custom Fields
Only protects data at rest
Admins can still view decrypted values
Not intended for integration secrets or programmatic authentication
D. Protected Custom Settings
Although protected custom settings can hide values from subscriber orgs, Salesforce's guidance for packaged, secure configuration favors protected custom metadata types.
Unlike custom metadata records, custom setting values are data rather than metadata, so they cannot be shipped and versioned with the package as easily, making them the weaker of the two protected options offered here.
✅ Final Answer:
B. Named Credentials
C. Protected Custom Metadata Types
Reference:
Salesforce Security Guide – Handling Secrets in Integrations
https://developer.salesforce.com/docs
Named Credentials Overview
Named Credentials as Callout Endpoints
Protected Custom Metadata for Managed Packaging
ISVforce Guide: Build and Distribute AppExchange Solutions
An organization needs to integrate Salesforce with an external system and is considering authentication options. The organization has already implemented SAML, using a third-party Identity Provider, for integrations between other systems. Which use case can leverage the existing SAML integration to connect Salesforce with other internal systems?
A.
Make formula fields with HYPERLINK() to external web servers more secure.
B.
Make Apex SOAP outbound integrations to external web services more secure.
C.
Make Apex REST outbound integrations to external web services more secure.
D.
Make an API inbound integration from an external Java client more secure.
Make an API inbound integration from an external Java client more secure.
Explanation
The question asks which use case can leverage an existing SAML integration (using a third-party Identity Provider) to connect Salesforce with internal systems.
The key context here is how the existing SAML setup can be reused for API authentication for a server-to-server or client-to-server integration:
SAML Assertion Flow for Inbound API Access:
Salesforce supports the SAML Assertion Flow (a variation of an OAuth flow) for inbound API integrations. In this scenario, the external system (the "external Java client") gets a SAML Assertion from the organization's central Identity Provider (the existing one). The Java client then sends this SAML Assertion to Salesforce's token endpoint to exchange it for an OAuth Access Token, which it then uses to call Salesforce APIs.
Leveraging Existing IDP:
This flow allows the external client to reuse the organization's existing SAML Identity Provider as the method of authentication to Salesforce, satisfying the requirement to leverage the existing infrastructure.
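A hedged sketch of how the external Java client (shown here in Python for brevity) could exchange a SAML assertion from the existing IdP for a Salesforce access token, assuming the OAuth 2.0 SAML bearer grant type; the My Domain token URL, the connected app configuration implied by it, and the assertion content are all placeholders.

```python
import base64
import requests

TOKEN_URL = "https://yourDomain.my.salesforce.com/services/oauth2/token"  # placeholder My Domain

# The assertion would be issued and signed by the organization's existing
# Identity Provider; this value is a placeholder.
saml_assertion_xml = b"<saml2:Assertion ...>...</saml2:Assertion>"

resp = requests.post(
    TOKEN_URL,
    data={
        # OAuth 2.0 SAML bearer grant (RFC 7522); assumes a connected app
        # configured to trust the IdP's signing certificate.
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        # base64url-encoded assertion, per the bearer assertion profile
        "assertion": base64.urlsafe_b64encode(saml_assertion_xml).decode(),
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
# The client can now call Salesforce REST or SOAP APIs with this bearer token.
```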
❌ Why the Other Options are Incorrect
A. Make formula fields with HYPERLINK() to external web servers more secure.
This typically involves Outbound SSO or linking to an external resource. While SAML can be used for Outbound SSO (where Salesforce acts as the Identity Provider), the question specifies the organization already has a third-party Identity Provider. For a simple HYPERLINK(), the primary need is to pass the authenticated user context to the external system, which is not what the SAML Assertion flow for API authentication is designed for.
B. Make Apex SOAP outbound integrations to external web services more secure.
C. Make Apex REST outbound integrations to external web services more secure.
These are Outbound integrations, where Salesforce calls an external system. For Apex callouts, the standard secure practice is using Named Credentials. Named Credentials simplify the callout process and often rely on protocols like OAuth 2.0 (e.g., JWT Bearer) or mTLS for authentication to the external endpoint. It is rare and unnecessarily complex to use a SAML Assertion from an external IDP for a Salesforce outbound callout, as Salesforce typically acts as the client, not the Service Provider validating a SAML assertion.
📚 Reference
Salesforce Integration Pattern: SAML Assertion Flow for API Access.
Concept: This flow allows a client application (the external Java client) that has already been authenticated by an external Identity Provider (the existing SAML IdP) to use the SAML assertion to securely gain access to the Salesforce API.
Source: Salesforce Help documentation on OAuth 2.0 flows, specifically the SAML Assertion Flow.
Universal Containers (UC) owns a variety of cloud-based applications, including Salesforce, alongside several on premise applications. The on-premise applications are protected behind a corporate network with limited outside access to external systems. UC would like to expose data from the on-premise applications to Salesforce for a more unified user experience. The data should be accessible from Salesforce in real-time. Which two actions should be recommended to fulfill this system requirement?
Choose 2 answers
A.
Develop an application in Heroku that connects to the on-premise database via an ODBC string and VPC connection.
B.
Develop custom APIs on the company's network that are invokable by Salesforce.
C.
Deploy MuleSoft to the on-premise network and design externally facing APIs to expose the data.
D.
Run a batch job with an ETL tool from an on-premise server to move data to Salesforce.
Develop custom APIs on the company's network that are invokable by Salesforce.
Deploy MuleSoft to the on-premise network and design externally facing APIs to expose the data.
Explanation
The core requirement is to expose data from on-premise applications to Salesforce in real-time. The on-premise systems are protected, so the solution must facilitate secure, external access without compromising the corporate network.
Let's evaluate why the correct answers work and why the others do not:
B. Develop custom APIs on the company's network that are invokable by Salesforce.
This is a foundational and correct approach. It involves creating custom REST or SOAP API endpoints within the corporate firewall that expose the specific data needed from the on-premise applications. Salesforce can then call these APIs directly (using Apex or a low-code tool) to retrieve data in real-time. This provides maximum control and is a direct implementation of the required integration pattern.
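As a minimal sketch of what such a custom API on the corporate network might look like, the Python/Flask service below exposes a single read endpoint; the path, the in-memory data source, and the bearer-token check are illustrative assumptions only, and TLS termination and firewall rules would normally sit in front of this service.

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Placeholder data: a real service would call the on-premise application's
# own service layer, not hold data in memory or hit its database directly.
INVENTORY = {"SKU-001": {"onHand": 42, "warehouse": "East"}}


@app.route("/api/v1/inventory/<sku>", methods=["GET"])
def get_inventory(sku: str):
    # Illustrative auth check; a real deployment would validate an OAuth token
    # or mutual TLS at the gateway/reverse proxy in front of this service.
    if request.headers.get("Authorization") != "Bearer expected-token":
        abort(401)
    item = INVENTORY.get(sku)
    if item is None:
        abort(404)
    return jsonify(item)


if __name__ == "__main__":
    app.run(port=8443)  # exposed to Salesforce only through the firewall/API gateway
```

Salesforce would then call this endpoint in real time, typically with an Apex callout secured by a Named Credential.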
C. Deploy MuleSoft to the on-premise network and design externally facing APIs to expose the data.
This is also correct and is a more robust, enterprise-grade version of option B. MuleSoft, an integration platform owned by Salesforce, acts as an API-led connectivity layer. Deploying it on-premise allows it to connect securely to the backend systems. You then design and publish managed, well-documented APIs that Salesforce can call. This approach provides benefits like built-in security, transformation, orchestration, and reusability that pure custom code (option B) might lack.
Why the other options are incorrect:
A. Develop an application in Heroku that connects to the on-premise database via an ODBC string and VPC connection.
This is incorrect and architecturally risky. Directly connecting an external application to the production database via ODBC bypasses the application logic and data layer, which is a major security anti-pattern. It can lead to data corruption, performance issues, and schema lock-in. The correct approach is to integrate through APIs provided by the application, not directly with its database.
D. Run a batch job with an ETL tool from an on-premise server to move data to Salesforce.
This is incorrect because it violates the core requirement of real-time access. ETL (Extract, Transform, Load) jobs are batch-oriented; they run on a schedule (e.g., every hour or night). This would result in stale data within Salesforce, which does not fulfill the requirement for a "unified user experience" with current information.
Key Concept
The key concept tested here is the selection of the Real-Time Integration Pattern, specifically the API-led Connectivity approach for accessing data from secured, on-premise systems.
An Integration Architect must recognize that real-time access requires a synchronous, request-response mechanism. The solution must create a secure gateway (the API) that exposes the on-premise data without directly exposing the database itself, adhering to principles of loose coupling and security.
Reference
This aligns with the standard integration patterns documented by Salesforce and MuleSoft. The official Salesforce Integration Architecture documentation advocates for the "Virtual Data Integration" or "Direct Data Access" pattern for real-time scenarios to avoid data duplication and latency. Using MuleSoft as an integration layer is a canonical example of the API-led connectivity model, which is the recommended approach for creating a structured, reusable, and secure integration framework.
Only authorized users are allowed access to the EBS and the Enterprise DMS. Customers call Customer Support when they need clarification on their bills. Customer Support needs seamless access to customer billing information from the EBS and to view generated bills from the DMS. Which three authorization and authentication needs should an integration consultant consider while integrating the DMS and EBS with Salesforce?
Choose 3 answers
A.
Users should be authorized to view information specific to the customer they are servicing without a need to search for customer.
B.
Identify options to maintain DMS and EBS authentication and authorization details in Salesforce.
C.
Consider Enterprise security needs for access to DMS and EBS.
D.
Consider options to migrate DMS and EBS into Salesforce.
E.
Users should be authenticated into DMS and EBS without having to enter username and password.
Users should be authorized to view information specific to the customer they are servicing without a need to search for customer.
Consider Enterprise security needs for access to DMS and EBS.
Users should be authenticated into DMS and EBS without having to enter username and password.
Explanation
This scenario focuses on providing seamless and secure contextual access to external systems (EBS for billing data and DMS for documents) for Customer Support agents who are already logged into Salesforce.
E. Users should be authenticated into DMS and EBS without having to enter username and password. (Single Sign-On - SSO)
Need:
The goal is a seamless experience. Agents shouldn't have to log in multiple times.
Solution:
The integration must employ a Single Sign-On (SSO) mechanism (like OAuth 2.0 with JWT Bearer or SAML) to delegate the Salesforce session to the external systems (DMS/EBS) or the intermediary integration layer. This eliminates the need for separate credentials for the external systems, improving agent efficiency and user experience.
A. Users should be authorized to view information specific to the customer they are servicing without a need to search for customer. (Contextual Authorization)
Need:
The data displayed must be specific to the customer the agent is currently viewing in Salesforce.
Solution:
The integration call must transmit the contextual data (e.g., the Customer ID from the Salesforce record) to the EBS/DMS. The external system or an integration layer must then authorize the request, ensuring the user (via the SSO token) has permission to see only the requested customer's data and not other customer data. This ensures row-level security for the external data.
C. Consider Enterprise security needs for access to DMS and EBS. (Security Compliance)
Need:
The prompt states "Only authorized users are allowed access to the EBS and the Enterprise DMS," and these are core enterprise systems.
Solution:
The integration design must adhere to the organization's governance, audit, and security policies (e.g., encryption of data in transit/at rest, logging, monitoring, and compliance with internal security standards). Ignoring overall enterprise security standards for convenience is an architectural anti-pattern.
❌ Why the Other Options are Incorrect
B. Identify options to maintain DMS and EBS authentication and authorization details in Salesforce.
Anti-Pattern:
Storing external system usernames and passwords directly in Salesforce is a security vulnerability and violates the principle of Single Sign-On. While Salesforce's Named Credentials securely store authentication details (like OAuth tokens or a service principal's password), the option phrasing suggests storing per-user credentials, which is what SSO is designed to avoid. The focus should be on secure delegation, not maintenance of redundant credentials.
D. Consider options to migrate DMS and EBS into Salesforce.
Scope Mismatch:
The task is to design an integration solution. Migrating entire enterprise systems (Billing and Document Management) is a massive, long-term project and is out of scope for an integration consultant focused on accessing the current systems. Integration patterns (like Data Virtualization via Salesforce Connect or Remote Process Invocation via API callouts) are used precisely to avoid unnecessary data migration.
Key Concepts:
Single Sign-On (SSO): User experience and security via delegated authentication (Option E).
Contextual Integration: Passing relevant record data (like Customer ID) to authorize and retrieve specific information (Option A).
Enterprise Security and Governance: Adherence to corporate security policies for sensitive systems (Option C).
Salesforce Tools for this Pattern: Named Credentials (for securely managing the connection endpoint and authentication) combined with Per-User Authentication (to ensure the call uses the individual agent's authorization) is the recommended implementation pattern for achieving Options A and E.
A healthcare services company maintains a Patient Prescriptions System that has 50+ million records in a secure database. Their customer base and data set are growing rapidly.
They want to make sure that the following policies are enforced:
1. Identifiable patient prescriptions must exist only in their secure system's database and be encrypted at rest.
2. Identifiable patient prescriptions must be made available only to people explicitly authorized in the Patient Prescriptions System: assigned nurses and doctors, the patient, and people the patient explicitly authorizes.
3. Must be available only to verified and pre-approved people or legal entities.
To enable this, the company provides the following capabilities:
1. One-time use identity tokens for patients, nurses, doctors, and other people that expire within a few minutes.
2. Certificates for legal entities.
3. RESTful services.
The company has a Salesforce Community Cloud portal for patients, nurses, doctors, and other authorized people. A limited number of employees analyze de-identified data in Einstein Analytics.
Which two capabilities should the integration architect require for the Community Cloud portal and Einstein Analytics?
Choose 2 answers
A.
Identity token data storage
B.
Bulk load for Einstein Analytics
C.
Callouts to RESTful services
D.
Encryption in transit and at rest
Callouts to RESTful services
Encryption in transit and at rest
Explanation
The scenario demands strict PHI protection, fine-grained access, and compliance (e.g., HIPAA). The integration must never store identifiable data in Salesforce and must secure all interactions.
C. Callouts to RESTful services
Correct
Community users (patients, nurses, doctors) must access prescriptions on-demand via real-time REST callouts from Salesforce to the secure system.
Use one-time identity tokens (passed in headers) for ephemeral, authorized access.
No data is stored in Salesforce — only temporary display in UI (e.g., Lightning component).
Enables policy #2 and #3: Only verified, token-holding users get data.
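In practice this would be an Apex HTTP callout from the portal; the hedged Python sketch below shows the same request shape for brevity, with the secure system's URL, the patient identifier, and the one-time identity token all placeholders.

```python
import requests

PRESCRIPTIONS_URL = "https://secure.example-health.internal/api/prescriptions"  # placeholder
one_time_token = "eyJ...single-use-token-minted-by-the-secure-system"            # placeholder

# HTTPS/TLS covers encryption in transit; the one-time token authorizes exactly
# one retrieval for the patient in context and is discarded after use.
resp = requests.get(
    f"{PRESCRIPTIONS_URL}/PATIENT-12345",   # hypothetical patient identifier
    headers={"Authorization": f"Bearer {one_time_token}"},
    timeout=30,
)
resp.raise_for_status()
prescriptions = resp.json()  # rendered in the portal UI, never persisted in Salesforce
```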
D. Encryption in transit and at rest
Correct
In transit: All REST callouts must use HTTPS/TLS 1.2+.
At rest: Even though identifiable data is not stored, any cached responses or session data in Salesforce must be encrypted (Salesforce provides this by default).
Einstein Analytics: De-identified data is loaded — but encryption in transit (via secure API) is still required.
Meets policy #1 (no identifiable data at rest in Salesforce) and compliance standards.
Why A and B are incorrect
A. Identity token data storage
Wrong — tokens are one-time and short-lived (minutes).
Storing them violates security best practices and policy #1.
Tokens must be used immediately in callouts and discarded.
B. Bulk load for Einstein Analytics
Wrong — bulk load is for de-identified, analytical data only (which is allowed).
But the question is about Community portal + Einstein access to prescriptions — not bulk analytics.
Bulk load does not address authorization or encryption for live prescription access.
Official References
Salesforce Help: Callouts from Lightning Components
HIPAA Guidance: Salesforce supports encryption in transit/at rest — but PHI must not be stored.
https://compliance.salesforce.com/en/hipaa
Architect Guide: Secure External Service Integration
Exam Tip:
For PHI + external secure system:
Never store PII/PHI → Use real-time callouts (C)
Always encrypt → TLS + no at-rest PHI (D)
Northern Trail Outfitters' (NTO) Salesforce org usually goes through 8k-10k batches a day to sync data from external sources. NTO's Integration Architect has received requirements for a new custom object, FooBar__c, for which 90M records will need to be loaded into the org. Once complete, 20GB (about 30M records) needs to be extracted to an external auditing system. What should the architect recommend using to meet these requirements in a day?
A.
Insert using Bulk API 2.0 and query using REST API.
B.
Insert and query using Bulk API 1.0.
C.
Insert using Bulk API 1.0 and query using REST API.
D.
Insert and query using Bulk API 2.0.
Insert and query using Bulk API 2.0.
Explanation
The scenario presents an extreme data volume challenge: loading 90 million records and then extracting 30 million records, all within a single day. The key to solving this is selecting an API designed for maximum throughput and asynchronous processing of massive datasets.
Let's evaluate why Bulk API 2.0 is the correct choice for both operations and why the other options are unsuitable:
Inserting 90 Million Records:
The Bulk API 2.0 is specifically engineered for this exact purpose. It processes large data sets asynchronously in the background, dividing the work into manageable batches that are processed in parallel. This prevents timeouts and efficiently handles the immense volume, which would be impossible for the synchronous REST API.
Querying 30 Million Records:
For data extraction of this magnitude, the Bulk API is again the only viable tool. The standard REST API returns at most 2,000 records per query response page. While you can paginate through results, it is not designed or efficient for extracting millions of records. The Bulk Query capability of Bulk API 2.0 allows you to execute a large query and receive the results (all 30 million records) as a set of downloadable files, which is precisely what the external auditing system requires.
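A hedged Python sketch of both operations against the Bulk API 2.0 REST endpoints follows; the instance URL, token, CSV file name, and the FooBar__c object and fields are placeholders for the scenario.

```python
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"      # placeholder
HEADERS = {"Authorization": "Bearer 00D...oauth_token"}   # placeholder token
BASE = f"{INSTANCE}/services/data/v59.0/jobs"

# 1. Create an ingest job for the custom object (API name assumed to be FooBar__c).
job = requests.post(
    f"{BASE}/ingest",
    json={"object": "FooBar__c", "operation": "insert", "contentType": "CSV"},
    headers=HEADERS, timeout=30,
).json()

# 2. Upload the CSV data (Bulk API 2.0 splits it into batches automatically),
#    then mark the upload complete so processing starts.
with open("foobar_records.csv", "rb") as csv_file:   # placeholder file
    requests.put(
        f"{BASE}/ingest/{job['id']}/batches",
        data=csv_file,
        headers={**HEADERS, "Content-Type": "text/csv"}, timeout=600,
    ).raise_for_status()
requests.patch(
    f"{BASE}/ingest/{job['id']}",
    json={"state": "UploadComplete"}, headers=HEADERS, timeout=30,
).raise_for_status()

# 3. Extract for the auditing system with a Bulk API 2.0 query job.
query_job = requests.post(
    f"{BASE}/query",
    json={"operation": "query", "query": "SELECT Id, Name FROM FooBar__c"},
    headers=HEADERS, timeout=30,
).json()
# Poll GET {BASE}/query/{id} until state == 'JobComplete', then download the
# CSV result pages from GET {BASE}/query/{id}/results (paged via the
# Sfdc-Locator response header and the locator query parameter).
```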
Why the other options are incorrect:
A. Insert using Bulk API 2.0 and query using REST API:
This is incorrect because using the REST API to query 30 million records is not feasible due to its 2,000-record retrieval limit per call, making the process incredibly slow and complex.
B. Insert and query using Bulk API 1.0:
While Bulk API 1.0 is designed for high volume, it is a legacy API. Salesforce recommends using Bulk API 2.0 for all new development because it is more efficient and simpler to use, as it automatically manages batch sizing and job handling.
C. Insert using Bulk API 1.0 and query using REST API:
This is the least effective option. It combines the legacy version of the Bulk API for insertion with the entirely wrong tool (REST API) for the massive query operation.
Key Concept
The key concept tested here is the strategic selection of APIs based on data volume and processing.
An Integration Architect must understand the fundamental distinction between:
Synchronous APIs (like REST API): Best for real-time, interactive operations involving a few hundred or thousand records.
Asynchronous Bulk APIs (like Bulk API 2.0): Essential for non-real-time, high-throughput data loading and extraction jobs involving millions of records.
Choosing the wrong API pattern for the volume will lead to guaranteed failure, hitting governor limits and being unable to complete the job within the required timeframe.
Reference
This recommendation is based on the official Salesforce API documentation. The Bulk API Developer Guide explicitly states that Bulk API 2.0 "is optimized for loading or deleting large sets of data" and can be used to "query large sets of data." It is the designated tool for scenarios where the data volume is measured in millions of records, ensuring the job can be completed within a day.
A customer is evaluating a Platform Events solution and would like help comparing and contrasting it with Outbound Messages for real-time / near-real-time needs. They expect 3,000 consumers of messages from Salesforce. Which three considerations should be evaluated and highlighted when deciding between the solutions?
Choose 3 answers
A.
Both Platform Events and Outbound Message offer declarative means for asynchronous near-real time needs. They aren't best suited for realtime integrations.
B.
In both Platform Events and Outbound Messages, the event messages are retried by and delivered in sequence, and only once. Salesforce ensures there is no duplicate message delivery.
C.
Message sequence is possible in Outbound Message but not guaranteed with Platform Events. Both offer very high reliability. Fault handling and recovery are fully handled by Salesforce.
D.
Number of concurrent subscribers to Platform Events is capped at 2,000. An Outbound Message configuration can pass only 100 notifications in a single message to a SOAP endpoint.
E.
Both Platform Events and Outbound Message are highly scalable. However, unlike Outbound Message, only Platform Events have Event Delivery and Event Publishing limits to be considered.
Both Platform Events and Outbound Message offer declarative means for asynchronous near-real time needs. They aren't best suited for realtime integrations.
Number of concurrent subscribers to Platform Events is capped at 2,000. An Outbound Message configuration can pass only 100 notifications in a single message to a SOAP endpoint.
Both Platform Events and Outbound Message are highly scalable. However, unlike Outbound Message, only Platform Events have Event Delivery and Event Publishing limits to be considered.
Explanation:
The customer wants to compare Platform Events vs Outbound Messages for real-time / near-real-time integration, and expects 3,000 subscribers.
Key decision factors include scalability, reliability guarantees, delivery behavior, and platform limits.
Let’s evaluate each option:
✅ Correct Options
A. Both Platform Events and Outbound Message offer declarative means for asynchronous near-real time needs.
They aren't best suited for realtime integrations.
Correct: both are asynchronous, near-real-time solutions
Not recommended when strict sub-second response requirements exist
Both work well for event-driven notifications
D. Number of concurrent subscribers to Platform Events is capped at 2,000.
An Outbound Message configuration can pass only 100 notifications in a single message to a SOAP endpoint.
Platform Events have enforced delivery & subscriber concurrency limits
Outbound Message batching = max 100 notifications per SOAP call
With 3,000 consumers, PE scalability limits become important
E. Both Platform Events and Outbound Message are highly scalable.
However, unlike Outbound Message, only Platform Events have Event Delivery and Event Publishing limits to be considered.
Platform Events require evaluating publish, delivery, and retention limits
OM does not have publishing limits but is limited to SOAP and endpoint reliability
PE provides more flexibility and modern architecture, but with quotas
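For the subscriber-count consideration in particular, each Platform Event consumer is a CometD client. The hedged sketch below shows the bare Bayeux long-polling handshake/subscribe/connect cycle one such consumer would run; the instance URL, token, and the Order_Shipped__e channel are placeholders, and a production client would also handle replay IDs, re-handshakes, and errors.

```python
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"       # placeholder
HEADERS = {"Authorization": "Bearer 00D...oauth_token",    # placeholder token
           "Content-Type": "application/json"}
COMETD = f"{INSTANCE}/cometd/59.0"
session = requests.Session()

# 1. Bayeux handshake to obtain a clientId.
handshake = session.post(COMETD, headers=HEADERS, json=[{
    "channel": "/meta/handshake",
    "version": "1.0",
    "supportedConnectionTypes": ["long-polling"],
}], timeout=30).json()[0]
client_id = handshake["clientId"]

# 2. Subscribe to a platform event channel (event API name is hypothetical);
#    each such subscriber counts toward the org's concurrent-client limits.
session.post(COMETD, headers=HEADERS, json=[{
    "channel": "/meta/subscribe",
    "clientId": client_id,
    "subscription": "/event/Order_Shipped__e",
}], timeout=30).raise_for_status()

# 3. Long-poll for deliveries; every delivered message also counts toward
#    the org's event delivery allocation. Loop runs until interrupted.
while True:
    messages = session.post(COMETD, headers=HEADERS, json=[{
        "channel": "/meta/connect",
        "clientId": client_id,
        "connectionType": "long-polling",
    }], timeout=120).json()
    for msg in messages:
        if msg.get("channel") == "/event/Order_Shipped__e":
            print(msg["data"]["payload"])
```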
❌ Incorrect Options
B. Incorrect — Salesforce does not guarantee once-only delivery or sequencing for either solution
Outbound Messages can be retried and may produce duplicates
Platform Events use at-least-once delivery, so duplicates are also possible
C. Incorrect — message ordering is not guaranteed for Outbound Messages or Platform Events
“Very high reliability” and fully Salesforce-handled fault handling and recovery are not accurate claims for either mechanism
✅ Final Answer:
A, D, and E
Reference:
Salesforce Platform Events Limits & Considerations
Considerations for Defining and Publishing Platform Events
Outbound Messaging Limits
About SOAP API