Integration-Architect Practice Test Questions

106 Questions


Northern Trail Outfitters has recently experienced intermittent network outages in its call center. When network service resumes, Sales representatives have inadvertently created duplicate orders in the manufacturing system because the order was placed but the return acknowledgement was lost during the outage. Which solution should an architect recommend to avoid duplicate order booking?


A.

Use Outbound Messaging to ensure manufacturing acknowledges receipt of order.


B.

Use scheduled apex to query manufacturing system for potential duplicate or missing orders.


C.

Implement idempotent design and have Sales Representatives retry order(s) in
question.


D.

Have scheduled Apex resubmit orders that do not have a successful response.





C.
  

Implement idempotent design and have Sales Representatives retry order(s) in
question.



Explanation:

When a network drop causes a lost acknowledgement, retries can inadvertently create a second order unless your integration is idempotent, meaning that reprocessing the same request has no additional effect. By assigning each order a unique message ID (or idempotency key) that your manufacturing system tracks, repeated submissions with the same key are recognized and ignored. Salesforce and many REST best-practices guides recommend this idempotent receiver pattern so that an order is processed effectively once even if clients retry. This allows reps to simply retry without fear of duplicates, and it builds robust fault tolerance without custom polling or batch-reconciliation jobs.
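The idempotent receiver pattern described above can be sketched in a few lines. This is a language-agnostic illustration, not Salesforce or manufacturing-system code; the key names and the in-memory store are hypothetical stand-ins for a durable deduplication table:

```python
# Illustrative sketch: an idempotent order receiver that deduplicates
# submissions by a client-supplied idempotency key.
processed = {}  # idempotency_key -> order id (in practice, a durable store)

def book_order(idempotency_key, payload):
    """Create the order once; repeat calls with the same key return the
    original result instead of booking a duplicate."""
    if idempotency_key in processed:
        return processed[idempotency_key]       # duplicate retry: no-op
    order_id = f"ORD-{len(processed) + 1}"      # stand-in for real booking
    processed[idempotency_key] = order_id
    return order_id

first = book_order("key-123", {"item": "tent"})
retry = book_order("key-123", {"item": "tent"})  # retry after a lost ack
assert first == retry                            # same order, no duplicate
```

Because the retry returns the original result, the Sales Representative can resubmit freely after any outage.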

An integration architect needs to build a solution that will use the Streaming API, but data loss must be minimized, even when the client reconnects only every couple of days. Which two types of Streaming API events should be considered? Choose 2 answers


A.

Generic Events


B.

Change Data Capture Events


C.

PushTopic Events


D.

High Volume Platform Events





B.
  

Change Data Capture Events



D.
  

High Volume Platform Events



Explanation:

To minimize data loss across days-long disconnects, you need durable, high-retention channels. Change Data Capture (CDC) and High-Volume Platform Events are both implemented on the High-Volume Streaming API, offering a 72-hour retention window and replay-ID-based durable subscriptions. In contrast, Generic or PushTopic events (standard-volume) expire after 24 hours and have lower throughput. Choosing CDC and high-volume Platform Events ensures that even if a client reconnects infrequently, it can reliably replay missed changes without data loss.
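The replay mechanism behind that durability can be sketched conceptually. The names below are illustrative, not the actual Streaming API client library; the point is that each event carries a replay ID and a reconnecting client resumes from the last ID it stored:

```python
# Conceptual sketch of a durable, replay-based subscription.
event_log = []  # (replay_id, payload), retained for the replay window

def publish(payload):
    # Each published event gets a monotonically increasing replay ID.
    event_log.append((len(event_log) + 1, payload))

def resume(last_replay_id):
    """On reconnect, replay every event after the stored replay ID."""
    return [(rid, p) for rid, p in event_log if rid > last_replay_id]

publish("order created")
publish("order shipped")
missed = resume(1)  # client last saw replay ID 1 before disconnecting
```

As long as the client reconnects within the retention window (72 hours for high-volume channels), `resume` recovers everything it missed.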

An Integration Developer is developing an HR synchronization app for a client. The app synchronizes Salesforce record data changes with an HR system that's external to Salesforce. What should the integration architect recommend to ensure notifications are stored for up to three days if data replication fails?


A.

Change Data Capture


B.

Generic Events


C.

Platform Events


D.

Callouts





C.
  

Platform Events



Explanation:

Of all event types, High-Volume Platform Events provide the longest built-in retention—up to 72 hours—so subscribers can reconnect within three days and still retrieve missed messages. Change Data Capture events are implemented as high-volume PEs behind the scenes, but the exam choice is explicitly “Platform Events.” Generic and standard-volume events only persist for 24 hours, and Apex callouts can’t buffer for days. By modeling your notifications as high-volume Platform Events, you get guaranteed at-least-once delivery with multi-day replay support.

Northern Trail Outfitters needs to send order and line items directly to an existing finance application web service when an order is fulfilled. It is critical that each order reach the finance application exactly once for accurate invoicing. What solution should an architect propose?


A.

Trigger invokes Queueable Apex method, with custom error handling process.


B.

Trigger makes @future Apex method, with custom error handling process.


C.

Button press invokes synchronous callout, with user handling retries in case of error


D.

Outbound Messaging, which will automatically handle error retries to the service.





D.
  

Outbound Messaging, which will automatically handle error retries to the service.



Explanation:

Salesforce Outbound Messaging delivers a SOAP message to your finance endpoint and automatically retries on failure for up to 24 hours, giving you built-in delivery guarantees without custom Apex. Each message contains its own unique notification ID, and the service must acknowledge receipt; otherwise, Salesforce queues and retries until it succeeds or drops after the retry window. This “fire-and-forget” pattern offloads retry logic to the platform and provides at-least-once delivery; as long as your endpoint handles duplicate notification IDs idempotently, the net effect is exactly once.
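The listener's side of this contract is a SOAP acknowledgement. A minimal sketch of building that response is shown below; the envelope shape follows the Outbound Messaging acknowledgement convention (an `Ack` element in a `notificationsResponse`), but treat the exact namespaces as something to verify against your org's generated WSDL:

```python
# Hedged sketch: the receiving endpoint answers Salesforce Outbound
# Messaging with a SOAP acknowledgement. Returning Ack=true (with a 2xx
# status) stops retries; anything else causes Salesforce to retry.
ACK_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>{ack}</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

def handle_notification(process_ok: bool) -> str:
    # Acknowledge only after the order is durably recorded, so a crash
    # before persistence still triggers a platform retry.
    return ACK_TEMPLATE.format(ack="true" if process_ok else "false")
```

The key design point is acknowledging only after durable processing, which is what converts the platform's at-least-once retries into safe delivery.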

A US business-to-consumer (B2C) company is planning to expand to Latin America. They project an initial Latin American customer base of about one million, and a growth rate of around 10% every year for the next 5 years. They anticipate privacy and data protection requirements similar to those in the European Union to come into effect during this time. Their initial analysis indicates that key personal data is stored in the following systems:

1. Legacy mainframe systems that have remained untouched for years and are due to be decommissioned.
2. Salesforce Commerce Cloud, Service Cloud, Marketing Cloud, and Community Cloud.

The company's CIO tasked the integration architect with ensuring that they can completely delete their Latin American customers' personal data on demand.

Which three requirements should the integration architect consider?

Choose 3 answers


A.

Manual steps and procedures that may be necessary.


B.

Impact of deleted records on system functionality.


C.

Ability to delete personal data in every system.


D.

Feasibility to restore deleted records when needed.


E.

Ability to provide a 360-degree view of the customer.





B.
  

Impact of deleted records on system functionality.



C.
  

Ability to delete personal data in every system.



D.
  

Feasibility to restore deleted records when needed.



Explanation:

Under GDPR-style data protection laws, “erasure” isn’t just a one-click delete—it requires careful coordination across every data store and backup to ensure compliance and operational continuity. First, you must be able to delete personal data in every system (CRM, Commerce Cloud, ERP, legacy mainframes, backups) so that a deletion request truly removes the subject’s information everywhere. Second, you need to assess the impact of deleted records on system functionality—for example, will orphaned orders, service cases, or analytic summaries break if the customer record is purged? This assessment drives exception handling and fallback designs. Third, you must evaluate the feasibility to restore deleted records to recover from accidental erasures or to comply with other legal holds—this includes designing audit logs or isolated recovery copies that respect data-minimization while still enabling rollback when legitimately needed.

An Enterprise Customer is planning to implement Salesforce to support case management. Below is their current system landscape diagram. Considering Salesforce capabilities, what should the Integration Architect evaluate when integrating Salesforce with the current system landscape?


A.

Integrating Salesforce with Order Management System, Email Management System and Case Management System.


B.

Integrating Salesforce with Order Management System, Data Warehouse and Case Management System.


C.

Integrating Salesforce with Data Warehouse, Order Management and Email Management System.


D.

Integrating Salesforce with Email Management System, Order Management System and Case Management System.





D.
  

Integrating Salesforce with Email Management System, Order Management System and Case Management System.



Explanation:

When Salesforce becomes the central case management platform, it must exchange data with:

→ Email Management System – to capture inbound customer emails as cases and push outbound responses back into users’ mailboxes.
→ Order Management System – so agents can reference order history, shipment details, and billing context when resolving order-related cases.
→ Existing Case Management System – to migrate legacy case records or synchronize ongoing cases, ensuring seamless continuity and archival access.

Other landscape elements like data warehouses are downstream analytics targets rather than part of transactional case workflows. Order Management and Email are mission-critical for day-to-day support operations, while the legacy Case Management system holds the historical data that agents still need. Choosing these three ensures you address both the operational inputs (emails, orders) and the data migration/synchronization requirements for cases.

Which two requirements should the Salesforce Community Cloud support for self-registration and SSO?
Choose 2 answers


A.

SAML SSO and Registration Handler


B.

OpenId Connect Authentication Provider and Registration Handler


C.

SAML SSO and just-in-time provisioning


D.

OpenId Connect Authentication Provider and just-in-time provisioning





C.
  

SAML SSO and just-in-time provisioning



D.
  

OpenId Connect Authentication Provider and just-in-time provisioning



Explanation:

To provide instant community access on first login, Salesforce must auto-provision users when they authenticate via SSO.

→ SAML SSO + JIT: You configure a SAML identity provider and enable Just-in-Time provisioning so that Salesforce consumes assertion attributes (e.g., Federation ID) to create the user, contact, and profile in one transaction.
→ OpenID Connect + JIT: You set up an OpenID Connect Authentication Provider in Setup and implement a Registration Handler class (Auth.RegistrationHandler) that Salesforce invokes on login; it uses the ID token claims to provision the user record automatically, which is how just-in-time provisioning works for authentication providers.

Without JIT, you’d force users through a manual registration flow or pre-provisioning process, delaying access and complicating self-registration.

Universal Containers is a global financial company that sells financial products and services. There is a daily scheduled Batch Apex job that generates invoices from a given set of orders. UC requested building a resilient integration for this Batch Apex job in case invoice generation fails. What should an integration architect recommend to fulfill the requirement?


A.

Build Batch Retry & Error Handling in the Batch Apex Job itself.


B.

Batch Retry & Error Handling report to monitor the error handling.


C.

Build Batch Retry & Error Handling using BatchApexErrorEvent.


D.

Build Batch Retry & Error Handling in the middleware.





C.
  

Build Batch Retry & Error Handling using BatchApexErrorEvent.



Explanation:

Salesforce’s BatchApexErrorEvent is a built-in platform event that fires whenever a batch Apex job fails or throws an unhandled exception. By subscribing to this event—either in Apex (trigger on the event) or via middleware—you can automatically detect failures in your daily invoice generation and implement retry logic or alerting without embedding complex error-handling inside the batch itself. This decouples your business logic from the resilience framework and leverages Salesforce’s event-driven model. A pure “in-job” retry loop risks hitting governor limits or masking systemic issues, and a simple report can’t react in real time. Events give you both visibility and automation around failure recovery for robust, scalable batch processing.
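The decoupled retry pattern can be sketched generically. The field names below are illustrative placeholders, not the actual BatchApexErrorEvent fields; the point is that failure handling lives in a subscriber, outside the batch job:

```python
# Sketch of event-driven batch error handling: a subscriber reacts to
# failure events and re-enqueues the failed scope, keeping retry and
# alerting logic out of the batch job itself.
MAX_ATTEMPTS = 3
retry_queue = []
alerts = []

def on_batch_error(event):
    """Invoked once per failure event; requeue the failed scope or alert."""
    if event["attempt"] < MAX_ATTEMPTS:
        retry_queue.append({"scope": event["scope"],
                            "attempt": event["attempt"] + 1})
    else:
        alerts.append(f"job {event['jobId']} exhausted retries")

# A batch failure publishes an event carrying the job id and failed scope.
on_batch_error({"jobId": "invoice-batch-1",
                "scope": ["order-a01", "order-a02"],
                "attempt": 1})
```

Because the subscriber, not the job, owns the retry policy, a systemic failure surfaces as an alert instead of an invisible in-job retry loop.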

An Architect is asked to build a solution that allows a service to access Salesforce through the API. What is the first thing the Architect should do?


A.

Create a new user with System Administrator profile.


B.

Authenticate the integration using existing Single Sign-On.


C.

Authenticate the integration using existing Network-Based Security.


D.

Create a special user solely for the integration purposes.





D.
  

Create a special user solely for the integration purposes.



Explanation:

The architect should first create a dedicated integration user (D) rather than using an existing admin user (A) or relying on SSO (B) or network security (C). A dedicated integration user follows the principle of least privilege, ensuring the service has only the necessary permissions. This approach improves security (reduced attack surface), auditability (clear separation of integration activities), and stability (avoids disruptions from credential changes). While SSO or network-based authentication might supplement this, they aren't substitutes for a properly scoped integration user. Salesforce best practices explicitly recommend dedicated integration users for API access to avoid coupling integrations with human user accounts.

A company's cloud-based single-page application consolidates data local to the application with data from on-premises and third-party systems. The diagram below typifies the application's combined use of synchronous and asynchronous calls. The company wants to use the average response time of its application's user interface as a basis for certain alerts. For this purpose, the following occurs:
1. Log every call's start and finish date and time to a central analytics data store.
2. Compute response time uniformly as the difference between the start and finish date and time — A to H in the diagram.
Which computation represents the end-to-end response time from the user's perspective?


A.

Sum of A to H


B.

Sum of A to F


C.

Sum of A, G, and H


D.

Sum of A and H





D.
  

Sum of A and H



Explanation:

The user-perceived response time is the delta between the initial request (A) and the final UI update (H). Steps B–G represent backend asynchronous processes (e.g., parallel API calls to on-premise/3rd-party systems) that don't block the UI. While these steps contribute to data freshness, they don't affect the user's perception of responsiveness. The diagram implies A and H are the only synchronous touchpoints from the user's perspective. This aligns with frontend performance monitoring principles, where "time to first render" (A) and "time to final interaction" (H) are critical metrics.

Northern Trail Outfitters (NTO) uses Salesforce to track leads and opportunities, and to capture order details. However, Salesforce isn't the system that holds or processes orders. After the order details are captured in Salesforce, an order must be created in the remote system, which manages the order lifecycle. The Integration Architect for the project is recommending a remote system that will subscribe to the platform event defined in Salesforce. Which integration pattern should be used for this business use case?


A.

Remote Call In


B.

Request and Reply


C.

Fire and Forget


D.

Batch Data Synchronization





C.
  

Fire and Forget



Explanation:

When Salesforce publishes a Platform Event and a remote system subscribes to it, the communication follows a Fire and Forget pattern. Salesforce emits the event without waiting for a response from the subscriber. This decouples systems and supports scalability, but also means there's no delivery guarantee or acknowledgment within the platform. It's suitable for event-driven architectures where real-time responsiveness is desired, and the receiving system is responsible for error handling and retries.

Northern Trail Outfitters (NTO) uses different shipping services for each of the 34 countries it serves. Services are added and removed frequently to optimize shipping times and costs. Sales Representatives serve all NTO customers globally and need to select between valid service(s) for the customer's country and request shipping estimates from that service. Which two solutions should an architect propose?
Choose 2 answers


A.

Use Platform Events to construct and publish shipper-specific events.


B.

Invoke middleware service to retrieve valid shipping methods.


C.

Use middleware to abstract the call to the specific shipping services.


D.

Store shipping services in a picklist that is dependent on a country picklist.





B.
  

Invoke middleware service to retrieve valid shipping methods.



C.
  

Use middleware to abstract the call to the specific shipping services.



Explanation:

Since services vary frequently across countries, hardcoding options (like picklists) isn't scalable. Middleware offers a flexible, centralized abstraction layer that hides the complexity of integrating with multiple shipping providers. It can dynamically return the available options based on country and invoke appropriate services without requiring Salesforce to manage service-specific logic. Platform Events are not suited for this synchronous UI-based interaction, and picklists lack the dynamism needed.
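The abstraction layer the middleware provides can be sketched as a simple adapter registry. Everything here (carrier names, rates, the country table) is hypothetical; the point is that Salesforce calls one façade and the middleware routes to whichever carrier adapters are currently registered for the customer's country:

```python
# Illustrative middleware façade: one endpoint routes to the carrier
# adapters registered per country, so the caller never hard-codes
# carrier-specific logic and carriers can be added or removed freely.
carriers_by_country = {
    "US": ["fastship", "econoship"],
    "BR": ["latamexpress"],
}

adapters = {
    "fastship":     lambda order: 12.50,   # stand-ins for real carrier calls
    "econoship":    lambda order: 7.00,
    "latamexpress": lambda order: 15.25,
}

def shipping_estimates(country, order):
    """Return {carrier: estimate} for every valid carrier in the country."""
    return {c: adapters[c](order)
            for c in carriers_by_country.get(country, [])}

quotes = shipping_estimates("US", {"weight_kg": 3})
```

Adding a 35th country or swapping a carrier then means updating the middleware's registry, with no change to the Salesforce side.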

