Integration-Architect Practice Test Questions

106 Questions


Northern Trail Outfitters has had an increase in requests from other business units to integrate opportunity information from Salesforce with other systems. The developers have started writing asynchronous @future callouts directly to the target systems. The CIO is concerned about the viability of this approach scaling for future growth and has requested a solution recommendation. What should be done to mitigate the CIO's concerns?


A.

Implement an ETL tool and perform nightly batch data loads to reduce network traffic using last modified dates on the opportunity object to extract the right records.


B.

Develop a comprehensive catalog of Apex classes to eliminate the need for redundant code and use custom metadata to hold the endpoint information for each integration.


C.

Refactor the existing @future methods to use Enhanced External Services, import OpenAPI 2.0 schemas, and update flows to use services instead of Apex.


D.

Implement an Enterprise Service Bus for service orchestration, mediation, and routing, and to decouple dependencies across systems.





D.
  

Implement an Enterprise Service Bus for service orchestration, mediation, and routing, and to decouple dependencies across systems.



Explanation:

An ESB provides a hub-and-spoke architecture that decouples Salesforce from every downstream system, centralizing routing, transformation, error handling, and service orchestration in middleware rather than in Apex. This prevents point-to-point @future callouts from proliferating across classes, where they become hard to manage, hard to monitor, and unable to scale as the number of integrations grows. With an ESB, you can throttle, queue, retry, and monitor each message flow, apply consistent security policies, and reuse shared adapters for protocol translation or data transformation. It also gives you a clear audit trail and SLA enforcement outside of Salesforce, offloading heavy processing from your org. In short, it addresses the CIO's concerns about viability, governance, and scale much better than embedding future callouts or ETL jobs in Apex.
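For a concrete sense of the refactor, the sketch below replaces per-system @future callouts with a single platform event that the ESB subscribes to and routes. The Opportunity_Change__e event and its fields are hypothetical, not part of the question.

    // Hypothetical Opportunity_Change__e platform event; the ESB subscribes
    // (e.g., via CometD or the Pub/Sub API) and routes messages to each
    // target system, so Apex no longer calls the targets directly.
    trigger OpportunityIntegration on Opportunity (after insert, after update) {
        List<Opportunity_Change__e> events = new List<Opportunity_Change__e>();
        for (Opportunity opp : Trigger.new) {
            events.add(new Opportunity_Change__e(
                Record_Id__c = opp.Id,
                Stage__c     = opp.StageName,
                Amount__c    = opp.Amount
            ));
        }
        // One publish replaces N point-to-point @future callouts.
        EventBus.publish(events);
    }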

An Integration Architect has built a Salesforce application that integrates multiple systems and keeps them synchronized via Platform Events. What is taking place if events are only being published?


A.

The platform events are published immediately before the Apex transaction completes.


B.

The platform events are published after the Apex transaction completes.


C.

The platform events have a trigger in Apex.


D.

The platform events are being published from Apex.





B.
  

The platform events are published after the Apex transaction completes.



Explanation:

By default, platform events use the "Publish After Commit" behavior, meaning the event message is only enqueued once the transaction successfully commits. This guarantees subscribers see only committed data and prevents events from firing if the transaction rolls back. If you need subscribers to act on data created in that same transaction (e.g., new records), you must choose "Publish After Commit." The alternative "Publish Immediately" mode can deliver events before the transaction completes, even if it later rolls back, which is not suitable when subscribers rely on committed state. Hence, when you see only publishes happening, you're observing the post-commit enqueue.
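A minimal sketch of the default behavior, assuming a hypothetical Order_Shipped__e platform event configured with Publish After Commit:

    Account acct = new Account(Name = 'Acme');
    insert acct;

    // Enqueued here, but delivered to subscribers only after the whole
    // transaction commits; if a later statement throws and the transaction
    // rolls back, the event is never delivered.
    EventBus.publish(new Order_Shipped__e(Account_Id__c = acct.Id));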

An enterprise customer with more than 10 million customers has the following systems and conditions in their landscape:


A.

Enterprise Billing System (EBS) - All customers' monthly billing is generated by this system.


B.

Enterprise Document Management System (DMS) - Bills mailed to customers are maintained in the Document Management System.


C.

Salesforce CRM (CRM) - Customer information, sales, and support information is maintained in CRM.





A.
  

Enterprise Billing System (EBS) - All customers' monthly billing is generated by this system.



C.
  

Salesforce CRM (CRM) - Customer information, sales, and support information is maintained in CRM.



Explanation:

Enterprise Billing System (EBS) must be integrated because:
It generates all customer billing, meaning financial data must sync accurately with Salesforce.
Ensures invoices reflect the latest customer updates (e.g., address changes, pricing agreements).
Critical for revenue tracking and compliance.

Salesforce CRM (CRM) is the system of record for customer data, meaning:
All sales, support, and customer profile updates originate here.
Must feed accurate customer data to EBS to prevent billing errors.
Enables customer service agents to view billing history (via integration) without leaving Salesforce.

Why not the Document Management System (DMS)?

While DMS stores bill copies, it’s a downstream system that can be updated via EBS (not a direct integration priority).
Bills are typically generated in EBS first and then archived in DMS, so EBS integration takes priority over a direct DMS integration.

Key Integration Needs:

Bidirectional sync between CRM (customer updates) and EBS (billing records).
Real-time API calls for billing triggers (e.g., contract changes) or batch syncs for large data volumes.
Error handling to reconcile discrepancies across 10M+ records.

This approach ensures billing accuracy while maintaining a single customer view in Salesforce.
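As an illustration of the "billing history without leaving Salesforce" point above, a hedged Apex sketch follows; the EBS_API Named Credential, the /invoices path, and the response shape are all assumptions about the billing system:

    public with sharing class BillingHistoryService {
        // Fetch a customer's invoices from the (hypothetical) EBS REST API.
        public static String getBillingHistory(String customerExternalId) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:EBS_API/customers/'
                            + customerExternalId + '/invoices');
            req.setMethod('GET');
            req.setTimeout(30000); // 30 seconds, in milliseconds
            HttpResponse res = new Http().send(req);
            return res.getBody(); // JSON to parse in a controller or LWC
        }
    }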

Northern Trail Outfitters wants to improve the quality of callouts from Salesforce to their REST APIs. For this purpose, they will require all API clients/consumers to adhere to RESTful API Modeling Language (RAML) specifications that include field-level definitions of every API request and response payload. RAML specs serve as interface contracts that Apex REST API Clients can rely on. Which two design specifications should the Integration Architect include in the integration architecture to ensure that Apex REST API Client unit tests confirm adherence to the RAML specs?

Choose 2 answers


A.

Call the Apex REST API Clients in a test context to get the mock response.


B.

Require the Apex REST API Clients to implement the HttpCalloutMock.


C.

Call the HttpCalloutMock implementation from the Apex REST API Clients.


D.

Implement HttpCalloutMock to return responses per RAML specification.





B.
  

Require the Apex REST API Clients to implement the HttpCalloutMock.



D.
  

Implement HttpCalloutMock to return responses per RAML specification.



Explanation:

Testing HTTP callouts in Apex requires mocking the responses so tests don’t perform real outbound traffic. By having each client class implement the HttpCalloutMock interface, you can define a mock respond() method that returns an HttpResponse built to exactly match your RAML-defined payloads (status code, headers, and JSON/XML body). In your unit tests you use Test.setMock(HttpCalloutMock.class, new YourMock()), ensuring every callout in test context returns a RAML-compliant stub. This guarantees your tests fail if the mock doesn’t conform, effectively validating adherence to the RAML contract.
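A minimal sketch of that pattern, assuming a hypothetical InventoryClient class under test and a RAML contract that defines a response with "sku" and "available" fields:

    @isTest
    private class InventoryClientTest {
        // Mock whose response mirrors the RAML-defined payload field-for-field.
        private class RamlCompliantMock implements HttpCalloutMock {
            public HttpResponse respond(HttpRequest req) {
                HttpResponse res = new HttpResponse();
                res.setStatusCode(200);
                res.setHeader('Content-Type', 'application/json');
                res.setBody('{"sku":"A-100","available":42}');
                return res;
            }
        }

        @isTest
        static void calloutMatchesRamlContract() {
            Test.setMock(HttpCalloutMock.class, new RamlCompliantMock());
            Test.startTest();
            InventoryClient.Result r = InventoryClient.checkAvailability('A-100');
            Test.stopTest();
            // Fails if the client's parsing drifts from the RAML contract.
            System.assertEquals(42, r.available);
        }
    }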

A subscription-based media company's system landscape forces many subscribers to maintain multiple accounts and to log in more than once. An Identity and Access Management (IAM) system, which supports SAML and OpenID Connect, was recently implemented to improve their subscriber experience through self-registration and Single Sign-On (SSO). The IAM system must integrate with Salesforce to give new self-service customers instant access to Salesforce Community Cloud. Which two requirements should Salesforce Community Cloud support for self-registration and SSO? Choose 2 answers


A.

SAML SSO and Registration Handler


B.

OpenID Connect Authentication Provider and Registration Handler


C.

SAML SSO and just-in-time provisioning


D.

OpenID Connect Authentication Provider and just-in-time provisioning





C.
  

SAML SSO and just-in-time provisioning



D.
  

OpenID Connect Authentication Provider and just-in-time provisioning



Explanation:

To give new users instant Community access at first login, you must enable Just-in-Time (JIT) provisioning so SAML or OIDC assertions automatically create the user account, contact, and profile in Salesforce. For a SAML provider, enable Just-in-Time user provisioning in the Single Sign-On settings so the assertion's attributes (e.g., Federation ID) drive account creation. For an OpenID Connect provider, configure it as an Auth. Provider in Setup, select a Registration Handler or let Salesforce auto-generate one, and enable JIT so the callback payload creates the user. Without JIT, you'd need manual provisioning or registration-handler logic that still requires a separate registration step.
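A skeleton Registration Handler for the OpenID Connect path. The profile name, username suffix, and locale values are assumptions, and a production community handler would also create the portal Account and Contact:

    global class CommunityRegistrationHandler implements Auth.RegistrationHandler {
        // Called on a user's first SSO login; returning a User record is what
        // gives self-registered subscribers instant Community access.
        global User createUser(Id portalId, Auth.UserData data) {
            Profile p = [SELECT Id FROM Profile
                         WHERE Name = 'Customer Community User' LIMIT 1];
            return new User(
                FirstName = data.firstName,
                LastName  = data.lastName,
                Email     = data.email,
                Username  = data.email + '.community',
                Alias     = data.email.split('@')[0].left(8),
                ProfileId = p.Id,
                EmailEncodingKey  = 'UTF-8',
                LanguageLocaleKey = 'en_US',
                LocaleSidKey      = 'en_US',
                TimeZoneSidKey    = 'America/New_York'
            );
        }

        // Called on subsequent logins to keep attributes in sync.
        global void updateUser(Id userId, Id portalId, Auth.UserData data) {
            update new User(Id = userId, Email = data.email);
        }
    }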

A large enterprise customer has decided to implement Salesforce as their CRM. The current system landscape includes the following:
1. An Enterprise Resource Planning (ERP) solution that is responsible for Customer Invoicing and Order fulfillment.
2. A Marketing solution they use for email campaigns.
The enterprise customer needs their sales and service associates to use Salesforce to view and log interactions with customers and prospects. Which system should be the system of record for their customers and prospects?


A.

ERP with all prospect data from Marketing and Salesforce.


B.

Marketing with all customer data from Salesforce and ERP.


C.

Salesforce with relevant Marketing and ERP information.


D.

New Custom Database for Customers and Prospects.





C.
  

Salesforce with relevant Marketing and ERP information.



Explanation:

Since sales and service associates must log interactions in Salesforce, it should serve as the primary system of record for customer and prospect master data. Billing data from the ERP and campaign data from the marketing solution should be surfaced in Salesforce (via middleware or connectors), but those systems should not own the golden customer record. Making Salesforce the authoritative CRM ensures a single, consistent view of customer status, activities, and support history, simplifying adoption, reporting, and process automation across departments. The other systems then become downstream consumers or analytical sinks, not the source of truth.

Northern Trail Outfitters uses a custom Java application to display code coverage and test results for all of their enterprise applications and is planning to include Salesforce as well. Which Salesforce API should an Integration Architect use to meet the requirement?


A.

SOAP API


B.

Analytics REST API


C.

Metadata API


D.

Tooling API





D.
  

Tooling API



Explanation:

The Salesforce Tooling API provides programmatic access to development metadata and diagnostic data, most notably Apex test results and code coverage metrics, without needing to run tests or deploy metadata via the Metadata API. The ApexCodeCoverage and ApexOrgWideCoverage objects expose coverage percentages and line-by-line results, which the custom Java application can query over REST or SOAP. This lets you fetch up-to-date coverage metrics and display them alongside your other enterprise test results. Neither the standard SOAP API (which doesn't surface coverage) nor the Metadata API (which is for metadata deployment) nor the Analytics REST API (which surfaces reports and dashboards) exposes these granular development artifacts. Only the Tooling API is designed for IDE and dev-ops integrations around tests and coverage.
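The shape of the call, shown in Apex for brevity (the Java application would issue the same authenticated HTTP request; the ToolingAPI Named Credential is an assumption):

    String soql = 'SELECT PercentCovered FROM ApexOrgWideCoverage';
    HttpRequest req = new HttpRequest();
    req.setEndpoint('callout:ToolingAPI/services/data/v60.0/tooling/query/?q='
                    + EncodingUtil.urlEncode(soql, 'UTF-8'));
    req.setMethod('GET');
    HttpResponse res = new Http().send(req);
    System.debug(res.getBody()); // JSON containing the org-wide coverage percentage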

Universal Containers (UC) uses Salesforce to track the following customer data:
1. Leads
2. Contacts
3. Accounts
4. Cases

Salesforce is considered to be the system of record for the customer. In addition to Salesforce, customer data exists in an Enterprise Resource Planning (ERP) system, a ticketing system, and an enterprise data lake. Each of these additional systems has its own unique identifier. UC plans on using middleware to integrate Salesforce with the external systems. UC has a requirement to update the proper external system with record changes in Salesforce, and vice versa. Which two solutions should an Integration Architect recommend to handle this requirement?

Choose 2 answers


A.

Locally cache external IDs at the middleware layer and design business logic to map updates between systems.


B.

Store unique identifiers in an External ID field in Salesforce and use this to update the proper records across systems.


C.

Use Change Data Capture to update downstream systems accordingly when a record changes.


D.

Design an MDM solution that maps external IDs to the Salesforce record ID.





B.
  

Store unique identifiers in an External ID field in Salesforce and use this to update the proper records across systems.



C.
  

Use Change Data Capture to update downstream systems accordingly when a record changes.



Explanation:

A robust bi-directional integration strategy needs two components. First, an External ID field on each Salesforce object holds the corresponding record's key from each external system. By marking that field as an External ID, you can upsert by that value in both directions and ensure you target the correct record in each system. Second, Change Data Capture (CDC) events fire whenever records are created, updated, deleted, or undeleted in Salesforce. Subscribers (your middleware or downstream systems) can consume these events in near real time and push the changes out to the matching external records. This combination guarantees accurate routing of updates and keeps all systems in sync without polling or batch jobs.
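Two minimal sketches of those components; ERP_Id__c is a hypothetical External ID field on Account, and Change Data Capture is assumed to be enabled for Account:

    // 1. Upsert by External ID: the caller never needs the Salesforce record ID.
    Account acct = new Account(Name = 'Acme Corp', ERP_Id__c = 'ERP-000123');
    upsert acct ERP_Id__c; // matches on the external key, inserts if absent

    // 2. A CDC subscriber in Apex (middleware would more typically subscribe
    // over CometD or the Pub/Sub API).
    trigger AccountChangeTrigger on AccountChangeEvent (after insert) {
        for (AccountChangeEvent evt : Trigger.new) {
            EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
            System.debug(header.changeType + ' on '
                         + String.join(header.recordIds, ','));
        }
    }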

Northern Trail Outfitters (NTO) is looking to integrate three external systems that run nightly data enrichment processes in Salesforce. NTO has the following strict security and auditing requirements:
1. The external systems must follow the principle of least privilege, and
2. The activities of the external systems must be available for audit.
What should an Integration Architect recommend as a solution for these integrations?


A.

A shared integration user for the three external system integrations.


B.

A shared Connected App for the three external system integrations.


C.

A unique integration user for each external system integration.


D.

A Connected App for each external system integration.





D.
  

A Connected App for each external system integration.



Explanation:

Creating individual Connected Apps for each external system best meets both security and auditing requirements. This approach: 1) Enables the principle of least privilege by allowing separate permission sets for each integration, 2) Provides clear audit trails by distinguishing activities from different systems, and 3) Allows revocation of access per system if needed. Option A (shared integration user) violates least privilege and obscures audit trails. Option B (shared Connected App) similarly prevents distinguishing between systems. Option C (unique users) works but is less maintainable than OAuth-based Connected Apps. Salesforce security best practices recommend Connected Apps for system integrations because they: 1) Use OAuth for secure authentication, 2) Support IP restrictions and other security policies, and 3) Generate distinct audit entries in setup audit trails. Each Connected App can be configured with only the necessary API access scopes, implementing least privilege. The audit logs will clearly show which external system performed each action, fulfilling both security requirements while maintaining system accountability.

A customer's enterprise architect has identified requirements around caching, queuing, error handling, alerts, retries, event handling, etc. The company has asked the Salesforce integration architect to help fulfill such aspects with their Salesforce program. Which three recommendations should the Salesforce integration architect make? Choose 3 answers


A.

Transforming a fire-and-forget mechanism to request-reply should be handled by middleware tools (like ETL/ESB) to improve performance.


B.

Provide true message queueing for integration scenarios (including orchestration, process choreography, quality of service, etc.) given that a middleware solution is required.


C.

Message transformation and protocol translation should be done within Salesforce. Recommend leveraging Salesforce native protocol conversion capabilities, as middleware tools are NOT suited for such tasks.


D.

Event handling processes such as writing to a log, sending an error or recovery process, or sending an extra message, can be assumed to be handled by middleware.


E.

Event handling in a publish/subscribe scenario, the middleware can be used to route requests or messages to active data-event subscribers from active data event publishers.





B.
  

Provide true message queueing for integration scenarios (including orchestration, process choreography, quality of service, etc.) given that a middleware solution is required.



D.
  

Event handling processes such as writing to a log, sending an error or recovery process, or sending an extra message, can be assumed to be handled by middleware.



E.
  

Event handling in a publish/subscribe scenario, the middleware can be used to route requests or messages to active data-event subscribers from active data event publishers.



Explanation:

For complex integration requirements like caching, queuing, and error handling, middleware solutions are essential. Option B correctly identifies middleware's role in message queuing and orchestration - capabilities beyond Salesforce's native features. Option D acknowledges middleware's strength in comprehensive event handling like logging and error recovery, which would be cumbersome to build in Salesforce. Option E highlights middleware's publish/subscribe routing capabilities, crucial for decoupled architectures. Option A is incorrect because transforming to request-reply doesn't inherently improve performance and isn't always appropriate. Option C wrongly suggests protocol translation within Salesforce; middleware is actually better suited for this. Enterprise integration patterns demonstrate that middleware excels at: 1) Advanced queuing (guaranteed delivery, retries), 2) Complex event processing (filtering, routing), and 3) Cross-system monitoring. Salesforce's native capabilities focus on application-specific functionality, while middleware handles cross-cutting integration concerns. This separation of concerns aligns with the integration architect's role in designing solutions that leverage each platform's strengths while meeting enterprise requirements for reliability and observability.

Universal Containers (UC) is currently managing a custom monolithic web service that runs on an on-premise server. This monolithic web service is responsible for Point-to-Point (P2P) integrations between:
1. Salesforce and a legacy billing application
2. Salesforce and a cloud-based Enterprise Resource Planning application
3. Salesforce and a data lake.
UC has found that the tight interdependencies between systems are causing integrations to fail.
What should an architect recommend to decouple the systems and improve performance of the integrations?


A.

Re-write and optimize the current web service to be more efficient.


B.

Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.


C.

Use the Salesforce Bulk API when integrating back into Salesforce.


D.

Move the custom monolithic web service from on-premise to a cloud provider.





B.
  

Leverage modular design by breaking up the web service into smaller pieces for a microservice architecture.



Explanation:

A tightly coupled, monolithic web service becomes a single point of failure and performance bottleneck. By decomposing it into a set of independently deployable microservices—each handling one integration use-case—you achieve fault isolation, independent scaling, and shorter development cycles. Each microservice can own a bounded context (e.g., “Billing sync”, “ERP orders”, “Data lake bridge”) and publish events or expose APIs for the rest of the ecosystem. This aligns with modern architecture principles that maximize decoupling, resiliency, and team autonomy while improving performance over a single on-premise endpoint.

Northern Trail Outfitters needs to make synchronous callouts to "available to promise" services to query product availability and reserve inventory during the customer checkout process. Which two considerations should an integration architect make when building a scalable integration solution?
Choose 2 answers


A.

The typical and worst-case historical response times.


B.

The number of batch jobs that can run concurrently.


C.

How many concurrent service calls are being placed.


D.

The maximum query cursors open per user on the service.





A.
  

The typical and worst-case historical response times.



C.
  

How many concurrent service calls are being placed.



Explanation:

When designing real-time "available to promise" (ATP) callouts, you must dimension both performance and scale against Salesforce's own limits and your external service's SLAs. First, measure your external system's typical and worst-case response times, since Salesforce enforces a maximum callout timeout of 120 seconds (and blocks the user's UI until the call returns). Second, track concurrent callout volume: Salesforce limits you to 100 HTTP callouts per Apex transaction, and synchronous transactions running longer than 5 seconds count toward your org's limit of 10 concurrent long-running transactions. Knowing both metrics lets you decide whether you need middleware, caching layers, or asynchronous patterns to avoid hitting timeouts and concurrency governors under load.
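A hedged sketch of how those two measurements translate into code; the ATP_Service Named Credential and query path are assumptions:

    HttpRequest req = new HttpRequest();
    req.setEndpoint('callout:ATP_Service/availability?sku=A-100');
    req.setMethod('GET');
    // Stay well under the 120-second platform maximum: size the timeout from
    // the service's worst-case historical response time plus headroom.
    req.setTimeout(10000); // 10 seconds, in milliseconds
    HttpResponse res = new Http().send(req);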

