Salesforce-MuleSoft-Platform-Architect Practice Test Questions

152 Questions


A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?


A. Use a CloudHub autoscaling policy to add CloudHub workers


B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker


C. Increase the size of the CloudHub worker(s)


D. Increase the number of CloudHub workers





C.
  Increase the size of the CloudHub worker(s)

Explanation

The issue is an "out-of-file-space" error on a single CloudHub worker due to the JDBC driver creating temporary disk files. Increasing the worker size (e.g., from 0.1 vCores to 1 vCore) provides more disk space, directly addressing the resource constraint.

Why not A or D?
Adding more workers (horizontal scaling) doesn’t solve the disk space issue, as each worker has the same limited disk capacity.
Why not B?
CloudHub autoscaling cannot dynamically increase worker size; it only adds/removes workers.

Reference:
MuleSoft Documentation on CloudHub worker sizing.
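As a hedged illustration, the worker size can be pinned at deployment time through the Mule Maven plugin's CloudHub deployment configuration. The element names below follow the `mule-maven-plugin` conventions; the application name and values are examples only:

```xml
<!-- mule-maven-plugin CloudHub deployment sketch.
     workerType sets the vCore size and, with it, the disk space
     available to the JDBC driver's temporary files. -->
<cloudHubDeployment>
  <uri>https://anypoint.mulesoft.com</uri>
  <muleVersion>4.4.0</muleVersion>
  <applicationName>purchase-ingest-api</applicationName>
  <environment>Production</environment>
  <workers>1</workers>
  <!-- e.g. move from MICRO (0.1 vCore) to a larger size for more disk -->
  <workerType>MEDIUM</workerType>
</cloudHubDeployment>
```

Redeploying with a larger `workerType` is the configuration-level equivalent of answer C: vertical scaling of the single worker.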

What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?


A. Redis distributed cache


B. java.util.WeakHashMap


C. Persistent Object Store


D. File-based storage





C.
  Persistent Object Store

Explanation:

In MuleSoft’s Anypoint Platform, the Persistent Object Store is the most performant and reliable out-of-the-box solution for tracking transaction state in asynchronous, long-running processes — especially when deployed across multiple CloudHub workers.

Here’s why it stands out:
🧠 Persistence across restarts and redeployments: Unlike in-memory solutions, the Persistent Object Store retains data even if the app crashes or restarts.
🌐 Worker-safe: It’s designed to work across multiple CloudHub workers, ensuring consistent state management in distributed environments.
⚙️ Optimized for Mule runtime: It’s tightly integrated with Mule’s architecture and supports TTL (time-to-live), automatic cleanup, and key-based retrieval.
📦 No external setup required: Unlike Redis or custom file-based solutions, it’s available out-of-the-box with minimal configuration.

❌ Why the Other Options Are Less Suitable:
A. Redis distributed cache
Requires external setup and isn’t native to Anypoint Platform. Adds complexity and latency.
B. java.util.WeakHashMap
In-memory only and not thread-safe across workers. Data is lost on restart.
D. File-based storage
Not scalable or reliable in CloudHub. Disk space is limited and not shared across workers.

🔗 Reference:
MuleSoft Docs – Object Store v2
MuleSoft Certified Platform Architect – Topic 2 Quiz
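A minimal sketch of tracking state with the Mule 4 ObjectStore connector follows. The store and key names are hypothetical; the element and attribute names follow the ObjectStore connector, with Object Store v2 used as the backing store when the app runs on CloudHub:

```xml
<!-- A persistent store shared by all workers of one CloudHub app -->
<os:object-store name="transactionStateStore"
                 persistent="true"
                 entryTtl="1"
                 entryTtlUnit="DAYS"/>

<flow name="track-transaction-state">
  <!-- Record the current state of the long-running process -->
  <os:store key="#[vars.transactionId]" objectStore="transactionStateStore">
    <os:value>#[payload]</os:value>
  </os:store>

  <!-- Any worker can later read the same state back by key -->
  <os:retrieve key="#[vars.transactionId]" objectStore="transactionStateStore"/>
</flow>
```

Because the store is persistent, a state written by one worker is visible to the other workers of the same deployment and survives restarts.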

An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?


A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state


B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state


C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state


D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state





D.
  When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state

Explanation:

The CloudHub Object Store is designed to provide persistence for data that needs to be shared across multiple workers within a single CloudHub application deployment.

D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state:
This is the ideal use case for the CloudHub Object Store. In CloudHub, workers within a single application are not clustered in the traditional sense, so they don't share in-memory cache. By using the persistent Object Store (Object Store V2), any worker that updates the "Quote of the Day" cache will make that updated value immediately available to all other workers in the same application deployment, ensuring a consistent cache state.

A. When there are three CloudHub deployments... to three separate CloudHub regions...:
The CloudHub Object Store is regional. This means an application's object store is only available within the region where the application is deployed. Sharing the cache state across different regions would require a different, more complex mechanism, possibly involving the Object Store REST API or an external database.

B. When there are two CloudHub deployments... by two Anypoint Platform business groups...:
Object stores are isolated per application deployment. Deployments in different business groups, even if in the same region, cannot share an object store using the standard connector. They would require the use of the Object Store REST API with proper permissions for cross-business group access.

C. When there is one deployment... to CloudHub and another deployment to a customer-hosted Mule runtime...:
A CloudHub deployment cannot directly share its persistent Object Store with a customer-hosted (on-premise) Mule runtime using the connector. The on-premise runtime would need to use the Object Store REST API, or a different shared cache solution would be required entirely.

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios. What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?


A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry


B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers


C. Create API Notebooks and include them in the relevant Anypoint Exchange entries


D. Make relevant APIs discoverable via an Anypoint Exchange entry





C.
  Create API Notebooks and include them in the relevant Anypoint Exchange entries

Explanation

In Anypoint Exchange you can add API Notebooks that mix prose with executable JavaScript code blocks. Consumers can tweak the code and click Play to invoke real endpoints—ideal for scenario-driven, multi-API walkthroughs.

Eliminate others:
A. Mocking services help try endpoints before implementation, but they don’t provide runnable client code tutorials across scenarios.
B. API Console/Exchange docs are great for spec and try-it, but not for executable code notebooks.
D. Discoverability alone doesn’t deliver code-centric, runnable documentation. (You still need Notebooks.)

References:
Documenting an Asset Using API Notebook (create/run code blocks in Exchange).
Documenting an API (Exchange supports API Notebooks for interactive experimentation).
Exchange portal examples showing runnable API Notebook pages.
MuleSoft Developer Portal overview mentioning runnable code samples in API Notebook.

In an organization, the InfoSec team is investigating Anypoint Platform related data traffic. From where does most of the data available to Anypoint Platform for monitoring and alerting originate?


A. From the Mule runtime or the API implementation, depending on the deployment model


B. From various components of Anypoint Platform, such as the Shared Load Balancer, VPC, and Mule runtimes


C. From the Mule runtime or the API Manager, depending on the type of data


D. From the Mule runtime irrespective of the deployment model





D.
  From the Mule runtime irrespective of the deployment model

Explanation

Most of the data used by Anypoint Platform for monitoring and alerting — including metrics, logs, and event traces — originates from the Mule runtime itself, regardless of whether the application is deployed to:
CloudHub
Runtime Fabric
On-premises servers
Hybrid environments

The Mule runtime is responsible for:
📊 Emitting performance metrics (CPU, memory, throughput)
📁 Generating logs and error traces
📡 Sending operational data to Anypoint Monitoring, Runtime Manager, and API Manager

This design ensures consistent observability across deployment models. Even when APIs are managed via API Manager or routed through a Shared Load Balancer, the core telemetry still comes from the Mule runtime.

❌ Why the Other Options Are Incorrect:
A. Suggests the origin is conditional on the deployment model, which is misleading — the Mule runtime is always the source.
B. While other components (e.g., VPC, Load Balancer) may contribute metadata, they are not the primary source of monitoring data.
C. API Manager provides policy enforcement and analytics, but runtime-level metrics still come from the Mule runtime.

🔗 Reference:
MuleSoft Docs – Anypoint Monitoring Overview
MuleSoft Certified Platform Architect-Level 1 Practice

A Mule application exposes an HTTPS endpoint and is deployed to three CloudHub workers that do not use static IP addresses. The Mule application expects a high volume of client requests in short time periods. What is the most cost-effective infrastructure component that should be used to serve the high volume of client requests?


A. A customer-hosted load balancer


B. The CloudHub shared load balancer


C. An API proxy


D. Runtime Manager autoscaling





B.
  The CloudHub shared load balancer

Explanation

Cost-effectiveness:
The CloudHub shared load balancer is included with your CloudHub subscription at no additional cost for basic functionality. Other options, like a Dedicated Load Balancer or customer-hosted solution, would incur significant extra costs.
Built-in load balancing:
When you deploy an application to more than one CloudHub worker, the shared load balancer automatically distributes incoming traffic using a round-robin algorithm. Since the application is already deployed to three workers, this built-in capability is the most direct and economical way to handle high request volumes.
HTTPS support:
The shared load balancer supports HTTPS endpoints. It includes a shared SSL certificate, so no custom certificate is required.
No static IP dependency:
The shared load balancer uses DNS to route traffic to the workers and does not require static IP addresses, which aligns with the application's deployment configuration.

Why the other options are incorrect
A. A customer-hosted load balancer:
This would be significantly more expensive due to infrastructure, setup, and maintenance costs. The lack of static IPs for the CloudHub workers also makes a custom-hosted load balancer challenging to configure.
C. An API proxy:
While an API proxy can provide caching, security, and traffic management, it is primarily a component managed within API Manager for governance, not a high-volume load-balancing solution by itself. It also typically requires a load balancer in front of it.
D. Runtime Manager autoscaling:
Autoscaling is for dynamically scaling the number of workers up or down based on load. While it's a good tool for managing variable loads, it is not a direct load-balancing component and has additional licensing requirements. Since the application is already on three workers, the immediate need is for an efficient, cost-effective way to distribute the high volume of requests, which is the function of the shared load balancer.
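The round-robin distribution described above can be sketched in a few lines of Python. This is a simplified model for illustration only, not the actual CloudHub load balancer; the class and worker names are made up:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal model of round-robin distribution across workers."""

    def __init__(self, workers):
        # cycle() yields the workers in order, wrapping around forever
        self._workers = cycle(workers)

    def route(self, request):
        # Each incoming request goes to the next worker in the rotation
        worker = next(self._workers)
        return worker, request

balancer = RoundRobinBalancer(["worker-1", "worker-2", "worker-3"])
targets = [balancer.route(f"req-{i}")[0] for i in range(6)]
print(targets)
# → ['worker-1', 'worker-2', 'worker-3', 'worker-1', 'worker-2', 'worker-3']
```

With three workers, every third request lands on the same worker, which is how the shared load balancer spreads a high request volume evenly.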

What best explains the use of auto-discovery in API implementations?


A. It makes API Manager aware of API implementations and hence enables it to enforce policies


B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform


C. It enables Anypoint Exchange to discover assets and makes them available for reuse


D. It enables Anypoint Analytics to gain insight into the usage of APIs





A.
  It makes API Manager aware of API implementations and hence enables it to enforce policies

Explanation:

In the implementation you add the API’s auto-discovery configuration (with the API ID). When the app starts, the Mule runtime registers with API Manager, so the platform can push/enforce policies (e.g., rate limiting, OAuth, CORS), control access via contracts, and collect usage telemetry. The essence/purpose is to let API Manager manage the live implementation.

Eliminate others:
B. Studio discovering platform APIs — not what auto-discovery does.
C. Exchange asset discovery — Exchange publishing is separate; auto-discovery doesn’t publish or “make reusable” assets.
D. Analytics insight — usage data collection happens as a consequence of being managed in API Manager, but analytics alone is not the purpose; policy/governance is.
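In a Mule 4 application, auto-discovery is a one-line configuration element. A sketch, with placeholder property and flow names:

```xml
<!-- Links this running app to its API instance in API Manager, so the
     platform can push and enforce policies and track contracts.
     ${api.id} is the API instance ID shown in API Manager. -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="main-api-flow"/>
```

When the app starts, the runtime uses this element to register with API Manager, which is exactly the behavior answer A describes.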

When must an API implementation be deployed to an Anypoint VPC?


A. When the API Implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer- managed AWS instance


B. When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access


C. When the API implementation must be deployed to a production AWS VPC using the Mule Maven plugin


D. When the API Implementation must write to a persistent Object Store





B.
  When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access

Explanation:

An API implementation must be deployed to an Anypoint Virtual Private Cloud (VPC) when it needs to be accessible within a subnet of a restricted customer-hosted network that does not allow public access. Anypoint VPC provides a private, isolated network environment in CloudHub, enabling secure connectivity to customer-hosted networks (e.g., via VPN or Transit Gateway) without exposing the API publicly. This is critical for scenarios where the API must operate within a restricted network, such as for internal systems or sensitive data.

Why not A?
Invoking publicly exposed services outside CloudHub doesn’t require an Anypoint VPC, as Mule applications can make outbound calls over the public internet without a VPC.
Why not C?
Deploying to a production AWS VPC using the Mule Maven Plugin is not a requirement for Anypoint VPC; it refers to a deployment method, not a network necessity.
Why not D?
Writing to a persistent Object Store is a CloudHub feature available regardless of VPC usage and doesn’t mandate a VPC.

Reference:
MuleSoft Documentation on Anypoint VPC and CloudHub Networking Guide.

When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because that downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?


A. An SLA for the upstream API CANNOT be provided


B. The invocation of the downstream API will run to completion without timing out


C. Each modern API must be easy to consume, so should avoid complex authentication mechanisms such as SAML or JWT


D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes





A.
   An SLA for the upstream API CANNOT be provided

Explanation:

Why A is correct
If your upstream API depends on one downstream API call to complete the request, then the upstream API’s end-to-end latency and reliability are bounded by that downstream dependency. When the downstream API has no SLA (no guaranteed latency/availability), the upstream team cannot credibly commit to an SLA for response time (and often not for “successful response availability” either), because a single slow or unresponsive downstream call can delay or prevent the upstream response.

Also, not setting a timeout means the request thread or flow can remain blocked waiting for the downstream response, which increases the risk of thread starvation and cascading performance issues, further undermining any SLA commitment.

MuleSoft’s HTTP Request behavior explicitly describes response timeout as the maximum time the request blocks the flow waiting for the HTTP response—that’s exactly the point: without a defined bound you can’t bound your API’s response behavior.

Why the other options are wrong

B. “The invocation … will run to completion without timing out” — Incorrect
Even if you don’t set a timeout, platforms typically have defaults. In MuleSoft, the HTTP Request operation uses a default response timeout from the Mule configuration when not explicitly set (commonly documented as 10,000 ms).
So “no timeout” doesn’t reliably mean “it will never time out,” and it definitely doesn’t guarantee completion.

C. Authentication comment (SAML/JWT) — Incorrect / irrelevant
This option is unrelated to downstream timeout or SLA design.

D. “A load-dependent timeout < 1000 ms … by the Mule runtime …” — Incorrect
There isn’t a Mule runtime behavior that applies a load-dependent sub-1000ms timeout by default. Mule timeouts are configuration-driven (connector, app, or gateway policy defaults), not “load-dependent < 1s” magic.
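For contrast, a bounded invocation of the downstream API looks like the following sketch (config name and path are placeholders; `responseTimeout` is the HTTP Request operation's attribute, in milliseconds):

```xml
<!-- Mule 4 HTTP Request with an explicit response timeout.
     When responseTimeout is omitted, the runtime's default response
     timeout applies (commonly documented as 10,000 ms) — so "no
     timeout" does not mean "waits forever". -->
<http:request method="GET"
              config-ref="Downstream_HTTP_Config"
              path="/quotes/today"
              responseTimeout="5000"/>
```

Setting an explicit bound like this is what lets the upstream API commit to a response-time SLA despite an unreliable dependency.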

An organization wants MuleSoft-hosted runtime plane features (such as HTTP load balancing, zero downtime, and horizontal and vertical scaling) in its Azure environment. What runtime plane minimizes the organization's effort to achieve these features?


A. Anypoint Runtime Fabric


B. Anypoint Platform for Pivotal Cloud Foundry


C. CloudHub


D. A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes





A.
  Anypoint Runtime Fabric

Explanation:

The organization wants MuleSoft-hosted runtime plane features (HTTP load balancing, zero-downtime deployments, horizontal and vertical scaling) but running in their Azure environment.

CloudHub (option C, including CloudHub 2.0) is MuleSoft's fully managed iPaaS, providing all these features out-of-the-box with minimal effort. However, it runs on MuleSoft-hosted infrastructure (backed by AWS), not in the customer's Azure environment.

Anypoint Runtime Fabric (RTF) is a container-based runtime plane (using Docker/Kubernetes) that delivers the same MuleSoft-hosted-like features: built-in HTTP load balancing, zero-downtime redeployments, horizontal/vertical scaling, and high availability. It is installed and runs in the customer's own infrastructure, including Microsoft Azure (e.g., on Azure Kubernetes Service - AKS or VMs). This meets the requirement of running in Azure while providing the desired features with significantly less operational effort compared to manual setups.

RTF minimizes the organization's effort because it automates orchestration, scaling, and management via MuleSoft's tools, without requiring them to build these capabilities from scratch.

Why the other options are incorrect:

B. Anypoint Platform for Pivotal Cloud Foundry
This is an older integration for deploying Mule apps on Pivotal Cloud Foundry (PCF), a PaaS platform. PCF can run on Azure, but it is not a standard or current MuleSoft runtime plane option for achieving these features natively. It requires managing PCF itself and is largely deprecated in favor of RTF or CloudHub.

C. CloudHub
Provides the features with zero effort but is MuleSoft-hosted (on AWS), not in the customer's Azure environment.

D. A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes
Hybrid typically refers to combining CloudHub (MuleSoft-hosted) with customer-hosted standalone runtimes. Standalone customer-hosted runtimes require manual configuration for load balancing, scaling, and zero-downtime, increasing effort significantly.

Reference:
MuleSoft official documentation on deployment strategies and Runtime Fabric confirms RTF supports Azure deployment with built-in features like load balancing and scaling, while preserving centralized management from the Anypoint control plane.

Which of the following sequences is correct?


A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API


B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation


C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API


D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation





B.
API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation

Explanation:

The process follows this logical order:

API Consumer requests access to API: An organization or developer (the API consumer) discovers an API in Anypoint Exchange and requests access. This usually involves obtaining client credentials (Client ID and Secret) to use the API.

API Client implements logic to call an API: The developer then incorporates the API call into their application's code (the API client). This involves programming the application to use the obtained credentials and send requests to the API's endpoint.

API routes the request to API Implementation: At runtime, the implemented API client makes a request. The API Gateway (the "API" in the sequence) intercepts this request, validates the credentials and applies policies, and then routes the traffic to the backend Mule application (the API implementation) that contains the business logic.

Why other options are incorrect:

A: This sequence is incorrect because the consumer must first request access and obtain credentials before the client can implement the logic to call the API.

C: This option swaps the roles of "Consumer" and "Client." The consumer is the entity (person/organization) requesting access, while the client is the software component making the actual programmatic call.

D: Similar to A, access must be granted before implementation can begin. Also, the roles are slightly jumbled at the end, as the API (Gateway/Proxy) routes the request to the implementation.

Version 3.0.1 of a REST API implementation represents time values in PST time using ISO 8601 hh:mm:ss format. The API implementation needs to be changed to instead represent time values in CEST time using ISO 8601 hh:mm:ss format. When following the semver.org semantic versioning specification, what version should be assigned to the updated API implementation?


A. 3.0.2


B. 4.0.0


C. 3.1.0


D. 3.0.1





B.
  4.0.0

Explanation:

The question asks which version number should be assigned when changing a time zone representation (PST to CEST) in an API implementation while maintaining the same ISO 8601 string format.

Correct Answer
Option B:
4.0.0 According to the semver.org specification (Semantic Versioning 2.0.0), a Major version increment (X.y.z) is required when you make incompatible API changes. Changing the time zone from PST to CEST is a breaking change because any existing consumer (client) of the API expects the data in PST. If the client logic performs calculations or displays information based on the assumption of PST, switching to CEST without warning will cause the client's application to provide incorrect data or fail. Since this change is not backward-compatible for the consumer, the Major version must be incremented.

Incorrect Answers

Option A: 3.0.2:
This would be a Patch version. Patches are reserved for backward-compatible bug fixes. Changing a data contract's time zone is not a simple fix; it alters the fundamental meaning of the data sent to the user.

Option C: 3.1.0:
This would be a Minor version. Minor versions are used when adding functionality in a backward-compatible manner. While you are changing the implementation, it is not "adding" a feature that keeps the old one intact; it is replacing the old behavior with a new, incompatible one.

Option D: 3.0.1:
This is the current version mentioned in the prompt. Reusing the same version number for a change in logic or contract is a violation of the immutability principle in versioning.

References
SemVer.org (Summary of Rules): "MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards compatible manner, and PATCH version when you make backwards compatible bug fixes."

MuleSoft Catalyst / API Lifecycle: When designing APIs in Anypoint Platform, any change to the API contract (RAML/OAS) or the expected data format that requires consumers to update their code is considered a Major change.

Salesforce/MuleSoft Exam Topic: This falls under Section 1: Explaining and Application of the Anypoint Platform and API Management (Versioning strategies).
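The semver rules quoted above can be expressed as a small helper. This is an illustrative sketch of the semver.org bump rules, not a library API:

```python
def bump(version: str, change: str) -> str:
    """Apply a semver.org version bump.

    change: 'major' for incompatible API changes,
            'minor' for backward-compatible features,
            'patch' for backward-compatible bug fixes.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":    # breaking change, e.g. PST -> CEST time zone
        return f"{major + 1}.0.0"
    if change == "minor":    # backward-compatible new functionality
        return f"{major}.{minor + 1}.0"
    if change == "patch":    # backward-compatible bug fix
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("3.0.1", "major"))  # → 4.0.0
```

Since switching the time zone breaks existing consumers, the change is a major bump, taking 3.0.1 to 4.0.0.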

