Salesforce-MuleSoft-Platform-Architect Practice Test Questions

152 Questions


A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?


A. Use a CloudHub autoscaling policy to add CloudHub workers


B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker


C. Increase the size of the CloudHub worker(s)


D. Increase the number of CloudHub workers





C.
  Increase the size of the CloudHub worker(s)

Explanation

The issue is an "out-of-file-space" error on a single CloudHub worker due to the JDBC driver creating temporary disk files. Increasing the worker size (e.g., from 0.1 vCores to 1 vCore) provides more disk space, directly addressing the resource constraint.

Why not A or D?
Adding more workers (horizontal scaling) doesn’t solve the disk space issue, as each worker has the same limited disk capacity.
Why not B?
CloudHub autoscaling policies trigger on CPU or memory metrics, not disk usage, so an autoscaling policy would not respond to the out-of-file-space condition.

Reference:
MuleSoft Documentation on CloudHub worker sizing.

What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?


A. Redis distributed cache


B. java.util.WeakHashMap


C. Persistent Object Store


D. File-based storage





C.
  Persistent Object Store

Explanation:

In MuleSoft’s Anypoint Platform, the Persistent Object Store is the most performant and reliable out-of-the-box solution for tracking transaction state in asynchronous, long-running processes — especially when deployed across multiple CloudHub workers.

Here’s why it stands out:
🧠 Persistence across restarts and redeployments: Unlike in-memory solutions, the Persistent Object Store retains data even if the app crashes or restarts.
🌐 Worker-safe: It’s designed to work across multiple CloudHub workers, ensuring consistent state management in distributed environments.
⚙️ Optimized for Mule runtime: It’s tightly integrated with Mule’s architecture and supports TTL (time-to-live), automatic cleanup, and key-based retrieval.
📦 No external setup required: Unlike Redis or custom file-based solutions, it’s available out-of-the-box with minimal configuration.

❌ Why the Other Options Are Less Suitable:
A. Redis distributed cache
Requires external setup and isn’t native to Anypoint Platform. Adds complexity and latency.
B. java.util.WeakHashMap
In-memory only and local to a single worker; entries can be garbage-collected at any time, and all data is lost on restart.
D. File-based storage
Not scalable or reliable in CloudHub. Disk space is limited and not shared across workers.

🔗 Reference:
MuleSoft Docs – Object Store v2
MuleSoft Certified Platform Architect – Topic 2 Quiz

An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?


A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state


B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state


C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state


D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state





D.
  When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state

Explanation:

The CloudHub Object Store is designed to provide persistence for data that needs to be shared across multiple workers within a single CloudHub application deployment.

D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state:
This is the ideal use case for the CloudHub Object Store. In CloudHub, workers within a single application are not clustered in the traditional sense, so they don't share in-memory cache. By using the persistent Object Store (Object Store V2), any worker that updates the "Quote of the Day" cache will make that updated value immediately available to all other workers in the same application deployment, ensuring a consistent cache state.

A. When there are three CloudHub deployments... to three separate CloudHub regions...:
The CloudHub Object Store is regional. This means an application's object store is only available within the region where the application is deployed. Sharing the cache state across different regions would require a different, more complex mechanism, possibly involving the Object Store REST API or an external database.

B. When there are two CloudHub deployments... by two Anypoint Platform business groups...:
Object stores are isolated per application deployment. Deployments in different business groups, even if in the same region, cannot share an object store using the standard connector. They would require the use of the Object Store REST API with proper permissions for cross-business group access.

C. When there is one deployment... to CloudHub and another deployment to a customer-hosted Mule runtime...:
A CloudHub deployment cannot directly share its persistent Object Store with a customer-hosted (on-premise) Mule runtime using the connector. The on-premise runtime would need to use the Object Store REST API, or a different shared cache solution would be required entirely.
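The shared-cache semantics described above can be sketched as a simple key-value store with per-entry TTL. This is a conceptual Python model only; the `SimpleObjectStore` class and its `store`/`retrieve` methods are illustrative and are not the actual Object Store v2 connector API.

```python
import time

class SimpleObjectStore:
    """Conceptual model of a persistent, worker-shared key-value store
    with per-entry TTL (illustrative only, not the real Object Store v2 API)."""

    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp or None)

    def store(self, key, value, ttl_seconds=None):
        expiry = time.time() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expiry)

    def retrieve(self, key):
        value, expiry = self._data.get(key, (None, None))
        if expiry is not None and time.time() > expiry:
            del self._data[key]  # expired entries are cleaned up
            return None
        return value

# Any worker that updates the cache makes the value visible to all workers
# of the same application deployment.
cache = SimpleObjectStore()
cache.store("quote-of-the-day", "Stay hungry, stay foolish.", ttl_seconds=86400)
print(cache.retrieve("quote-of-the-day"))  # Stay hungry, stay foolish.
```

In the real Object Store v2, the persistence layer is regional and scoped to a single application deployment, which is exactly why option D works and options A, B, and C do not.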

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios. What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?


A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry


B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers


C. Create API Notebooks and include them in the relevant Anypoint Exchange entries


D. Make relevant APIs discoverable via an Anypoint Exchange entry





C.
  Create API Notebooks and include them in the relevant Anypoint Exchange entries

Explanation

In Anypoint Exchange you can add API Notebooks that mix prose with executable JavaScript code blocks. Consumers can tweak the code and click Play to invoke real endpoints—ideal for scenario-driven, multi-API walkthroughs.

Eliminate others:
A. Mocking services help try endpoints before implementation, but they don’t provide runnable client code tutorials across scenarios.
B. API Console/Exchange docs are great for spec and try-it, but not for executable code notebooks.
D. Discoverability alone doesn’t deliver code-centric, runnable documentation. (You still need Notebooks.)

References:
Documenting an Asset Using API Notebook (create/run code blocks in Exchange).
Documenting an API (Exchange supports API Notebooks for interactive experimentation).
Exchange portal examples showing runnable API Notebook pages.
MuleSoft Developer Portal overview mentioning runnable code samples in API Notebook.

In an organization, the InfoSec team is investigating Anypoint Platform related data traffic. From where does most of the data available to Anypoint Platform for monitoring and alerting originate?


A. From the Mule runtime or the API implementation, depending on the deployment model


B. From various components of Anypoint Platform, such as the Shared Load Balancer, VPC, and Mule runtimes


C. From the Mule runtime or the API Manager, depending on the type of data


D. From the Mule runtime irrespective of the deployment model





D.
  From the Mule runtime irrespective of the deployment model

Explanation

Most of the data used by Anypoint Platform for monitoring and alerting — including metrics, logs, and event traces — originates from the Mule runtime itself, regardless of whether the application is deployed to:
CloudHub
Runtime Fabric
On-premises servers
Hybrid environments

The Mule runtime is responsible for:
📊 Emitting performance metrics (CPU, memory, throughput)
📁 Generating logs and error traces
📡 Sending operational data to Anypoint Monitoring, Runtime Manager, and API Manager

This design ensures consistent observability across deployment models. Even when APIs are managed via API Manager or routed through a Shared Load Balancer, the core telemetry still comes from the Mule runtime.

❌ Why the Other Options Are Incorrect:
A. Suggests a conditional origin based on deployment model, which is misleading; the Mule runtime is always the source.
B. While other components (e.g., VPC, Load Balancer) may contribute metadata, they are not the primary source of monitoring data.
C. API Manager provides policy enforcement and analytics, but runtime-level metrics still come from the Mule runtime.

🔗 Reference:
MuleSoft Docs – Anypoint Monitoring Overview
MuleSoft Certified Platform Architect-Level 1 Practice

A Mule application exposes an HTTPS endpoint and is deployed to three CloudHub workers that do not use static IP addresses. The Mule application expects a high volume of client requests in short time periods. What is the most cost-effective infrastructure component that should be used to serve the high volume of client requests?


A. A customer-hosted load balancer


B. The CloudHub shared load balancer


C. An API proxy


D. Runtime Manager autoscaling





B.
  The CloudHub shared load balancer

Explanation

Cost-effectiveness:
The CloudHub shared load balancer is included with your CloudHub subscription at no additional cost for basic functionality. Other options, like a Dedicated Load Balancer or customer-hosted solution, would incur significant extra costs.
Built-in load balancing:
When you deploy an application to more than one CloudHub worker, the shared load balancer automatically distributes incoming traffic using a round-robin algorithm. Since the application is already deployed to three workers, this built-in capability is the most direct and economical way to handle high request volumes.
HTTPS support:
The shared load balancer supports HTTPS endpoints. It includes a shared SSL certificate, so no custom certificate is required.
No static IP dependency:
The shared load balancer uses DNS to route traffic to the workers and does not require static IP addresses, which aligns with the application's deployment configuration.
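For illustration, the round-robin distribution performed by the shared load balancer can be sketched in Python (the worker names are made up for this example):

```python
from itertools import cycle

# Hypothetical worker names; the shared load balancer rotates
# incoming requests across the application's workers in order.
workers = cycle(["worker-1", "worker-2", "worker-3"])

def route():
    """Return the worker that receives the next request."""
    return next(workers)

print([route() for _ in range(6)])
# ['worker-1', 'worker-2', 'worker-3', 'worker-1', 'worker-2', 'worker-3']
```

Each worker receives every third request, so a burst of traffic is spread evenly across the three workers with no extra infrastructure.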

Why the other options are incorrect
A. A customer-hosted load balancer:
This would be significantly more expensive due to infrastructure, setup, and maintenance costs. The lack of static IPs for the CloudHub workers also makes a custom-hosted load balancer challenging to configure.
C. An API proxy:
While an API proxy can provide caching, security, and traffic management, it is primarily a component managed within API Manager for governance, not a high-volume load-balancing solution by itself. It also typically requires a load balancer in front of it.
D. Runtime Manager autoscaling:
Autoscaling is for dynamically scaling the number of workers up or down based on load. While it's a good tool for managing variable loads, it is not a direct load-balancing component and has additional licensing requirements. Since the application is already on three workers, the immediate need is for an efficient, cost-effective way to distribute the high volume of requests, which is the function of the shared load balancer.

What best explains the use of auto-discovery in API implementations?


A. It makes API Manager aware of API implementations and hence enables it to enforce policies


B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform


C. It enables Anypoint Exchange to discover assets and makes them available for reuse


D. It enables Anypoint Analytics to gain insight into the usage of APIs





A.
  It makes API Manager aware of API implementations and hence enables it to enforce policies

Explanation:

In the implementation you add the API’s auto-discovery configuration (with the API ID). When the app starts, the Mule runtime registers with API Manager, so the platform can push/enforce policies (e.g., rate limiting, OAuth, CORS), control access via contracts, and collect usage telemetry. The essence/purpose is to let API Manager manage the live implementation.

Eliminate others:
B. Studio discovering platform APIs — not what auto-discovery does.
C. Exchange asset discovery — Exchange publishing is separate; auto-discovery doesn’t publish or “make reusable” assets.
D. Analytics insight — usage data collection happens as a consequence of being managed in API Manager, but analytics alone is not the purpose; policy/governance is.

When must an API implementation be deployed to an Anypoint VPC?


A. When the API implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer-managed AWS instance


B. When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access


C. When the API implementation must be deployed to a production AWS VPC using the Mule Maven plugin


D. When the API Implementation must write to a persistent Object Store





B.
  When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access

Explanation:

An API implementation must be deployed to an Anypoint Virtual Private Cloud (VPC) when it needs to be accessible within a subnet of a restricted customer-hosted network that does not allow public access. Anypoint VPC provides a private, isolated network environment in CloudHub, enabling secure connectivity to customer-hosted networks (e.g., via VPN or Transit Gateway) without exposing the API publicly. This is critical for scenarios where the API must operate within a restricted network, such as for internal systems or sensitive data.

Why not A?
Invoking publicly exposed services outside CloudHub doesn’t require an Anypoint VPC, as Mule applications can make outbound calls over the public internet without a VPC.
Why not C?
Deploying to a production AWS VPC using the Mule Maven Plugin is not a requirement for Anypoint VPC; it refers to a deployment method, not a network necessity.
Why not D?
Writing to a persistent Object Store is a CloudHub feature available regardless of VPC usage and doesn’t mandate a VPC.

Reference:
MuleSoft Documentation on Anypoint VPC and CloudHub Networking Guide.

When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because that downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?


A. An SLA for the upstream API CANNOT be provided


B. The invocation of the downstream API will run to completion without timing out


C. Each modern API must be easy to consume, so it should avoid complex authentication mechanisms such as SAML or JWT


D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes





A.
  An SLA for the upstream API CANNOT be provided

Explanation

Correct Answer: An SLA for the upstream API CANNOT be provided.

*****************************************

>> First things first, the default HTTP response timeout for the HTTP connector is 10000 ms (10 seconds), not a load-dependent value under 1000 ms.

>> Mule runtime does NOT apply any such "load-dependent" timeouts. There is no such behavior currently in Mule.

>> As there is a default 10000 ms timeout on the HTTP connector, we CANNOT always guarantee that the invocation of the downstream API will run to completion without timing out, given its unreliable response times. If the response time exceeds 10 seconds, the request may time out.

The main impact of this is that a proper SLA for the upstream API CANNOT be provided.
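The reasoning can be sketched numerically: the upstream API's worst-case latency is bounded only if the downstream call is bounded. A minimal Python sketch, with illustrative numbers (the 50 ms own-processing figure is an assumption):

```python
# Sketch: an upstream SLA requires a bounded worst case on every
# downstream dependency. Numbers here are illustrative assumptions.
def upstream_worst_case_ms(downstream_timeout_ms, own_processing_ms=50):
    if downstream_timeout_ms is None:
        return None  # unbounded downstream call -> no upstream SLA possible
    return downstream_timeout_ms + own_processing_ms

print(upstream_worst_case_ms(10_000))  # 10050, with the 10 s HTTP connector default
print(upstream_worst_case_ms(None))   # None
```

With no reliable bound on the downstream response time, there is no number the upstream API can promise, which is precisely why answer A is correct.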

Reference: https://docs.mulesoft.com/http-connector/1.5/http-documentation#parameters-3

An organization wants MuleSoft-hosted runtime plane features (such as HTTP load balancing, zero downtime, and horizontal and vertical scaling) in its Azure environment. What runtime plane minimizes the organization's effort to achieve these features?


A. Anypoint Runtime Fabric


B. Anypoint Platform for Pivotal Cloud Foundry


C. CloudHub


D. A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes





A.
  Anypoint Runtime Fabric

Explanation

Correct Answer: Anypoint Runtime Fabric

*****************************************

>> When a customer already has an Azure environment, a hybrid model with some Mule runtimes hosted on Azure and some hosted by MuleSoft is not an ideal approach; it adds complexity for no benefit.

>> CloudHub is a MuleSoft-hosted runtime plane that runs on AWS. It cannot be pointed at a customer's Azure environment.

>> Anypoint Platform for Pivotal Cloud Foundry is specifically for infrastructure provided by Pivotal Cloud Foundry.

>> Anypoint Runtime Fabric is the right answer, as it is a container service that automates the deployment and orchestration of Mule applications and API gateways. Runtime Fabric runs within customer-managed infrastructure on AWS, Azure, virtual machines (VMs), and bare-metal servers.

Some of the capabilities of Anypoint Runtime Fabric include:

- Isolation between applications by running a separate Mule runtime per application.

- Ability to run multiple versions of Mule runtime on the same set of resources.

- Scaling applications across multiple replicas.

- Automated application fail-over.

- Application management with Anypoint Runtime Manager.

Reference: https://docs.mulesoft.com/runtime-fabric/1.7/

Which of the following sequences is correct?


A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API


B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation


C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API


D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation





B.
  API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation

Explanation

Correct Answer:

API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation

*****************************************

>> The API consumer does not implement any logic to invoke APIs; it is just a role. So, the option stating "API Consumer implements logic to call an API" is INVALID.

>> The API implementation does not route any requests; it is the final piece of logic where the functionality of target systems is exposed. The requests must be routed to the API implementation by some other entity. So, the options stating "API Implementation routes the request to >> API" are INVALID.

>> One option contains valid statements but in the wrong order: "API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation". Here, the individual statements are VALID but the sequence is WRONG.

>> The right option and sequence is the one where the API consumer first requests access to the API on Anypoint Exchange and obtains client credentials. The API client then implements logic to call the API using those client credentials, and the requests are routed to the API implementation via the API, which is managed by API Manager.

Version 3.0.1 of a REST API implementation represents time values in PST time using ISO 8601 hh:mm:ss format. The API implementation needs to be changed to instead represent time values in CEST time using ISO 8601 hh:mm:ss format. When following the semver.org semantic versioning specification, what version should be assigned to the updated API implementation?


A. 3.0.2


B. 4.0.0


C. 3.1.0


D. 3.0.1





B.
  4.0.0

Explanation

Correct Answer: 4.0.0

*****************************************

As per semver.org semantic versioning specification:

Given a version number MAJOR.MINOR.PATCH, increment the:

- MAJOR version when you make incompatible API changes.

- MINOR version when you add functionality in a backwards compatible manner.

- PATCH version when you make backwards compatible bug fixes.
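These bump rules can be sketched as a small helper (`bump` is a hypothetical function, for illustration only):

```python
def bump(version, change):
    """Apply a semver bump: change is 'major', 'minor', or 'patch'."""
    major, minor, patch = map(int, version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"            # incompatible API change
    if change == "minor":
        return f"{major}.{minor + 1}.0"      # backwards-compatible feature
    return f"{major}.{minor}.{patch + 1}"    # backwards-compatible bug fix

print(bump("3.0.1", "major"))  # 4.0.0 -- the time-zone change is incompatible
```

Note that a major bump resets MINOR and PATCH to zero, which is why the answer is 4.0.0 rather than 4.0.1.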

As per the scenario given in the question, the API implementation is completely changing its behavior. Although the time format is still hh:mm:ss and there is no schema change with respect to format, the API will start functioning differently after this change, as the returned times will be completely different.

Example: Before the change, a time is returned as 09:00:00, representing Pacific time. After the change, the same instant will be returned as 18:00:00, because Central European Summer Time (UTC+2) is 9 hours ahead of Pacific Daylight Time (UTC-7).

>> This may lead to uncertain behavior in API clients, depending on how they handle the times in the API response. All API clients need to be informed that the API functionality is going to change and will return times in CEST. So, this is considered a MAJOR change, and the version of the API for this change would be 4.0.0.
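The 09:00:00 to 18:00:00 example can be verified with Python's zoneinfo module (using an arbitrary summer date, when Pacific time observes PDT):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# 09:00 Pacific time on a summer date, re-rendered in CEST (Europe/Berlin).
pacific = datetime(2024, 7, 1, 9, 0, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
cest = pacific.astimezone(ZoneInfo("Europe/Berlin"))
print(cest.strftime("%H:%M:%S"))  # 18:00:00 -- same instant, different wall time
```

The wire format (hh:mm:ss) is unchanged, but every value a client receives now means something different, which is what makes the change incompatible.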

