An Agentforce Specialist at Universal Containers is trying to set up a new Field Generation prompt template. They take the following steps:
1. Create a new Field Generation prompt template.
2. Choose Case as the object type.
3. Select the custom field AI_Analysis__c as the target field.
After creating the prompt template, the Agentforce Specialist saves, tests, and activates it.
However, when they go to a case record, the AI Analysis field does not show the sparkle (✨) icon next to the edit pencil. When the Agentforce Specialist edits the field, it behaves as a normal field.
Which critical step did the Agentforce Specialist miss?
A. They forgot to reactivate the Lightning page layout for the Case object after activating their Field Generation prompt template.
B. They forgot that the Case object is not supported for Field Generation, as Einstein Service Replies should be used instead.
C. They forgot to edit the Lightning page layout and associate the field with a prompt template.
Explanation
Let’s analyze why the sparkle icon (✨) isn’t appearing:
✅ Field Generation prompt templates work like this:
You create a prompt template and link it to:
- An object (e.g., Case)
- A specific field (e.g., AI_Analysis__c)
After you activate the prompt template, you must update the Lightning Record Page to connect:
- The field to the new prompt template.
This critical last step is:
Associating the field on the Lightning page layout to the Field Generation prompt template.
If you don’t do this:
The field remains a normal field with no ✨ icon.
The page has no knowledge that a prompt template is supposed to power that field.
The user can’t trigger AI generation for that field.
Hence, Option C is correct.
Why the other options are incorrect:
Option A (Reactivate Lightning page layout):
There’s no concept of “re-activating” the page layout.
You simply edit the layout to associate fields with prompt templates.
Option B (Case object not supported for Field Generation):
This is incorrect.
The Case object is fully supported for Field Generation prompts.
Einstein Service Replies are for chat responses, not for filling fields on a record.
Thus, the correct explanation is:
C. They forgot to edit the Lightning page layout and associate the field to a prompt template.
🔗 Reference
Salesforce Developer Docs — Field Generation Prompt Templates
Salesforce Blog — How to Enable Generative AI in Record Pages
Universal Containers implemented Agentforce for its users. One user complains that an Agent is not deleting activities from the past 7 days. What is the reason for this issue?
A. Agentforce does not have the permission to delete the user's records.
B. Agentforce Delete Record Action permission is not associated to the user.
C. Agentforce does not have a standard Delete Record action.
Explanation:
Agentforce, like other Salesforce generative AI agents, operates based on available agent actions. These actions define what the agent can and cannot do. By default, Agentforce does not include a standard Delete Record action.
A. Agentforce does not have the permission to delete the user's records
❌ Incorrect – Permissions to delete records are governed by the user's profile or permission set, and the agent executes actions as itself, not as the user. So this isn’t the root cause of the issue.
B. Agentforce Delete Record Action permission is not associated to the user
❌ Incorrect/misleading – There is no specific "Delete Record Action permission" that needs to be associated to a user. Instead, the agent must be explicitly configured with a custom delete action, as deletion is not part of the out-of-the-box actions for Agentforce.
C. Agentforce does not have a standard Delete Record action
✅ Correct – The out-of-the-box Agentforce configuration does not include a delete action, for safety and governance reasons. If deletion is required, a custom action must be created and added to the agent's configuration explicitly.
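For illustration, a custom delete action is typically backed by an invocable Apex method (or a Flow) that is then registered as an agent action. The sketch below is hypothetical, assuming the agent should remove the user's Tasks from a recent window; the class name and the wiring into Agentforce are illustrative assumptions, while the invocable-method pattern and user-mode database access are standard Apex:

```apex
// Hypothetical custom action: deletes the running user's recent Tasks.
// Registering this as an Agentforce action is a separate setup step.
public with sharing class DeleteRecentTasksAction {
    @InvocableMethod(label='Delete Recent Tasks'
                     description='Deletes Tasks created within the last N days.')
    public static List<Integer> deleteRecentTasks(List<Integer> daysBack) {
        DateTime cutoff = DateTime.now().addDays(-daysBack[0]);
        // User-mode query and DML enforce the caller's object, field,
        // and record access rather than running as system.
        List<Task> recentTasks = [
            SELECT Id FROM Task
            WHERE CreatedDate >= :cutoff
            WITH USER_MODE
        ];
        delete as user recentTasks;
        return new List<Integer>{ recentTasks.size() };
    }
}
```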
Which use case is best supported by Salesforce Einstein Copilot's capabilities?
A. Bring together a conversational interface for interacting with AI for all Salesforce users, such as developers and ecommerce retailers.
B. Enable Salesforce admin users to create and train custom large language models (LLMs) using CRM data.
C. Enable data scientists to train predictive AI models with historical CRM data using built-in machine learning capabilities.
Explanation
Let’s clarify what Einstein Copilot is designed to do:
✅ Einstein Copilot provides:
A conversational AI assistant embedded directly into Salesforce.
Natural language understanding to:
- Answer user questions.
- Retrieve CRM data.
- Take actions (e.g., create records, update fields).
- Generate content like emails, summaries, and proposals.
Accessible across:
- Sales Cloud
- Service Cloud
- Experience Cloud
- Other Salesforce apps
Its purpose is to bring AI into day-to-day workflows for all Salesforce users—from sales reps to service agents to developers—via a unified chat interface. This is precisely what’s described in Option A.
Hence, Option A is correct.
Why the other options are incorrect:
Option B (Enable Salesforce admin users to create and train custom LLMs):
Salesforce admins cannot train their own LLMs within Copilot.
Einstein Copilot uses:
- Salesforce-managed LLMs (e.g., proprietary Salesforce models)
- External LLMs (e.g., OpenAI, Anthropic) via Model Builder
But Copilot itself doesn’t let admins “train custom LLMs.” Instead, they configure prompts and actions to interact with LLMs.
Option C (Enable data scientists to train predictive AI models):
That’s the domain of Einstein Prediction Builder or Model Builder, not Einstein Copilot.
Copilot is for conversational experiences and generative AI — not predictive model building by data scientists.
Therefore, the core use case best supported by Einstein Copilot is:
A. Bring together a conversational interface for interacting with AI for all Salesforce users, such as developers and ecommerce retailers.
🔗 Reference
Salesforce Help — What is Einstein Copilot?
Salesforce Blog — Meet Einstein Copilot: Your Conversational AI Assistant for CRM
Universal Containers wants support agents to use Agentforce to ask questions about its product tutorials and product guides.
What should the Agentforce Specialist do to meet this requirement?
A. Create a prompt template for product tutorials and guides.
B. Add an Answer Questions custom field in the product object for tutorial instructions.
C. Publish product tutorials and guides as Knowledge articles.
Explanation
Context of the Question: Universal Containers (UC) wants its support agents to use Agentforce to ask questions about product tutorials and product guides. Agentforce typically references knowledge sources to provide accurate and contextual responses.
Why Knowledge Articles?
Centralized Repository: Publishing product tutorials and guides as Knowledge articles in Salesforce ensures that the information is readily available and searchable by Agentforce.
AI Integration: Salesforce’s AI solutions, including Agentforce, can often be configured to pull content directly from Salesforce Knowledge articles, giving users on-demand answers without manual data duplication.
Maintenance & Updates: Storing content in Salesforce Knowledge simplifies content updates, versioning, and user permissions.
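As a small illustration of why published articles make a reliable grounding source, they are programmatically reachable like any other record. A sketch, assuming the standard Knowledge__kav object and an English-language org (your article types and fields may differ):

```apex
// Published ("Online") Knowledge articles are queryable like any other
// object, which is what makes them dependable grounding content.
List<Knowledge__kav> guides = [
    SELECT Id, Title, UrlName
    FROM Knowledge__kav
    WHERE PublishStatus = 'Online'
      AND Language = 'en_US'
    WITH USER_MODE
    LIMIT 10
];
```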
Why Not the Other Options?
Option A (Create a Prompt Template): Creating a prompt template alone does not solve how the underlying content (tutorials, guides) is stored or accessed by Agentforce. Prompt templates shape the queries/responses but do not provide the knowledge base.
Option B (Add an Answer Questions Custom Field): A single field on the product object is insufficient for the depth of information found in tutorials and guides. It also lacks the robust search and user-friendly interface that Knowledge articles provide.
Conclusion: To ensure Agentforce can effectively retrieve and deliver accurate information about products, publishing product tutorials and guides as Knowledge articles is the recommended approach.
Salesforce Agentforce Specialist References & Documents:
Salesforce Documentation — Set Up Salesforce Knowledge: discusses how to publish articles for easy access by AI-driven assistants and support teams.
Salesforce Agentforce Specialist Study Guide: explains best practices for feeding knowledge sources to generative AI and Agentforce.
In Model Playground, which hyperparameters of an existing Salesforce-enabled foundational model can an Agentforce Specialist change?
A. Temperature, Frequency Penalty, Presence Penalty
B. Temperature, Top-k sampling, Presence Penalty
C. Temperature, Frequency Penalty, Output Tokens
Explanation
Model Playground in Salesforce is a tool that allows you to experiment with prompts against supported foundational models (LLMs) such as:
Salesforce proprietary LLMs
Connected external LLMs via Model Builder (e.g. OpenAI, Anthropic)
When testing prompts in Model Playground, you can adjust inference hyperparameters to influence the behavior of the model’s output. The supported tunable hyperparameters typically include:
✅ Temperature
Controls randomness and creativity.
Higher values (e.g. 0.8) → more varied responses.
Lower values (e.g. 0.2) → more deterministic and focused.
✅ Frequency Penalty
Reduces repetition of the same tokens.
Discourages the model from repeating words or phrases.
✅ Presence Penalty
Encourages introducing new topics or words.
Discourages sticking strictly to previously used words.
These are the hyperparameters Salesforce exposes for tuning in Model Playground. Hence, Option A is correct.
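For intuition only, here is a sketch of how these three settings typically appear in an LLM inference request. This is not the Model Playground UI or a documented Salesforce API; the endpoint, Named Credential, and parameter names follow common LLM REST conventions and are assumptions:

```apex
// Illustrative request body only; parameter names mirror common LLM APIs,
// not a specific Salesforce endpoint.
Map<String, Object> body = new Map<String, Object>{
    'prompt'            => 'Summarize this case for the customer.',
    'temperature'       => 0.2, // lower = more deterministic output
    'frequency_penalty' => 0.5, // discourage repeating the same tokens
    'presence_penalty'  => 0.3  // encourage introducing new topics/words
};
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_LLM_Endpoint/generate'); // hypothetical Named Credential
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(body));
HttpResponse res = new Http().send(req);
```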
Why the other options are incorrect:
Option B (Top-k sampling)
Salesforce’s UI for Model Playground does not expose Top-k as a configurable parameter (at least as of current releases).
Top-k is a sampling technique but not commonly surfaced in the Salesforce Playground UI.
Option C (Output Tokens)
While maximum token limits exist, the number of output tokens is typically controlled by system defaults or set indirectly via context length.
It’s not surfaced as a direct “hyperparameter” in the Playground’s hyperparameter controls.
Therefore, the correct hyperparameters you can change in Model Playground are:
A. Temperature, Frequency Penalty, Presence Penalty
🔗 Reference
Salesforce Help — Experiment with Prompt Templates in Model Playground
Salesforce Developer Docs — Hyperparameters for Prompt Templates
An Agentforce Specialist wants to include data from the response of an external service invocation (REST API callout) in a prompt template.
How should the Agentforce Specialist meet this requirement?
A. Convert the JSON to an XML merge field.
B. Use External Service Record merge fields.
C. Use “Add Prompt Instructions” flow element.
Explanation:
To include external REST API response data in a Prompt Template, the External Service Record merge fields feature is the correct approach. Here’s why:
External Service Record Merge Fields
When you configure an External Service in Salesforce (via Named Credentials + OpenAPI spec), the response data can be directly referenced in Prompt Templates using merge fields like:
{{ExternalService.MyAPIResponse.Data}}
This dynamically injects the API response into the prompt without manual JSON parsing.
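A sketch of how that merge field might sit inside a prompt template body (the service and response-path names reuse the hypothetical example above):

```
You are a support assistant. Using the shipment status returned by the
external service, summarize the delivery outlook for the customer.

Shipment status: {{ExternalService.MyAPIResponse.Data}}
```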
Why Not the Other Options?
A. Convert JSON to XML merge fields:
Unnecessary complexity. Salesforce supports native JSON parsing via External Services, eliminating the need for manual conversion.
C. "Add Prompt Instructions" flow element:
This is used to statically add instructions to a prompt (e.g., rules for AI behavior), not to inject dynamic API data.
Universal Containers (UC) plans to send one of three different emails to its customers based on the customer's lifetime value score and their market segment.
Given that UC is required to explain why an email was selected, which AI model should UC use to achieve this?
A. Predictive model and generative model
B. Generative model
C. Predictive model
Explanation
Let’s break this down:
✅ UC’s business need:
They want to choose among 3 emails for each customer.
The choice depends on:
- Lifetime value score (numeric prediction)
- Market segment (categorical attribute)
They must explain WHY a particular email was selected.
This scenario needs two capabilities working together:
1. Predictive Model
✅ A predictive model is required to:
Calculate the Customer Lifetime Value (CLV) score.
Possibly determine the likelihood of churn or purchase.
Classify customers into market segments if that’s also predicted dynamically.
This model produces structured outputs (scores, categories) that drive the logic for email selection.
2. Generative Model
✅ A generative model is required to:
Generate the explanatory text that tells the customer (or an internal user) why a specific email was chosen.
For example:
“This email was selected because the customer has a high lifetime value and belongs to the Premium Market Segment.”
Generative AI excels at natural language explanations. This satisfies UC’s requirement to explain why an email was selected.
Hence, UC needs both:
A predictive model → to calculate scores and drive the decision logic.
A generative model → to generate the explanation text.
Therefore, Option A is correct.
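A minimal sketch of how the two models divide the work, assuming the predictive model has already written a CLV score to the record (all names and thresholds here are hypothetical):

```apex
// Decision logic driven by predictive outputs; the generative model is
// then prompted to produce the customer-facing explanation.
public with sharing class EmailSelector {
    public static String selectEmailTemplate(Decimal clvScore, String segment) {
        // Thresholds and segment names are illustrative assumptions.
        if (clvScore >= 80 && segment == 'Premium') {
            return 'VIP_Offer_Email';
        }
        if (clvScore >= 50) {
            return 'Loyalty_Email';
        }
        return 'Standard_Email';
    }
}
// A prompt template grounded with clvScore, segment, and the chosen
// template name would then generate text such as:
// "This email was selected because the customer has a high lifetime
//  value and belongs to the Premium market segment."
```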
Why the other options are incorrect:
Option B (Generative model only)
A generative model alone can produce text but cannot generate numeric predictions like CLV scores.
You still need a predictive model to generate the data that drives the email selection logic.
Option C (Predictive model only)
A predictive model can produce scores or classifications but cannot generate explanations in natural language.
Thus, UC should:
A. Use a predictive model and a generative model.
🔗 Reference
Salesforce Help — Einstein Prediction Builder Overview
Salesforce Help — Prompt Builder Overview
How does Secure Data Retrieval ensure that only authorized users can access necessary Salesforce data for dynamic grounding?
A. Retrieves Salesforce data based on the "Run As" user's permissions.
B. Retrieves Salesforce data based on the user’s permissions executing the prompt.
C. Retrieves Salesforce data based on the prompt template's object permissions.
Explanation:
Secure Data Retrieval is a Salesforce generative AI grounding feature that ensures any data used in prompts (such as via record snapshots or dynamic queries) is subject to normal Salesforce security controls, particularly object- and field-level security.
Why B is Correct:
When a prompt is executed in Agentforce, Secure Data Retrieval evaluates data access based on the permissions of the user executing the prompt. This means:
If the user doesn’t have access to a field or record, the AI will not see or use that data.
It ensures data privacy, compliance, and contextual relevance.
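This enforcement is analogous to user-mode database access in Apex, where the executing user's object-, field-, and record-level access is applied automatically. A minimal sketch of the analogy (not Secure Data Retrieval itself):

```apex
// User-mode SOQL mirrors the Secure Data Retrieval principle: only data
// the executing user can see is returned for grounding.
List<Case> visibleCases = [
    SELECT Id, Subject, Description
    FROM Case
    WITH USER_MODE
    LIMIT 5
];
// In user mode, referencing a field the user cannot read raises a
// QueryException instead of silently exposing restricted data.
```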
A. Retrieves Salesforce data based on the "Run As" user's permissions
❌ Incorrect – While "Run As" can be used in flows or scheduled jobs, Secure Data Retrieval for prompts is tied to the user actively executing the prompt (e.g., the agent's context user or interacting end user), not a generic "Run As" setup.
C. Retrieves Salesforce data based on the Prompt Template’s object permissions
❌ Incorrect – Prompt templates themselves do not have object-level permissions. They define how data is used in prompts, but access is enforced per the executing user's permissions, not the template’s configuration.
Universal Containers (UC) has a mature Salesforce org with a lot of data in cases and Knowledge articles. UC is concerned that there are many legacy fields, with data that might not be applicable for Einstein AI to draft accurate email responses.
Which solution should UC use to ensure Einstein AI can draft responses from a defined data source?
A. Service AI Grounding
B. Work Summaries
C. Service Replies
Explanation:
To ensure Einstein AI drafts accurate email responses from trusted data sources (and ignores legacy fields), Universal Containers (UC) should:
Use Service AI Grounding
What it does: Restricts AI responses to specific, curated data sources, such as:
Approved Knowledge articles (filtered by relevance/date).
Selected case fields (e.g., excluding legacy fields).
Benefit: Prevents the AI from using outdated or irrelevant data in drafts.
Why Not the Other Options?
B. "Work Summaries":
Generates post-interaction summaries, not email drafts.
C. "Service Replies":
This is the output feature (drafting replies), but Service AI Grounding controls which data it uses.
Implementation Steps:
In Setup, enable Service AI Grounding.
Configure grounding to include only:
Current Knowledge articles (filter by validity dates).
Relevant case fields (e.g., Case.Description, but not legacy fields like Case.Legacy_Code__c).
Reference:
Salesforce Help - Service AI Grounding
Universal Containers (UC) wants to enable its sales reps to explore opportunities that are similar to previously won opportunities by entering the utterance, "Show me other opportunities like this one." How should UC achieve this in Einstein Copilot?
A. Use the standard Copilot action.
B. Create a custom Copilot action calling a flow.
C. Create a custom Copilot action calling an Apex class.
Explanation
UC wants to implement semantic similarity or “find similar records” functionality for Opportunities. Let’s see why Option B is correct:
Why a Custom Action is Needed
1. The utterance “Show me other opportunities like this one” implies:
Fetching opportunities based on similar attributes:
- Industry
- Deal size
- Products
- Close date range
- Win reasons
Possibly even vector similarity search if using embeddings for advanced matching.
2. There’s no standard Copilot action that automatically searches for “similar records.”
Out-of-the-box Copilot actions handle CRUD tasks, summaries, and basic lookups.
More complex logic like finding similar records requires custom logic.
✅ Therefore, a custom Copilot action is needed.
Why Call a Flow
✅ A custom Copilot action calling a Flow is the recommended pattern for:
1. Querying Salesforce data:
Using Get Records to find Opportunities matching similar criteria.
2. Handling business logic declaratively:
Compare fields like Stage, Amount, Products.
3. Returning results:
Passing record data back to the Copilot prompt workspace.
Advantages of using a Flow:
No code required.
Easy to maintain and adjust criteria.
Simple to expose as a Copilot action.
Hence, Option B is the best solution.
Why the other options are incorrect:
Option A (Use standard Copilot action):
No standard action provides “find similar records” logic.
A custom action is necessary for this specific use case.
Option C (Call an Apex class):
Apex could implement this logic.
However, Salesforce best practices recommend using Flows first wherever possible.
Apex should be used only if:
- The logic is too complex for Flow.
- Performance requirements demand custom code.
For most similarity searches, a Flow is sufficient and preferred.
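If the matching logic ever did outgrow Flow, an Apex version might look like the sketch below. The similarity criteria and thresholds are illustrative assumptions, and a Flow Get Records element could express the same filters declaratively:

```apex
public with sharing class SimilarOpportunityFinder {
    // Hypothetical "similar records" query: same account industry, won,
    // and within an illustrative +/-20% amount band of the source deal.
    // Assumes the source record was queried with Account.Industry populated.
    public static List<Opportunity> findSimilar(Opportunity source) {
        Decimal low  = source.Amount * 0.8;
        Decimal high = source.Amount * 1.2;
        return [
            SELECT Id, Name, Amount, Account.Industry
            FROM Opportunity
            WHERE IsWon = true
              AND Id != :source.Id
              AND Account.Industry = :source.Account.Industry
              AND Amount >= :low AND Amount <= :high
            WITH USER_MODE
            LIMIT 10
        ];
    }
}
```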
Therefore, the correct approach for UC is:
B. Create a custom Copilot action calling a flow.
🔗 Reference
Salesforce Help — Build Custom Copilot Actions
Universal Containers needs a tool that can analyze voice and video call records to provide insights on competitor mentions, coaching opportunities, and other key information. The goal is to enhance the team's performance by identifying areas for improvement and competitive intelligence.
Which feature provides insights about competitor mentions and coaching opportunities?
A. Call Summaries
B. Einstein Sales Insights
C. Call Explorer
Explanation
UC wants:
Analysis of voice and video call records.
Insights into:
- Competitor mentions
- Coaching opportunities
- Other key call data
The goal is to improve sales performance and competitive awareness.
✅ Call Explorer is the correct feature for this use case because:
It’s part of Einstein Conversation Insights (ECI).
It allows users to:
1. Search and filter call recordings by specific keywords (like competitor names).
2. View metrics on how often certain terms (e.g. competitors, pricing discussions) are mentioned.
3. Identify calls that contain coaching moments, like objection handling or negotiation tactics.
4. Drill into calls for insights and analysis.
Call Explorer specifically surfaces:
Mentions of competitors, products, pricing, or custom keywords.
Trends across multiple calls for competitive intelligence.
Visual graphs showing how often topics occur across conversations.
Easy access to playback and transcript search for coaching purposes.
Hence, C. Call Explorer is the right answer.
Why the other options are incorrect:
A. Call Summaries
This feature provides a concise written summary of an individual call.
It does not provide:
- Competitive analysis across multiple calls.
- Trend analysis for coaching insights.
B. Einstein Sales Insights
This refers to predictive insights like forecasting, scoring, pipeline health.
It’s unrelated to call recording analysis or conversation intelligence.
Thus, for competitor mentions and coaching insights derived from voice and video calls, UC should use: C. Call Explorer
🔗 Reference
Salesforce Help — Einstein Conversation Insights Call Explorer
Salesforce Release Notes — Einstein Conversation Insights Features
A support team handles a high volume of chat interactions and needs a solution to provide quick, relevant responses to customer inquiries.
Responses must be grounded in the organization's knowledge base to maintain consistency and accuracy. Which feature in Einstein for Service should the support team use?
A. Einstein Service Replies
B. Einstein Reply Recommendations
C. Einstein Knowledge Recommendations
Explanation
The support team should use Einstein Reply Recommendations to provide quick, relevant responses to customer inquiries that are grounded in the organization’s knowledge base. This feature leverages AI to recommend accurate and consistent replies based on historical interactions and the knowledge stored in the system, ensuring that responses are aligned with organizational standards.
Einstein Service Replies (Option A) is focused on generating replies but doesn't have the same emphasis on grounding responses in the knowledge base.
Einstein Knowledge Recommendations (Option C) suggests knowledge articles to agents, which is more about assisting the agent in finding relevant articles than providing automated or AI-generated responses to customers.