C_AIG_2412 Practice Test Questions

63 Questions


How does the AI API support SAP AI scenarios? Note: There are 2 correct answers to this question.


A. By integrating AI services into business applications


B. By providing a unified framework for operating AI services


C. By integrating AI models into third-party platforms like AWS


D. By managing Kubernetes clusters automatically





A.
  By integrating AI services into business applications

B.
  By providing a unified framework for operating AI services

Explanation

Why the correct answers are right:

A. By integrating AI services into business applications
Correct. The SAP AI API (part of SAP AI Core and SAP AI Launchpad) is specifically designed to allow developers to integrate generative AI capabilities and AI services directly into SAP business applications, extensions, and custom solutions built on SAP Business Technology Platform (BTP). This enables embedding of LLMs and other AI models into business processes.

B. By providing a unified framework for operating AI services
Correct. The AI API provides a standardized, unified interface for managing the entire lifecycle of AI scenarios – including registering artifacts, creating configurations, executing workflows, deploying models, and monitoring inferences. It abstracts the underlying runtimes and offers a consistent way to operate AI services across different backends.

Why the incorrect answers are wrong:

C. By integrating AI models into third-party platforms like AWS
Incorrect. The SAP AI API does not push or integrate SAP-managed AI models into external third-party platforms (e.g., AWS Bedrock, Azure OpenAI). Instead, it allows SAP AI Core / Generative AI Hub to consume and use models hosted on those third-party hyperscalers within the SAP ecosystem. The integration flow is inbound (external models → SAP), not outbound.

D. By managing Kubernetes clusters automatically
Incorrect. Kubernetes cluster management (scaling, provisioning, node management, etc.) is handled automatically by the underlying infrastructure of SAP AI Core (which runs on managed Kubernetes with components like Argo Workflows and KServe). This is transparent to the developer and is not a responsibility or function of the AI API itself. The AI API operates at a higher abstraction level focused on AI workload lifecycle management.

Official References:

SAP AI Core Service Guide – AI API Overview: https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/ai-api

Generative AI Hub Documentation: https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/generative-ai-hub-in-sap-ai-core

SAP Learning Journey – Solving Business Problems using SAP's Generative AI Hub: https://learning.sap.com/learning-journeys/solving-business-problems-using-sap-s-generative-ai-hub

What is a part of LLM context optimization?


A. Reducing the model's size to improve efficiency


B. Adjusting the model's output format and style


C. Enhancing the computational speed of the model


D. Providing the model with domain-specific knowledge needed to solve a problem





D.
  Providing the model with domain-specific knowledge needed to solve a problem

Explanation:

Why it’s correct:

LLM context optimization is about feeding the model relevant information or context so it can generate accurate and useful responses. This often involves providing domain-specific knowledge, examples, or situational data that the model uses to reason correctly without retraining. It ensures outputs are precise and aligned with the problem at hand.
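As a minimal illustration of the idea (the policy text and variable names here are invented for this sketch), context optimization amounts to supplying domain-specific knowledge inside the prompt, rather than changing the model itself:

```python
# Sketch: "context optimization" = supplying domain knowledge in the prompt,
# not modifying or retraining the model.
domain_context = (
    "Company policy: refunds are possible within 14 days of delivery; "
    "enterprise customers get 30 days."
)

question = "Can a business customer return goods after 3 weeks?"

# Without the context the model must guess; with it, the answer is grounded.
prompt = (
    "Use only the following background knowledge to answer.\n"
    f"Background: {domain_context}\n"
    f"Question: {question}"
)
print(prompt)
```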

Why other options are wrong:

A. Reducing the model's size to improve efficiency
→ While shrinking a model (like pruning or quantization) can make it faster or less resource-intensive, it does not improve the model’s understanding of the task or the context it uses. This is purely model-level efficiency optimization.

B. Adjusting the model's output format and style
→ Changing the style, tone, or structure of outputs (e.g., making answers more formal, concise, or structured) is output-level tuning, not context optimization. The model still generates responses based on whatever context it already has.

C. Enhancing the computational speed of the model
→ Improving inference speed through hardware acceleration or software optimization is performance engineering, unrelated to providing better context or improving response relevance.

Reference:

SAP Learning Hub: Generative AI with SAP – LLM Context Optimization – see sections on supplying domain-specific context for LLMs.

What is the goal of prompt engineering?


A. To replace human decision-making with automated processes


B. To craft inputs that guide AI systems in generating desired outputs


C. To optimize hardware performance for AI computations


D. To develop new neural network architectures for AI models





B.
  To craft inputs that guide AI systems in generating desired outputs

Explanation

Why Option B is Correct:

Prompt engineering is the core skill for effectively using Large Language Models (LLMs) and AI assistants. The entire goal is to carefully design, structure, and refine the text input (the "prompt") given to the AI system to steer it toward a more accurate, relevant, and useful response.

This includes:

Instruction Tuning: Giving clear and specific instructions.
Providing Context: Adding background information for the AI to reference.
Formatting Requests: Asking for outputs in a specific style or structure (e.g., a table, a summary, code).
Using Examples (Few-Shot Learning): Including examples in the prompt to demonstrate the desired task.

In the context of SAP Generative AI Hub, prompt engineering is a fundamental practice for customizing the interaction with foundational models to suit specific business use cases, such as generating product descriptions, summarizing customer feedback, or extracting data from documents.

Why the Other Options are Incorrect:

A. To replace human decision-making with automated processes:
This is incorrect. Prompt engineering is a collaborative tool that enhances human capabilities. The goal is to get better outputs from the AI to inform or support human decisions, not to replace the human. The human remains in control, crafting the prompt and evaluating the result.

C. To optimize hardware performance for AI computations:
This describes a different technical field, such as hardware engineering, systems optimization, or MLOps. Prompt engineering operates at the software interaction layer and does not involve hardware tuning.

D. To develop new neural network architectures for AI models:
This is the goal of AI researchers and machine learning engineers. It involves creating or modifying the underlying model structure (like GPT or BERT), which is a highly specialized, low-level task. Prompt engineering works with existing, pre-trained models to use them more effectively without changing their architecture.

🔗 Official SAP Reference

For authoritative information that aligns with this definition, you can refer to the official SAP documentation and learning resources:

SAP Help Portal - Generative AI Hub: The documentation discusses how to work with prompts in the context of the AI Launchpad and how to "customize interactions with foundational models," which is the practical application of prompt engineering. You can explore sections on creating prompts and scenarios.

SAP Learning Journey for C_AIG_2412: The official preparation materials for your exam emphasize the importance of "prompt engineering techniques" as a key skill for SAP Generative AI Developers.

What can be done once the training of a machine learning model has been completed in SAP AI Core? Note: There are 2 correct answers to this question.


A. The model can be deployed in SAP HANA.


B. The model's accuracy can be optimized directly in SAP HANA.


C. The model can be deployed for inferencing.


D. The model can be registered in the hyperscaler object store.





C.
  The model can be deployed for inferencing.

D.
  The model can be registered in the hyperscaler object store.

Explanation:

Once a model training execution completes, SAP AI Core produces an Output Artifact. This artifact is persisted in your connected Object Store (hyperscaler) and acts as the input for a Deployment, which makes the model available for real-time inference.

Why Option C is correct:

After training, the model is essentially a static file (artifact). To make it "live," you create a Deployment. This creates a running instance (pod) in the AI Core infrastructure that exposes an API endpoint for applications to send data for predictions (inferencing).

Why Option D is correct:

SAP AI Core is built on a "bring your own storage" principle. It does not store the trained weights or binaries on its own local disk; instead, it writes the result back to your registered Hyperscaler Object Store (like AWS S3 or Azure Blob). It then "registers" this location in its internal metadata so you can reference it in later steps.

Why Option A is incorrect:

While SAP HANA can consume the results of an AI Core model via API calls, the model itself is not "deployed" into the HANA database. SAP AI Core models run in a containerized environment (Kubernetes-based), not inside the HANA engine.

Why Option B is incorrect:

Optimization of a model's accuracy (like fine-tuning or hyperparameter adjustment) is a function of the Training Pipeline within SAP AI Core. SAP HANA is used for data storage or vector search, not for the direct algorithmic optimization of a model's internal weights after training.

Official SAP References

SAP Help Portal: Artifacts in SAP AI Core - Describes how models are registered as artifacts.

SAP Help Portal: Inferencing (Deploying Models) - Details the process of using a trained model for predictions.

SAP Help Portal: Register an Object Store Secret - Explains how AI Core connects to hyperscalers to store and retrieve models.

You want to assign urgency and sentiment categories to a large number of customer emails. You want to get a valid JSON string output for creating custom applications. You decide to develop a prompt for this using the generative AI hub.
What is the main purpose of the following code in this context?
prompt_test = """Your task is to extract and categorize messages. Here are some examples:
{{?technique_examples}}
Use the examples when extracting and categorizing the following message: {{?input}}
Extract and return a json with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}
Your complete message should be a valid json string that can be read directly and only contains the keys mentioned in the list above."""

import random
random.seed(42)
k = 3
examples = random.sample(dev_set, k)

example_template = """{example_input}
{example_output}"""
examples = '\n---\n'.join([example_template.format(
    example_input=example["message"],
    example_output=json.dumps(example[...]))  # key truncated in the source
    for example in examples])

f_test = partial(send_request, prompt=prompt_test,
                 technique_examples=examples, **option_lists)
response = f_test(input=mail["message"])


A. Generate random examples for language model training


B. Evaluate the performance of a language model using few-shot learning


C. Train a language model from scratch


D. Preprocess a dataset for machine learning





B.
  Evaluate the performance of a language model using few-shot learning

Explanation

Why the correct answer is right:

B. Evaluate the performance of a language model using few-shot learning
Correct. The code implements few-shot prompting by randomly selecting k=3 examples from a dev_set, inserting them into the prompt, and testing the model's response on a new input (customer email). This is a standard technique to evaluate generative AI performance without fine-tuning.

Why the incorrect answers are wrong:

A. Generate random examples for language model training
Incorrect. Examples are sampled from an existing dev_set, not generated. They are used for prompting/inference, not for training the model.

C. Train a language model from scratch
Incorrect. No training occurs; the code only sends inference requests to a pre-trained LLM via the Generative AI Hub.

D. Preprocess a dataset for machine learning
Incorrect. The code focuses on prompt construction and model invocation for categorization, not data cleaning or transformation.
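The sampling-and-prompting flow from the snippet can be sketched end to end with stand-ins for the pieces it assumes; the `dev_set` contents and the `send_request` stub below are invented for illustration:

```python
import json
import random
from functools import partial

# Stand-in labelled dev set (in the snippet this comes from prepared data).
dev_set = [
    {"message": "Server is down!", "answer": {"urgency": "high", "sentiment": "negative"}},
    {"message": "Thanks for the quick fix.", "answer": {"urgency": "low", "sentiment": "positive"}},
    {"message": "Please update my invoice address.", "answer": {"urgency": "medium", "sentiment": "neutral"}},
    {"message": "Still waiting for a reply...", "answer": {"urgency": "high", "sentiment": "negative"}},
]

def send_request(prompt, technique_examples, input, **options):
    # Stand-in for the Generative AI Hub call: just fill the template
    # so the assembled few-shot prompt is visible.
    return (prompt
            .replace("{{?technique_examples}}", technique_examples)
            .replace("{{?input}}", input))

prompt_test = """Here are some examples:
{{?technique_examples}}
Categorize the following message: {{?input}}"""

random.seed(42)
k = 3
sampled = random.sample(dev_set, k)  # few-shot: k examples, no training
examples = "\n---\n".join(
    "{m}\n{a}".format(m=e["message"], a=json.dumps(e["answer"])) for e in sampled
)

f_test = partial(send_request, prompt=prompt_test, technique_examples=examples)
response = f_test(input="My order arrived damaged, I need help now.")
print(response)
```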

Official References:

Generative AI Hub – Prompt Engineering: https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/prompt-engineering-in-generative-ai-hub

Learning Journey – Solving Business Problems using SAP's Generative AI Hub: https://learning.sap.com/learning-journeys/solving-business-problems-using-sap-s-generative-ai-hub

You want to extract useful information from customer emails to augment existing applications in your company. How can you use generative-ai-hub-sdk in this context?


A. Generate a new SAP application based on the mail data.


B. Generate JSON strings based on extracted information.


C. Generate random email content and send them to customers.


D. Train custom models based on the mail data.





B.
  Generate JSON strings based on extracted information.

Explanation:

Why it’s correct:

The generative-ai-hub-sdk allows you to process unstructured text, such as customer emails, and extract structured information. In this scenario, the SDK is used to generate valid JSON strings containing relevant data (like sentiment, urgency, or categories) that can be directly integrated into existing applications for automation or analytics.
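A downstream application would then parse and validate the JSON the model returns before using it. A small sketch (the expected keys and the sample response string are assumptions for illustration):

```python
import json

REQUIRED_KEYS = {"urgency", "sentiment", "categories"}

def parse_extraction(raw: str) -> dict:
    """Parse a model response and verify it is usable structured output."""
    data = json.loads(raw)  # raises ValueError if the model returned invalid JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model response is missing keys: {sorted(missing)}")
    return data

# Example model response (invented for this sketch).
raw_response = '{"urgency": "high", "sentiment": "negative", "categories": ["billing"]}'
record = parse_extraction(raw_response)
print(record["urgency"])
```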

Why other options are wrong:

A. Generate a new SAP application based on the mail data
→ While customer data is valuable, the SDK cannot create applications. Its function is to extract and structure information, not develop full SAP applications automatically.

C. Generate random email content and send them to customers
→ The SDK is focused on information extraction and structured output, not generating arbitrary emails for outreach. Random content generation could introduce errors or irrelevant information, which is not the goal here.

D. Train custom models based on the mail data
→ The SDK works with pre-trained models for inference and prompt-based generation. It does not perform model training or fine-tuning on new datasets directly.

Reference:

SAP Help Portal – Generative AI Hub SDK Guide – sections on structured output generation and JSON formatting.

How does SAP deal with vulnerability risks created by generative AI? Note: There are 2 correct answers to this question.


A. By implementing responsible AI use guidelines and strong product security standards.


B. By identifying human, technical, and exfiltration risks through an AI Security Taskforce.


C. By focusing on technological advancement only.


D. By relying on external vendors to manage security threats.





A.
  By implementing responsible AI use guidelines and strong product security standards.

B.
  By identifying human, technical, and exfiltration risks through an AI Security Taskforce.

Explanation:

The correct answers are A and B because SAP's official approach to managing generative AI vulnerabilities involves a comprehensive strategy that combines governance frameworks with active security measures.

Why A is correct:

SAP has established formal responsible AI guidelines through its Global AI Ethics Policy and operational principles like "Safety and Security." These are implemented through product security standards with technical controls built into services like the Generative AI Hub and AI Core, such as the Prompt Registry and Input/Output Filtering.

Why B is correct:

SAP proactively addresses AI security through specialized teams that systematically identify risks across human factors, technical vulnerabilities, and data exfiltration threats. This structured risk assessment is documented in SAP's security communications about their AI stack.

Why C is incorrect:

SAP does not focus on technological advancement only. Their approach explicitly integrates organizational governance, compliance reviews, and human oversight throughout the AI lifecycle, going beyond pure technology.

Why D is incorrect:

SAP does not rely on external vendors to manage security threats. While they use third-party models, SAP maintains primary responsibility for security through their own controls including data isolation, filtering mechanisms, and internal governance structures.

Official SAP References:

SAP's blog post "Mitigating Security Risks in Generative AI Using SAP's AI Stack" details technical controls and risk frameworks
SAP's Responsible AI page outlines ethics and security principles
SAP Learning course "Introducing Responsible AI at SAP" explains governance frameworks

What contract type does SAP offer for AI ecosystem partner solutions?


A. Annual subscription-only contracts


B. All-in-one contracts, with services that are contracted through SAP


C. Pay-as-you-go for each partner service


D. Bring Your Own License (BYOL) for embedded partner solutions





B.
  All-in-one contracts, with services that are contracted through SAP

Explanation:

SAP offers a unified commercial framework for AI ecosystem partner solutions where customers sign a single contract directly with SAP, and all partner services are contracted through SAP. This "all-in-one" model simplifies procurement, avoids multi-party negotiations, and provides a seamless experience for customers consuming integrated AI solutions. According to SAP's official learning materials, partner solutions are "branded and contracted through SAP," meaning SAP acts as the single point of contact for contracting. Recent partnership announcements, such as the SAP-Icertis collaboration, highlight "one-stop licensing" as a key benefit of this integrated approach.

Why Other Options Are Incorrect

A. Annual subscription-only contracts:
Incorrect because SAP offers various consumption models, including AI Units and bundled packages, not exclusively annual subscriptions for partner solutions.

C. Pay-as-you-go for each partner service:
Incorrect. Although SAP provides consumption-based pricing for certain services, partner solutions are integrated into the broader contractual framework rather than requiring separate pay-as-you-go arrangements.

D. Bring Your Own License (BYOL) for embedded partner solutions:
Incorrect. BYOL exists for some SAP products but is not the model for embedded AI partner solutions, where SAP's strategy emphasizes unified contracting.

References

SAP Learning: "Summarizing Commercial SAP Business AI Solutions Aspects"

Which of the following describes Large Language Models (LLMs)?


A. They rely on traditional rule-based algorithms to generate responses


B. They utilize deep learning to process and generate human-like text


C. They can only process numerical data and are not capable of understanding text


D. They generate responses based on pre-defined templates without learning from data





B.
  They utilize deep learning to process and generate human-like text

Explanation:

Large Language Models (LLMs) are advanced AI systems built on deep learning architectures, specifically transformer neural networks. They are trained on massive datasets of text to understand context, semantics, and linguistic patterns, enabling them to generate coherent, contextually relevant, and human-like responses. LLMs learn from data rather than following rigid rules, allowing them to perform tasks such as summarization, translation, question answering, and code generation without task-specific programming. This deep learning foundation is what distinguishes LLMs from earlier natural language processing approaches.

Why Other Options Are Incorrect

A. They rely on traditional rule-based algorithms to generate responses:
Incorrect because LLMs are data-driven and learn patterns from training data, unlike traditional rule-based systems (e.g., expert systems) that depend on manually coded linguistic rules.

C. They can only process numerical data and are not capable of understanding text:
Incorrect as LLMs are specifically designed to process and generate natural language text, converting words into numerical representations (embeddings) for computation while maintaining semantic understanding.

D. They generate responses based on pre-defined templates without learning from data:
Incorrect because LLMs dynamically generate responses based on learned patterns from training data, unlike template-based systems that simply fill blanks in fixed response structures.

References

IBM: "What are Large Language Models?"

What does the Prompt Management feature of the SAP AI launchpad allow users to do?


A. Create and edit prompts


B. Provide personalized user interactions


C. Interact with models through a conversational interface


D. Access and manage saved prompts and their versions





D.
  Access and manage saved prompts and their versions

Explanation

In the SAP AI Launchpad, specifically within the Generative AI Hub, Prompt Management serves as the central repository or "system of record" for prompt engineering assets. Its primary function is the lifecycle management of prompts rather than the initial creation or real-time execution.

Why Other Options are Incorrect

A (Create and edit prompts):
While inherently linked, this is technically the primary function of the Prompt Editor. The Editor is the "workspace" for drafting; Management is the "library" for storing.

B (Provide personalized user interactions):
This is a functional outcome of using AI in a business context, not a specific technical feature of the Prompt Management UI.

C (Interact with models through a conversational interface):
This describes the Chat or Playground feature within the Prompt Editor, where users test model responses in real-time.

Reference

SAP Help Portal: SAP AI Launchpad – Managing Prompts.

Which of the following steps must be performed to deploy LLMs in the generative AI hub?


A. Run the booster
•Create service keys
•Select the executable ID


B. Provision SAP AI Core
•Check for foundation model scenario
•Create a configuration
•Create a deployment


C. Check for foundation model scenario
•Create a deployment
•Configuring entitlements


D. Provision SAP AI Core
•Create a configuration
•Run the booster





B.
  Provision SAP AI Core
•Check for foundation model scenario
•Create a configuration
•Create a deployment

Explanation:

To deploy Large Language Models (LLMs) in the generative AI hub, you must follow a specific sequential process. First, provision SAP AI Core from your SAP BTP cockpit, which generates the necessary service key for authentication. Next, check for the foundation model scenario in your SAP AI Core tenant, as this global AI scenario manages access to all generative AI models. Then, create a configuration where you specify the model provider, model name, version, and other parameters. Finally, create a deployment based on this configuration, which instantiates the LLM and makes it available for consumption via a unique deployment URL.
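The configuration step can be illustrated as a JSON body sent to the AI API. The executable ID, model name, and configuration name below are illustrative values, not a definitive payload; consult the AI API reference for the exact schema:

```python
import json

# Sketch of a configuration body for the foundation-models scenario,
# as it might be posted to the AI API's configurations endpoint.
configuration = {
    "name": "gpt-4o-config",            # illustrative name
    "executableId": "azure-openai",     # illustrative executable ID
    "scenarioId": "foundation-models",
    "parameterBindings": [
        {"key": "modelName", "value": "gpt-4o"},
        {"key": "modelVersion", "value": "latest"},
    ],
}

body = json.dumps(configuration)
print(body)
```

A deployment would then reference the configuration ID returned by this call.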

Why Other Options Are Incorrect

A. Run the booster • Create service keys • Select the executable ID:
Incorrect because while boosters help provision SAP AI Core, selecting an executable ID occurs during configuration creation, and checking for the foundation model scenario is missing entirely.

C. Check for foundation model scenario • Create a deployment • Configuring entitlements:
Incorrect due to critical ordering errors. Configuring entitlements must occur before provisioning SAP AI Core, and you cannot create a deployment without first creating a configuration.

D. Provision SAP AI Core • Create a configuration • Run the booster:
Incorrect because boosters are used to provision SAP AI Core itself, not as a subsequent step after provisioning.

References

SAP Learning: "Getting Started with Generative AI Hub"
SAP Developer Center: "Set up Generative AI Hub in SAP AI Core"

How can few-shot learning enhance LLM performance?


A. By enhancing the model's computational efficiency


B. By providing a large training set to improve generalization


C. By reducing overfitting through regularization techniques


D. By offering input-output pairs that exemplify the desired behavior





D.
  By offering input-output pairs that exemplify the desired behavior

Explanation:

Few-shot learning is a prompt engineering technique used to improve the accuracy and relevance of Large Language Model (LLM) responses without retraining or fine-tuning the underlying model.

Contextual Guidance: By providing a small number (typically 2 to 5) of specific input-output examples within the prompt, the user "shows" the model exactly how to format the response or handle specific logic.

In-Context Learning: The LLM uses these examples to identify patterns and nuances that a zero-shot (no example) prompt might miss. This is particularly effective for sentiment analysis, data extraction, or adhering to a specific corporate brand voice.

Behavior Alignment: It helps the model understand the "desired behavior" for complex tasks, such as converting natural language into a very specific JSON schema or SQL query.
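A compact sketch of the idea (the example pairs and prompt wording are invented): the input-output pairs travel inside the prompt itself, and nothing about the model changes.

```python
# Few-shot prompt: a handful of input-output pairs demonstrate the task
# in the prompt itself; the model's weights are untouched.
shots = [
    ("The delivery was late again.", '{"sentiment": "negative"}'),
    ("Great support, thank you!", '{"sentiment": "positive"}'),
]

task = "Classify the sentiment of the message and answer as JSON.\n\n"
examples = "\n".join(f"Message: {i}\nAnswer: {o}" for i, o in shots)
prompt = f"{task}{examples}\nMessage: The app keeps crashing.\nAnswer:"
print(prompt)
```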

Why Other Options are Incorrect

A (Enhancing computational efficiency):
Few-shot learning actually increases the token count of the prompt, which can slightly increase latency and cost. It optimizes for accuracy, not computational speed.

B (Providing a large training set):
Few-shot uses a very limited number of examples (usually <10). Providing a "large training set" (thousands of examples) refers to Fine-Tuning, which involves updating the model's internal weights.

C (Reducing overfitting through regularization):
Regularization and overfitting are concepts related to the training phase of a model. Few-shot learning occurs during the inference phase (prompting) and does not change the model's structural parameters.

Reference

SAP Help Portal: Generative AI Hub – Prompt Engineering Best Practices.
SAP Learning Journey: Developing with SAP Generative AI Hub (Section: Prompt Engineering Techniques).


Page 1 out of 6 Pages