What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?
A. It allows the LLM to access a larger dataset.
B. It eliminates the need for any training or computational resources.
C. It provides examples in the prompt to guide the LLM to better performance with no training cost.
D. It significantly reduces the latency for each model request.
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Few-shot prompting involves providing a few examples in the prompt to guide the LLM’s
behavior, leveraging its in-context learning ability without requiring retraining or additional
computational resources. This makes Option C correct. Option A is false, as few-shot
prompting doesn’t expand the dataset. Option B overstates the case, as inference still
requires resources. Option D is incorrect: if anything, the longer prompt slightly increases
per-request latency rather than reducing it.
OCI 2025 Generative AI documentation likely highlights few-shot prompting in sections on
efficient customization.
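As a concrete illustration (the task and labels below are invented, not taken from OCI documentation), a few-shot prompt simply embeds worked examples in the request text; no weights are updated:

```python
# A minimal few-shot prompt: two worked examples steer the model toward
# the desired sentiment-labeling format, with zero training cost.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The checkout flow was fast and painless."
Sentiment: Positive

Review: "Support never answered my ticket."
Sentiment: Negative

Review: "The dashboard loads instantly and looks great."
Sentiment:"""

# Send few_shot_prompt to any text-generation endpoint; the model is
# expected to continue with " Positive" by imitating the in-context examples.
```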
Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?
A. They always use an external database for generating responses.
B. They rely on internal knowledge learned during pretraining on a large text corpus.
C. They cannot generate responses without fine-tuning.
D. They use vector databases exclusively to produce answers.
Explanation:
Comprehensive and Detailed In-Depth Explanation:
LLMs without Retrieval Augmented Generation (RAG) depend solely on the knowledge
encoded in their parameters during pretraining on a large, general text corpus. They
generate responses based on this internal knowledge without accessing external data at
inference time, making Option B correct. Option A is false, as external databases are a
feature of RAG, not standalone LLMs. Option C is incorrect, as LLMs can generate
responses without fine-tuning via prompting or in-context learning. Option D is wrong, as
vector databases are used in RAG or similar systems, not in basic LLMs. This reliance on
pretraining distinguishes non-RAG LLMs from those augmented with real-time retrieval.
OCI 2025 Generative AI documentation likely contrasts RAG and non-RAG LLMs under
model architecture or response generation sections.
In which scenario is soft prompting appropriate compared to other training styles?
A. When there is a significant amount of labeled, task-specific data available
B. When the model needs to be adapted to perform well in a domain on which it was not originally trained
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training
D. When the model requires continued pretraining on unlabeled data
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Soft prompting adds trainable parameters (soft prompts) to adapt an LLM without retraining
its core weights, ideal for low-resource customization without task-specific data. This
makes Option C correct. Option A suits fine-tuning. Option B may require more than soft
prompting (e.g., domain fine-tuning). Option D describes pretraining, not soft prompting.
Soft prompting is efficient for specific adaptations.
OCI 2025 Generative AI documentation likely discusses soft prompting under PEFT
methods.
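To make this concrete, here is a minimal prompt-tuning (soft prompting) sketch; the Hugging Face peft library, model name, and hyperparameters are illustrative assumptions, not taken from the source:

```python
# Soft prompting: train a few "virtual token" embeddings while the
# base model's weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative choice

# 8 trainable virtual tokens are prepended to every input.
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
model = get_peft_model(base_model, config)

# Typically a tiny fraction of parameters is trainable, which is the point.
model.print_trainable_parameters()
```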
An AI development company is working on an advanced AI assistant capable of handling queries in a seamless manner. Their goal is to create an assistant that can analyze images provided by users and generate descriptive text, as well as take text descriptions and produce accurate visual representations. Considering the capabilities, which type of model would the company likely focus on integrating into their AI assistant?
A. A diffusion model that specializes in producing complex outputs.
B. A Large Language Model-based agent that focuses on generating textual responses
C. A language model that operates on a token-by-token output basis
D. A Retrieval Augmented Generation (RAG) model that uses text as input and output
Explanation:
Comprehensive and Detailed In-Depth Explanation:
The task requires bidirectional text-image capabilities: analyzing images to generate text
and generating images from text. Diffusion models (e.g., Stable Diffusion) excel at complex
generative tasks, including text-to-image and image-to-text with appropriate extensions,
making Option A correct. Option B (LLM) is text-only. Option C (token-based LLM) lacks
image handling. Option D (RAG) focuses on text retrieval, not image generation. Diffusion
models meet both needs.
OCI 2025 Generative AI documentation likely discusses diffusion models under multimodal
applications.
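For the text-to-image half, a minimal sketch with the diffusers library (the library and checkpoint are assumptions, not from the source); the image-to-text direction would pair this with a separate captioning model:

```python
# Text-to-image with a diffusion model via Hugging Face diffusers.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a GPU is available

image = pipe("a lighthouse at dusk, watercolor style").images[0]
image.save("lighthouse.png")
```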
In the simplified workflow for managing and querying vector data, what is the role of indexing?
A. To convert vectors into a non-indexed format for easier retrieval
B. To map vectors to a data structure for faster searching, enabling efficient retrieval
C. To compress vector data for minimized storage usage
D. To categorize vectors based on their originating data type (text, images, audio)
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Indexing in vector databases maps high-dimensional vectors to a data structure (e.g.,
HNSW, Annoy) to enable fast, efficient similarity searches, critical for real-time retrieval in
LLMs. This makes Option B correct. Option A is backwards: indexing organizes vectors for retrieval; it does not de-index them.
Option C (compression) is a side benefit, not the primary role. Option D
(categorization) isn’t indexing’s purpose—it’s about search efficiency. Indexing powers
scalable vector queries.
OCI 2025 Generative AI documentation likely explains indexing under vector database
operations.
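A small sketch with the faiss library (a common choice, though not named in the source) shows what mapping vectors to a data structure buys: an HNSW index answers nearest-neighbor queries without scanning every vector.

```python
# Building an HNSW index over random embeddings with faiss.
import numpy as np
import faiss

dim = 128
vectors = np.random.rand(10_000, dim).astype("float32")

# HNSW graph index with 32 neighbors per node: trades a little accuracy
# for much faster search than a brute-force scan.
index = faiss.IndexHNSWFlat(dim, 32)
index.add(vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # 5 approximate nearest neighbors
print(ids[0], distances[0])
```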
What happens if a period (.) is used as a stop sequence in text generation?
A. The model ignores periods and continues generating text until it reaches the token limit.
B. The model generates additional sentences to complete the paragraph.
C. The model stops generating text after it reaches the end of the current paragraph.
D. The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
Explanation:
Comprehensive and Detailed In-Depth Explanation:
A stop sequence in text generation (e.g., a period) instructs the model to halt generation
once it encounters that token, regardless of the token limit. If set to a period, the model
stops after the first sentence ends, making Option D correct. Option A is false, as stop
sequences are enforced. Option B contradicts the stop sequence’s purpose. Option C is incorrect, as it stops at the sentence level, not paragraph.
OCI 2025 Generative AI documentation likely explains stop sequences under text
generation parameters.
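A toy simulation (pure Python, no particular API implied) makes the behavior concrete: generation halts the moment the stop string appears, however high the token limit is.

```python
# Toy illustration of stop-sequence handling; not a real model API.
def generate(tokens, stop=".", max_tokens=100):
    output = []
    for token in tokens[:max_tokens]:
        output.append(token)
        if stop in token:  # stop sequence reached: halt immediately
            break
    return "".join(output)

stream = ["The", " sky", " is", " blue", ".", " It", " rains", " often", "."]
print(generate(stream))  # -> "The sky is blue."  (second sentence never emitted)
```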
How are chains traditionally created in LangChain?
A. By using machine learning algorithms
B. Declaratively, with no coding required
C. Using Python classes, such as LLMChain and others
D. Exclusively through third-party software integrations
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Traditionally, LangChain chains (e.g., LLMChain) are created using Python classes that
define sequences of operations, such as calling an LLM or processing data. This
programmatic approach predates LCEL’s declarative style, making Option C correct.
Option A is vague and incorrect, as chains aren’t ML algorithms themselves. Option B
describes LCEL, not traditional methods. Option D is false, as third-party integrations aren’t
required. Python classes provide structured chain building.
OCI 2025 Generative AI documentation likely contrasts traditional chains with LCEL under
LangChain sections.
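A minimal sketch of the traditional class-based style (using the legacy LLMChain API, which newer LangChain versions deprecate in favor of LCEL; the fake LLM stands in for a real model):

```python
# Traditional class-based chain construction in LangChain.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms.fake import FakeListLLM  # stand-in for a real LLM

llm = FakeListLLM(responses=["A vector database indexes embeddings for similarity search."])
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in one sentence.",
)

# LLMChain wires the prompt template to the model in a fixed sequence.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="vector databases"))
```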
What does a cosine distance of 0 indicate about the relationship between two embeddings?
A. They are completely dissimilar
B. They are unrelated
C. They are similar in direction
D. They have the same magnitude
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Cosine distance measures the angle between two vectors, where 0 means the vectors
point in the same direction (cosine similarity = 1), indicating high similarity in embeddings’
semantic content, so Option C is correct. Option A (completely dissimilar, i.e., opposite
vectors) corresponds to a distance of 2, and Option B (unrelated, i.e., orthogonal vectors)
to a distance of 1, not 0. Option D is irrelevant, as cosine distance ignores magnitude
entirely. This is key for semantic comparison.
OCI 2025 Generative AI documentation likely explains cosine distance under vector
database metrics.
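A quick numerical check (plain NumPy, values invented) confirms the three cases:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means the vectors point the same way.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
print(cosine_distance(a, 2 * a))                        # 0.0 -> same direction, any magnitude
print(cosine_distance(a, np.array([3.0, -3.0, 1.0])))   # 1.0 -> orthogonal, unrelated
print(cosine_distance(a, -a))                           # 2.0 -> opposite direction
```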
Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?
A. Linear relationships; they simplify the modeling process
B. Semantic relationships; crucial for understanding context and generating precise language
C. Hierarchical relationships; important for structuring database queries
D. Temporal relationships; necessary for predicting future linguistic trends
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Vector databases store embeddings that preserve semantic relationships (e.g., similarity
between "dog" and "puppy") via their positions in high-dimensional space. This accuracy
enables LLMs to retrieve contextually relevant data, improving understanding and
generation, making Option B correct. Option A (linear) is too vague and unrelated. Option C
(hierarchical) applies more to relational databases. Option D (temporal) isn’t the
focus—semantics drives LLM performance. Semantic accuracy is vital for meaningful
outputs.
OCI 2025 Generative AI documentation likely discusses vector database accuracy under
embeddings and RAG.
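The "dog"/"puppy" example can be checked directly with an embedding model; the sentence-transformers library and model name below are assumptions, not from the source:

```python
# Semantic relationships preserved by embeddings: related words score higher.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
emb = model.encode(["dog", "puppy", "spreadsheet"])

print(util.cos_sim(emb[0], emb[1]).item())  # dog vs puppy: relatively high
print(util.cos_sim(emb[0], emb[2]).item())  # dog vs spreadsheet: much lower
```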
What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?
A. It specifies a string that tells the model to stop generating more content.
B. It assigns a penalty to frequently occurring tokens to reduce repetitive text.
C. It determines the maximum number of tokens the model can generate per response.
D. It controls the randomness of the model’s output, affecting its creativity.
Explanation:
Comprehensive and Detailed In-Depth Explanation:
The “stop sequence” parameter defines a string (e.g., “.” or “\n”) that, when generated,
halts text generation, allowing control over output length or structure—Option A is correct.
Option B (penalty) describes frequency/presence penalties. Option C (max tokens) is a
separate parameter. Option D (randomness) relates to temperature. Stop sequences
ensure precise termination.
OCI 2025 Generative AI documentation likely details stop sequences under generation
parameters.
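To keep the four options straight, a hypothetical request payload (field names are illustrative only, not the OCI SDK's) maps each option to its actual parameter:

```python
# Hypothetical generation request; field names are illustrative assumptions.
request = {
    "prompt": "List three uses of vector databases:",
    "max_tokens": 200,           # Option C: caps response length
    "temperature": 0.7,          # Option D: controls randomness/creativity
    "frequency_penalty": 0.2,    # Option B: discourages repeated tokens
    "stop_sequences": ["\n\n"],  # Option A: halts generation at this string
}
```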
How does the structure of vector databases differ from traditional relational databases?
A. It stores data in a linear or tabular format.
B. It is not optimized for high-dimensional spaces.
C. It uses simple row-based data storage.
D. It is based on distances and similarities in a vector space.
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Vector databases store data as high-dimensional vectors (embeddings) and are optimized
for similarity searches using metrics like cosine distance, unlike relational databases, which
use tabular rows and columns for structured data. This makes Option D correct. Options A
and C describe relational databases, not vector ones. Option B is false, as vector databases
are specifically designed for high-dimensional spaces. Vector databases excel in semantic
search and LLM integration.
OCI 2025 Generative AI documentation likely contrasts vector and relational databases
under data storage.
Which LangChain component is responsible for generating the linguistic output in a chatbot system?
A. Document Loaders
B. Vector Stores
C. LangChain Application
D. LLMs
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In LangChain, LLMs (Large Language Models) generate the linguistic output (text
responses) in a chatbot system, leveraging their pre-trained capabilities. This makes
Option D correct. Option A (Document Loaders) ingests data, not generates text. Option B
(Vector Stores) manages embeddings for retrieval, not generation. Option C (LangChain
Application) is too vague—it’s the system, not a specific component. LLMs are the core
text-producing engine.
OCI 2025 Generative AI documentation likely identifies LLMs as the generation component
in LangChain.