Topic 1: Part 1
CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs
teachers to create and deliver enrichment courses for high school students. GVC has
learned that many of its teacher employees are using generative AI to create the
enrichment courses, and that many of the students are using generative AI to complete
their assignments.
In particular, GVC has learned that the teachers it employs used open-source large
language models ("LLMs") to develop an online tool that customizes study questions for
individual students. GVC has also discovered that an art teacher has expressly
incorporated the use of generative AI into the curriculum to enable students to use prompts
to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use
of generative AI, including by teachers and students, going forward.
All of the following may be copyright risks from teachers using generative AI to create
course content EXCEPT?
A. Content created by an LLM may be protectable under U.S. intellectual property law.
B. Generative AI is generally trained using intellectual property owned by third parties.
C. Students must expressly consent to this use of generative AI.
D. Generative AI often creates content without attribution.
Summary
This question asks you to identify which option is not a copyright risk stemming from teachers using generative AI to create course content. Copyright risk, in this context, involves the potential for infringement of third-party rights or challenges in establishing ownership over the created materials. The key is to distinguish between a direct copyright issue and a separate legal or policy concern, such as privacy or terms of service.
Correct Option
C. Students must expressly consent to this use of generative AI.
Privacy or Policy Issue, Not Copyright:
The requirement for student consent relates to privacy laws (like FERPA in an educational context), academic integrity policies, or terms of service for the AI tool. It is not a matter of copyright law. Copyright law governs the ownership and use of creative works, not whether a user has consented to a specific pedagogical method or tool.
Incorrect Option
A. Content created by an LLM may be protectable under U.S. intellectual property law
This reflects a significant copyright risk: the protectability of AI-generated content is uncertain. Under current U.S. Copyright Office guidance, material generated without human authorship is not copyrightable, so GVC and its teachers may be unable to own the course materials they create with the AI. That would leave them unable to prevent others from copying or reselling the content, undermining the commercial value of their courses.
B. Generative AI is generally trained using intellectual property owned by third parties.
This is a core copyright risk. The training data for open-source LLMs often includes copyrighted material scraped from the web. Using a tool trained on this data to create commercial course content could lead to claims of copyright infringement, as the output may be a derivative work based on unlicensed third-party IP.
D. Generative AI often creates content without attribution.
This poses a direct copyright risk. If the AI generates content that is substantially similar to a copyrighted work in its training data, and that content is reproduced without a license or attribution, distributing it may constitute copyright infringement (and, in an academic setting, plagiarism). GVC could be liable for distributing this unattributed, potentially infringing content to students.
Reference
U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (March 2023). This official guidance is the primary source for the principle that AI-generated outputs lack human authorship and are not protected by copyright, which relates directly to option A. The training-data and attribution risks in options B and D follow from general principles of copyright infringement and derivative works.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would
then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first
month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
Each of the following steps would support fairness testing by the compliance team during
the first month in production EXCEPT?
A. Validating a similar level of decision-making across different demographic groups.
B. Providing the loan applicants with information about the model capabilities and limitations.
C. Identifying if additional training data should be collected for specific demographic groups.
D. Using tools to help understand factors that may account for differences in decision-making.
Summary
This question focuses on the specific actions a compliance team can take to conduct fairness testing during the initial monitoring of a deployed AI model. Fairness testing is a technical and analytical process of measuring the model's outcomes for disparate impact across demographic groups. The goal is to identify which action, while potentially valuable in a broader context, does not directly constitute an act of testing for fairness itself.
Correct Option
B. Providing the loan applicants with information about the model capabilities and limitations.
Transparency, Not Testing:
This action is an important element of ethical AI deployment and transparency, falling under the "Communicate" or "Disclose" function of a governance framework. However, it is not a method of testing the model.
Post-Analysis Action:
Providing information to applicants is something done after fairness testing has been conducted and potential issues have been identified or ruled out. It does not involve the technical process of measuring the model's output for bias, which is the core task of the compliance team during this monitoring phase.
Incorrect Option
A. Validating a similar level of decision-making across different demographic groups.
This is the direct definition of fairness testing. It involves statistically analyzing key performance metrics (like approval rates, false positive rates, etc.) for different demographic groups (e.g., men vs. women) to validate that the model's decisions are equitable.
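As a concrete illustration of this kind of check, a minimal fairness test can compare approval rates across groups and compute a disparate-impact ratio (the data, group names, and the four-fifths threshold below are illustrative assumptions, not figures from the case study):

```python
# Minimal sketch of a demographic-parity check: compute the approval rate
# per group, then the ratio between the lowest- and highest-rate groups.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Synthetic example: 80/100 men approved vs. 50/100 women approved.
decisions = ([("men", 1)] * 80 + [("men", 0)] * 20
           + [("women", 1)] * 50 + [("women", 0)] * 50)

rates = approval_rates(decisions)
ratio = rates["women"] / rates["men"]   # disparate-impact ratio
print(rates, round(ratio, 2))           # commonly flagged when the ratio falls below 0.8
```

The "four-fifths rule" threshold used here is a conventional heuristic from U.S. employment-discrimination practice; a real compliance team would choose metrics and thresholds appropriate to its jurisdiction and use case.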
C. Identifying if additional training data should be collected for specific demographic groups.
This is a direct outcome of fairness testing. If the testing reveals a performance disparity for a specific group (e.g., higher decline rates for women), a root cause analysis may point to a lack of representative data for that group in the training set. Identifying this need is a critical part of the testing and diagnosis process.
D. Using tools to help understand factors that may account for differences in decision-making.
This describes the use of explainable AI (XAI) tools, which are essential for fairness testing. These tools help the compliance team move from simply detecting a disparity (e.g., "women are declined more") to diagnosing its cause (e.g., "the model is overly reliant on historical salary"), which is a core analytical step in testing.
Reference
NIST AI RMF 1.0, "MEASURE" Function. This function involves using quantitative and qualitative methods to assess AI system performance and impacts. Validating outcomes across groups (A), diagnosing causes of disparity (D), and identifying data gaps (C) are all explicit activities under measuring for fairness. Providing information to users (B) aligns more closely with the "MANAGE" and "Communicate" functions that follow analysis.
Random forest algorithms are what type of machine learning model?
A. Symbolic.
B. Generative.
C. Discriminative.
D. Natural language processing.
Summary
This question tests the classification of a fundamental machine learning algorithm. Random Forest is an ensemble method that constructs multiple decision trees during training and outputs a single result, typically the mode of the classes (for classification) or the mean prediction (for regression) of the individual trees. Its primary purpose is to discriminate between different classes or predict a value based on input features, which categorizes it within a major type of ML model.
Correct Option
C. Discriminative.
Focus on Decision Boundaries:
Discriminative models, like Random Forest, learn the conditional probability P(Y|X) – that is, the probability of a label (Y) given the input features (X). They focus on learning the boundaries that separate different classes in the data.
Predictive, Not Descriptive:
Their goal is to discriminate or distinguish between classes to make accurate predictions. A Random Forest classifier does not model how the data was generated; it learns to map inputs to the most likely output, making it a quintessential discriminative model.
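To make this discriminative behavior concrete, the ensemble's voting mechanism can be sketched in toy form. This is an illustration of majority voting only, not a real Random Forest: the "trees" below are hand-written decision stumps rather than trees learned from data.

```python
# Toy sketch of ensemble voting: each "tree" (here a hand-written decision
# stump) maps input features X to a class label, and the forest returns the
# mode of the votes -- a discriminative mapping from inputs to labels,
# with no model of how the inputs themselves are distributed.
from collections import Counter

def stump_income(x): return 1 if x["income"] > 50 else 0
def stump_age(x):    return 1 if x["age"] > 30 else 0
def stump_sum(x):    return 1 if x["income"] + x["age"] > 70 else 0

forest = [stump_income, stump_age, stump_sum]

def predict(forest, x):
    votes = [tree(x) for tree in forest]
    return Counter(votes).most_common(1)[0][0]   # mode of the class votes

print(predict(forest, {"income": 60, "age": 25}))  # two of three stumps vote 1
```

A real Random Forest additionally trains each tree on a bootstrap sample of the data with random feature subsets, but the prediction step is the same idea: aggregate the trees' votes into a single class label.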
Incorrect Option
A. Symbolic.
Symbolic AI (or "Good Old-Fashioned AI") relies on explicit rules and logic representations, such as knowledge graphs and expert systems. Random Forest is a statistical, sub-symbolic model that learns patterns from data, not from pre-programmed rules.
B. Generative.
Generative models learn the joint probability P(X, Y) and can generate new data that resembles the training data. Examples include Generative Adversarial Networks (GANs) and Variational Autoencoders. Random Forest cannot generate new input data; it can only predict labels for existing data, making it discriminative.
D. Natural language processing.
Natural Language Processing (NLP) is an application domain of AI, not a type of machine learning model. While a Random Forest could be used as the classifier in an NLP task (e.g., sentiment analysis), the algorithm itself is a general-purpose discriminative model, not an NLP-specific one.
Reference
The distinction between discriminative and generative models is a foundational concept in machine learning theory. While not always explicitly detailed in high-level framework documents, it is covered in the underlying computer science and statistics knowledge that informs the IAPP AIGP curriculum's understanding of how different algorithms work.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first
month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
The best approach to enable a customer who wants information on the AI model's
parameters for underwriting purposes is to provide?
A. A transparency notice.
B. An opt-out mechanism.
C. Detailed terms of service.
D. Customer service support.
Summary
This question focuses on the appropriate mechanism for fulfilling a customer's right to information about the AI logic involved in a decision that affects them. In the context of insurance underwriting, which is a high-stakes, automated decision-making process, providing a meaningful explanation is a key transparency and fairness requirement. The solution must directly address the customer's request for information about the "model's parameters" in a clear and accessible manner.
Correct Option
A. A transparency notice.
Purpose-Built for Explanation:
A transparency notice (also known as an AI explainability statement or fairness report) is specifically designed to provide individuals with meaningful information about the logic, significance, and consequences of an automated decision-making system.
Addresses the Core Request:
It is the most direct way to inform a customer about the key factors (parameters) the model considers in its underwriting assessment without necessarily revealing proprietary secrets. This fulfills ethical principles and regulatory expectations for explainability in automated decision-making.
Incorrect Option
B. An opt-out mechanism.
An opt-out mechanism allows a customer to choose an alternative process (e.g., a fully human-based review). While this is an important rights-preserving measure, it does not itself provide the information the customer is asking for regarding the model's parameters. It is a choice about how the decision is made, not an explanation of how the AI works.
C. Detailed terms of service.
Terms of service are a legal contract governing the use of a service. They are not a suitable vehicle for providing a clear, accessible explanation of a specific AI model's parameters. Burying this information in a lengthy legal document does not constitute a meaningful or user-friendly transparency effort.
D. Customer service support.
While a customer service representative could be a channel through which a transparency notice is delivered, the support line itself is not the information. Relying solely on a support agent to verbally explain a complex model's parameters is unreliable, inconsistent, and unlikely to provide the comprehensive and accurate details the customer is seeking.
Reference
This aligns with principles in the NIST AI RMF, particularly under the "TRANSPARENCY" and "INTERPRETABILITY & EXPLAINABILITY" characteristics. It also connects to regulatory requirements like those in the EU AI Act for high-risk AI systems, which mandate that users be provided with clear and adequate information about the system's capabilities and limitations. Providing a transparency notice is a recognized best practice for operationalizing these principles.
CASE STUDY
Please use the following answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally,
is embarking on a new hiring campaign and wants to implement policies and procedures to
identify and retain the best talent. The new talent will help the company's product team
expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time consuming and expensive for HR to review all resumes, and they are
concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen
resumes and assist with hiring. They have been talking to several vendors about possibly
obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and
comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of
technology solutions. One of the procurement team's goals is to reduce costs, and it often
prefers lower-cost solutions. Others within the company are responsible for integrating and
deploying technology solutions into the organization's operations in a responsible,
cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate
them. It also questions how best to organize and train its existing personnel to use the AI
hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary
across jurisdictions and continue to change.
Which other stakeholder groups should be involved in the selection and implementation of
the AI hiring tool?
A. Finance and Legal.
B. Marketing and Compliance.
C. Supply Chain and Marketing.
D. Litigation and Product Development.
Summary
The selection of an AI hiring tool is a high-risk decision that extends beyond simple procurement. The case study highlights specific risks: bias in hiring, compliance with varying global laws, and integration into operations. Therefore, the stakeholder groups involved must provide expertise to directly address these risks. The goal is to form a cross-functional team capable of evaluating the tool's legal, financial, operational, and ethical implications, not just its cost.
Correct Option
A. Finance and Legal.
Legal:
This group is essential to assess compliance with employment and anti-discrimination laws across different jurisdictions (especially with expansion into Asia), review vendor contracts for liability and data protection clauses, and evaluate the legal risks of algorithmic bias.
Finance:
While procurement focuses on initial cost, Finance can evaluate the total cost of ownership (TCO), including potential costs of litigation, reputational damage from a biased tool, and the long-term ROI of a more effective, fairer hiring process.
Incorrect Option
B. Marketing and Compliance.
While the Compliance function is critically important and should be involved (making this a partially good choice), Marketing is not a core stakeholder for an internal HR tool. Marketing's focus is external, on customers and brand perception. Their involvement is less direct than that of Legal or Finance, who deal with immediate contractual, financial, and regulatory risks.
C. Supply Chain and Marketing.
Supply Chain manages the flow of physical goods and materials, which is unrelated to the procurement of a software-based AI service for HR. Marketing, as noted above, has no direct operational role in the selection and implementation of an internal hiring system. This combination does not address the core risks identified.
D. Litigation and Product Development.
Litigation is a reactive legal function that handles existing lawsuits. It is more appropriate to involve proactive Legal counsel to prevent litigation. Product Development is focused on building the company's payroll products, not on selecting internal HR systems. Their goals are not aligned with the implementation of a hiring tool.
Reference
NIST AI RMF 1.0, "GOVERN" Function. The framework emphasizes that effective AI risk management requires a cross-functional and multidisciplinary approach. It specifically calls for involving roles with expertise in law, compliance, finance, and the specific business domain (in this case, HR) to ensure risks are identified and managed from all necessary perspectives.
Each of the following actors is typically engaged in the AI development life cycle EXCEPT?
A. Data architects.
B. Government regulators.
C. Socio-cultural and technical experts.
D. Legal and privacy governance experts.
Summary
This question asks you to identify which actor is not a core, internal participant in the typical AI development lifecycle. The lifecycle (e.g., planning, design, development, deployment, monitoring) is primarily carried out by the organization building and deploying the AI system. It involves internal teams and hired experts responsible for the technical, legal, and ethical creation of the system. External entities that set rules and oversee compliance are not part of the internal development team.
Correct Option
B. Government regulators.
External Oversight, Not Internal Development: Government regulators are external entities that create and enforce laws and regulations. They are not part of the organization's internal development team.
Subject to Regulation, Not Participants: While an organization must engage with and design for compliance with regulators, the regulators themselves are not "engaged in" the development process. Their role is to set the boundaries and audit outcomes, not to participate in the lifecycle's execution.
Incorrect Option
A. Data architects.
Data architects are essential technical roles engaged throughout the lifecycle. They design the data infrastructure, define data schemas, and ensure data quality and accessibility for training and running AI models, making them core participants in development.
C. Socio-cultural and technical experts.
These experts are increasingly critical in the AI lifecycle, especially during the planning and testing phases. They help identify and mitigate biases, ensure cultural appropriateness, and assess the broader societal impact of the AI system, aligning with responsible AI practices.
D. Legal and privacy governance experts.
These experts are fundamental stakeholders engaged from the very beginning (planning) through to deployment and monitoring. They ensure the AI system complies with relevant laws, regulations, and internal policies concerning data privacy, intellectual property, and liability.
Reference
NIST AI RMF 1.0, "GOVERN" Function. The framework explicitly calls for a multidisciplinary and cross-functional approach to AI risk management. It lists roles and functions that should be involved, encompassing technical (data architects), legal (governance experts), and socio-technical expertise. Regulators are framed as external authorities to whom organizations are accountable, not as internal team members.
All of the following are penalties and enforcements outlined in the EU AI Act EXCEPT?
A. Fines for SMEs and startups will be proportionally capped.
B. Rules on General Purpose AI will apply after 6 months as a specific provision.
C. The AI Pact will act as a transitional bridge until the Regulations are fully enacted.
D. Fines for violations of banned Al applications will be €35 million or 7% global annual turnover (whichever is higher).
Summary
This question tests specific knowledge of the enforcement mechanisms and transitional provisions within the EU AI Act. The Act outlines a phased implementation timeline with specific dates for different provisions, defines tiered financial penalties for non-compliance, and establishes certain supportive measures. The correct answer is the one that inaccurately describes one of these specific, outlined measures.
Correct Option
B. Rules on General Purpose AI will apply after 6 months as a specific provision.
Incorrect Timeline:
This statement misrepresents the phased implementation timeline of the EU AI Act. The rules for General-Purpose AI (GPAI) models are not set to apply after 6 months; they apply 12 months after the Act's entry into force. The prohibitions on banned AI practices take effect after 6 months, and most other provisions apply after 24 months.
Incorrect Option
A. Fines for SMEs and startups will be proportionally capped.
This is a correct provision. The EU AI Act includes administrative fines that are tiered based on the severity of the infringement. For the most serious violations, it does specify lower maximum fine amounts for Small and Medium-sized Enterprises (SMEs) and startups, providing a degree of proportionality.
C. The AI Pact will act as a transitional bridge until the Regulations are fully enacted.
This is a correct statement. The AI Pact is a voluntary initiative encouraged by the European Commission. It calls on AI developers to proactively align with the EU AI Act's key obligations ahead of the legal deadline, serving as a transitional bridge to early compliance.
D. Fines for violations of banned AI applications will be €35 million or 7% global annual turnover (whichever is higher).
This is a correct provision. The EU AI Act establishes a tiered system of administrative fines. For non-compliance with the prohibition of unacceptable risk AI practices (the banned applications), it indeed stipulates fines of up to €35 million or 7% of the company's total worldwide annual turnover, whichever is higher.
Reference
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). The specific timelines for application (including the 6-month deadline for banned applications, the 12-month deadline for GPAI obligations, and the 24-month general application), the tiered fine structure, and the mention of the AI Pact are all outlined in the final text of the Regulation.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would
then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first
month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
What is the best strategy to mitigate the bias uncovered in the loan applications?
A. Retrain the model with data that reflects demographic parity.
B. Procure a third-party statistical bias assessment tool.
C. Document all instances of bias in the data set.
D. Delete all gender-based data in the data set.
Summary
The case study identifies a clear case of algorithmic bias where the model perpetuates a historical societal bias (the gender pay gap) by using "historical salary" as a proxy, leading to discriminatory outcomes. The core of the problem is that the training data and the model's learned patterns are flawed. Therefore, the mitigation strategy must address this root cause by fundamentally improving the data and the model's understanding of fairness, not just by measuring or hiding the problem.
Correct Option
A. Retrain the model with data that reflects demographic parity.
Addresses the Root Cause:
This strategy directly tackles the source of the bias. It involves curating a training dataset where the relationship between gender and positive outcomes (loan approval) is not skewed by historical discrimination.
Proactive and Corrective:
This may involve techniques like re-sampling, re-weighting data points, or introducing synthetic data to ensure the model learns a fairer pattern. Retraining on a corrected dataset is the most robust way to "teach" the model to make equitable decisions without relying on discriminatory proxies like historical salary.
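One of the re-sampling techniques named above can be sketched as follows. The data, group labels, and the `oversample_to_parity` helper are hypothetical, shown only to illustrate oversampling approved examples from the under-approved group toward parity before retraining:

```python
# Illustrative oversampling sketch: duplicate approved examples from
# under-approved groups so every group contributes the same number of
# approved records to the (re)training set.
import random

def oversample_to_parity(rows, group_key="group", label_key="approved", seed=0):
    rng = random.Random(seed)
    approved_by_group = {}
    for r in rows:
        if r[label_key]:
            approved_by_group.setdefault(r[group_key], []).append(r)
    target = max(len(v) for v in approved_by_group.values())
    out = list(rows)
    for rs in approved_by_group.values():
        for _ in range(target - len(rs)):
            out.append(dict(rng.choice(rs)))  # copy an approved example
    return out

# Synthetic skew: 8/10 men approved vs. 3/10 women approved.
data = ([{"group": "men", "approved": 1}] * 8
      + [{"group": "men", "approved": 0}] * 2
      + [{"group": "women", "approved": 1}] * 3
      + [{"group": "women", "approved": 0}] * 7)

balanced = oversample_to_parity(data)
men_ok = sum(r["approved"] for r in balanced if r["group"] == "men")
women_ok = sum(r["approved"] for r in balanced if r["group"] == "women")
print(men_ok, women_ok)  # approved counts now match across groups
```

In practice, re-weighting loss contributions or generating synthetic examples are alternatives to naive duplication, which can overfit the duplicated records.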
Incorrect Option
B. Procure a third-party statistical bias assessment tool.
This is a step for measuring or detecting bias, not for mitigating it. ABC Corp has already uncovered the bias through its internal monitoring. Buying another tool would confirm the problem but would not solve it. The company needs to take corrective action, not just further analysis.
C. Document all instances of bias in the data set.
Documentation is a key governance practice for transparency and accountability, but it is a passive action. Simply documenting the bias does nothing to stop the model from producing discriminatory outcomes. It is a necessary record-keeping step that must be followed by active mitigation.
D. Delete all gender-based data in the data set.
This is a common but flawed approach known as "fairness through blindness." Simply removing the 'gender' column is ineffective because the model can easily infer gender from strong proxies like "historical salary," "profession," or "hobbies." This gives a false sense of fairness while the bias persists through other correlated features.
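The proxy problem described above can be demonstrated with a tiny synthetic example (the data is made up and exaggerated for clarity): even after the gender column is deleted, a one-line threshold on the correlated salary feature recovers gender for every record.

```python
# "Fairness through blindness" fails when a retained feature is a strong
# proxy for the deleted protected attribute.
records = ([{"gender": "F", "salary": 40 + i} for i in range(10)]
         + [{"gender": "M", "salary": 60 + i} for i in range(10)])

stripped = [{"salary": r["salary"]} for r in records]   # gender column deleted

def infer_gender(row, threshold=55):
    # proxy rule: low salary -> "F", mirroring the historical pay gap in the data
    return "F" if row["salary"] < threshold else "M"

correct = sum(infer_gender(s) == r["gender"] for s, r in zip(stripped, records))
print(f"{correct}/{len(records)} genders recovered from the proxy")
```

A model trained on the stripped data can learn the same rule implicitly, which is why deleting the protected attribute alone does not remove the bias.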
Reference
NIST AI RMF 1.0, "MAP" and "MANAGE" Functions. The framework emphasizes that identifying biases and underlying causes (Mapping) must be followed by implementing appropriate risk mitigation strategies (Managing). Retraining with fairer data is a primary mitigation tactic. The concept of proxies and the inadequacy of simply removing protected attributes are well-established in AI fairness literature, such as in resources from NIST and academic research on algorithmic bias.
An EU bank intends to launch a multi-modal AI platform for customer engagement and
automated decision-making to assist with the opening of bank accounts. The platform has
been subject to thorough risk assessments and testing, where it proves to be effective in
not discriminating against any individual on the basis of a protected class.
What additional obligations must the bank fulfill prior to deployment?
A. The bank must obtain explicit consent from users under the privacy Directive.
B. The bank must disclose how the AI system works under the EU Digital Services Act.
C. The bank must subject the AI system to an adequacy decision and publish its appropriate safeguards.
D. The bank must disclose the use of the AI system and implement suitable measures for users to contest automated decision-making.
Summary
This scenario describes a high-risk AI system as defined by the EU AI Act: an automated system used by a bank to determine access to a fundamental service (opening a bank account). Even if the system is proven non-discriminatory, its classification as high-risk triggers specific, mandatory obligations before it can be deployed. These obligations are focused on transparency and providing fundamental rights safeguards for the individuals affected by the system.
Correct Option
D. The bank must disclose the use of the AI system and implement suitable measures for users to contest automated decision-making.
Core High-Risk Obligations:
For high-risk AI systems, the EU AI Act mandates clear transparency (Article 13) by informing individuals that they are subject to an automated decision. More critically, it requires the implementation of effective human oversight measures (Article 14).
Right to Contest:
This human oversight directly enables the right for users to contest, correct, or seek a human review of the automated decision. This is a fundamental right safeguard that must be built into the system's operation, making it the primary additional obligation beyond the initial risk assessment.
Incorrect Option
A. The bank must obtain explicit consent from users under the privacy Directive.
Consent is a specific legal basis for processing personal data under the GDPR, but it is generally not a valid or required basis for deploying a high-risk AI system under the AI Act. The obligation is to provide transparency and a right to contest, not to seek consent for the system's use. Relying on consent for such a power-imbalanced situation (a bank and a customer) is often considered invalid under GDPR.
B. The bank must disclose how the AI system works under the EU Digital Services Act.
The Digital Services Act (DSA) primarily governs online intermediaries and platforms (like social media and marketplaces), not the specific use of high-risk AI in regulated sectors like banking. The transparency and contestability obligations for this system are directly mandated by the EU AI Act, not the DSA.
C. The bank must subject the AI system to an adequacy decision and publish its appropriate safeguards.
An "adequacy decision" is a mechanism under the GDPR for transferring personal data to a country outside the EU. It is unrelated to the process of deploying a high-risk AI system within the EU. "Appropriate safeguards" is also a data transfer term and does not describe the pre-deployment obligations for a high-risk AI system under the AI Act.
Reference
Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 13 (Transparency) and Article 14 (Human Oversight). Annex III explicitly lists AI systems used to evaluate the creditworthiness of natural persons as high-risk. The obligations for such systems are detailed in Chapter III, Section 2, which mandates robust risk management, transparency, and human oversight measures prior to deployment.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would
then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
Which of the following is the most important reason to train the underwriters on the model
prior to deployment?
A. To provide a reminder of a right to appeal.
B. To solicit on-going feedback on model performance.
C. To apply their own judgment to the initial assessment.
D. To ensure they provide transparency to applicants on the model.
Summary
The case study establishes a "human-in-the-loop" system where the AI provides an initial assessment, and a human underwriter makes the final review. The core purpose of this structure is to use human judgment as a critical control to catch and correct potential errors or biases from the AI. For the human to perform this role effectively, they must be trained to understand the AI's capabilities, limitations, and typical failure modes. This training is essential for the human to provide meaningful oversight.
Correct Option
C. To apply their own judgment to the initial assessment.
Enabling Effective Human Oversight:
The primary reason to train the underwriters is to empower them to effectively exercise their judgment. They need to understand how the model works, what factors it considers, and, crucially, its known limitations (like a potential reliance on historical salary data) to properly evaluate its recommendations.
Mitigating Automation Bias:
Without training, humans tend to over-trust AI outputs, a phenomenon known as automation bias. Training equips the underwriters to critically assess the AI's suggestion rather than rubber-stamping it, which is the entire point of having a human in the final review loop.
Incorrect Option
A. To provide a reminder of a right to appeal.
Informing customers of their right to appeal is a separate transparency and compliance obligation. It is not the most important reason for training the underwriters. This information would be provided to the applicant directly, not be the focus of internal staff training on the model itself.
B. To solicit on-going feedback on model performance.
While collecting feedback from underwriters is a valuable practice for continuous monitoring, it is a secondary benefit of their involvement. The most important reason for initial training is to enable them to do their core job correctly from the start, not to turn them into data sources for model improvement.
D. To ensure they provide transparency to applicants on the model.
The responsibility to provide transparency to applicants about the use of AI is an organizational duty that should be managed through standardized processes and communications (like a transparency notice). It is not the primary responsibility of the individual underwriter, nor is it the main goal of training them on the model's mechanics.
Reference
NIST AI RMF 1.0, "GOVERN" and "MANAGE" Functions. The framework emphasizes human oversight as a key governance mechanism. It states that organizations should "ensure that human oversight is built into the lifecycle... and that overseers are adequately trained and empowered." Training is explicitly mentioned as necessary for humans to effectively exercise their judgment and authority over AI systems.
Which of the following best defines an "AI model"?
A. A system that applies defined rules to execute tasks.
B. A system of controls that is used to govern an AI algorithm.
C. A corpus of data which an AI algorithm analyzes to make predictions.
D. A program that has been trained on a set of data to find patterns within the data.
Summary
This question tests the fundamental definition of an "AI model." It is crucial to distinguish the model itself from the data it uses, the rules it might follow, or the governance controls around it. An AI model is the specific output of a training process, representing a mathematical function that has learned to map inputs to outputs based on patterns discovered in data, enabling it to make predictions or decisions on new, unseen data.
Correct Option
D. A program that has been trained on a set of data to find patterns within the data.
The Output of Training:
This definition accurately describes an AI model as the product of a machine learning process. The "program" refers to the algorithm and its resulting parameters (weights and biases).
Core Function is Pattern Recognition:
The essential characteristic is that it has been trained to find patterns. This differentiates it from a traditional, rule-based program. Once trained, this model can use the discovered patterns to perform tasks like classification or prediction on new data.
Incorrect Option
A. A system that applies defined rules to execute tasks.
This describes a traditional, rule-based software program or an expert system, not a typical AI model. AI models learn their own implicit "rules" (patterns) from data; they are not explicitly programmed with a fixed set of if-then statements.
B. A system of controls that is used to govern an AI algorithm.
This defines an AI governance framework or a management system. It includes policies, procedures, and tools for risk management and oversight. This is the context in which a model operates, not the model itself.
C. A corpus of data which an Al algorithm analyzes to make predictions.
This defines the training dataset. The data is the input used to create the model. The model is the output—the learned representation or function—not the input data itself.
Reference
National Institute of Standards and Technology (NIST) AI RMF 1.0. While the RMF does not provide a single-sentence definition, its entire framework is predicated on this understanding. It treats an AI model as a component of an AI system that is developed through a training process and is used to make predictions or decisions. This aligns perfectly with Option D, distinguishing the trained model from its data, its governing controls, and non-learning-based software.
According to the Singapore Model AI Governance Framework, all of the following are recommended measures to promote the responsible use of AI EXCEPT?
A. Determining the level of human involvement in algorithmic decision-making.
B. Adapting the existing governance structure for algorithmic decision-making.
C. Employing human-over-the-loop protocols for high-risk systems.
D. Establishing communications and collaboration among stakeholders.
Summary
This question tests knowledge of the specific recommendations within the Singapore Model AI Governance Framework. This framework is known for its practical, risk-based approach. A key principle is that the level of human oversight (e.g., in-the-loop, on-the-loop, over-the-loop) should be proportionate to the risk posed by the AI system. The framework advises against a one-size-fits-all mandate, especially prescribing a specific, high-intensity oversight model for an entire category of systems.
Correct Option
C. Employing human-over-the-loop protocols for high-risk systems.
Proportionality Over Prescription:
The Singapore Framework explicitly advocates for a proportional approach to human oversight. It recommends selecting the appropriate type of involvement (in, on, or over-the-loop) based on a risk assessment, rather than mandating the most intensive form ("human-in-the-loop") for all high-risk systems.
"Over-the-Loop" Defined:
"Human-over-the-loop" typically involves periodic monitoring and auditing, which may be insufficient for certain high-risk, real-time decisions. The framework would recommend that for such cases, a more direct "human-in-the-loop" control is necessary. Therefore, prescribing "over-the-loop" for all high-risk systems contradicts the framework's risk-proportionate philosophy.
Incorrect Option
A. Determining the level of human involvement in algorithmic decision-making.
This is a core recommendation. The framework emphasizes that organizations should consciously decide and document the appropriate level of human involvement (e.g., in-the-loop, on-the-loop, over-the-loop) based on the system's risk and impact.
B. Adapting the existing governance structure for algorithmic decision-making.
This is a fundamental principle of the framework. It encourages organizations to build upon their existing governance structures (like risk and ethics committees) to oversee AI, rather than creating entirely new, separate structures from scratch.
D. Establishing communications and collaboration among stakeholders.
This is a key enabler for responsible AI. The framework highlights the importance of internal communication (among teams like legal, IT, and business) and external communication (with users and regulators) to ensure a holistic approach to governance.
Reference
Singapore Personal Data Protection Commission (PDPC), "Model AI Governance Framework" (Second Edition). The framework's section on "Operations Management" discusses human oversight, stating that the "appropriate level of human involvement in AI-augmented decision-making varies depending on the potential impact of the AI decision." It avoids mandating a single protocol like "human-over-the-loop" for high-risk systems, instead advocating for a proportional approach determined by the organization.