Free AIGP Practice Test Questions 2026

100 Questions


Last Updated On: 13-Mar-2026


Topic 1: Part 1

According to the GDPR, what is an effective control to prevent a determination based solely on automated decision-making?


A. Provide a just-in-time notice about the automated decision-making logic.


B. Define suitable measures to safeguard personal data.


C. Provide a right to review automated decisions.


D. Establish a human-in-the-loop procedure.





D. Establish a human-in-the-loop procedure.

Explanation: The GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, unless specific exceptions apply. One effective control is to establish a human-in-the-loop procedure (D), ensuring human oversight and the ability to contest decisions. This goes beyond just-in-time notices (A), data safeguarding (B), or review rights (C), providing a more robust mechanism to protect individuals' rights.

A US company has developed an AI system, CrimeBuster 9619, that collects information about incarcerated individuals to help parole boards predict whether someone is likely to commit another crime if released from prison.
When considering expansion into the EU market, this type of technology would:


A. Require the company to register the tool with the EU database.


B. Be subject to approval by the relevant EU authority.


C. Require a detailed conformity assessment.


D. Be banned under the EU AI Act.





C. Require a detailed conformity assessment.

What type of organizational risk is associated with AI's resource-intensive computing demands?


A. People risk.


B. Security risk.


C. Third-party risk.


D. Environmental risk.





D. Environmental risk.

Explanation: AI's resource-intensive computing demands pose significant environmental risks. High-performance computing required for training and deploying AI models often leads to substantial energy consumption, which can result in increased carbon emissions and other environmental impacts. This is particularly relevant given the growing concern over climate change and the environmental footprint of technology. Organizations need to consider these environmental risks when developing AI systems, potentially exploring more energy-efficient methods and renewable energy sources to mitigate the environmental impact.

CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers it employs used open-source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and to develop a process to monitor any use of generative AI, including by teachers and students, going forward.
What is the best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances?


A. To enable students to learn how to manage their time.


B. To enable students to learn about performing research.


C. To enable students to learn about practical applications of AI.


D. To enable students to learn how to use AI as a supportive educational tool.





D. To enable students to learn how to use AI as a supportive educational tool.

According to the GDPR, an individual has the right to have a human confirm or replace an automated decision unless that automated decision:


A. Is authorized with the data subject's explicit consent.


B. Is authorized by applicable EU law and includes suitable safeguards.


C. Is deemed to solely benefit the individual and includes documented legitimate interests.


D. Is necessary for entering into or performing under a contract between the data subject and data controller.





A. Is authorized with the data subject's explicit consent.

What is the primary reason the EU is considering updates to its Product Liability Directive?


A. To increase the minimum warranty level for defective goods.


B. To define new liability exemptions for defective products.


C. To address digital services and connected products.


D. To address free and open-source software.





C. To address digital services and connected products.

Explanation: The primary reason the EU is considering updates to its Product Liability Directive is to address digital services and connected products. The current directive does not adequately cover the complexities and challenges posed by modern digital and connected technologies. By updating the directive, the EU aims to ensure that it remains relevant and effective in addressing the liabilities associated with these advanced products, ensuring consumer protection and fair market practices in the digital age.

Which of the following disclosures is NOT required for an EU organization that developed and deployed a high-risk AI system?


A. The human oversight measures employed.


B. How an individual may contest a decision.


C. The location(s) where data is stored.


D. The fact that an AI system is being used.





C. The location(s) where data is stored.

CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time-consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. It has been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization's operations in a responsible, cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Its concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.
Which of the following measures should XYZ adopt to best mitigate its risk of reputational harm from using the AI tool?


A. Test the AI tool pre- and post-deployment.


B. Ensure the vendor assumes responsibility for all damages.


C. Direct the procurement team to select the most economical AI tool.


D. Continue to require XYZ's hiring personnel to manually screen all applicants.





A. Test the AI tool pre- and post-deployment.

Explanation: To mitigate the risk of reputational harm from using an AI hiring tool, XYZ Corp should rigorously test the AI tool both before and after deployment. Pre-deployment testing ensures the tool works correctly and does not introduce bias or other issues. Post-deployment testing ensures the tool continues to operate as intended and adapts to any changes in data or usage patterns. This approach helps to identify and address potential issues proactively, thereby reducing the risk of reputational harm. Ensuring the vendor assumes responsibility for damages (B) does not address the root cause of potential issues, selecting the most economical tool (C) may compromise quality, and continuing manual screening (D) defeats the purpose of using the AI tool.

CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time-consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. It has been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization's operations in a responsible, cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Its concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.
If XYZ does not deploy and use the AI hiring tool responsibly in the United States, its liability would likely increase under all of the following laws EXCEPT:


A. Anti-discrimination laws.


B. Product liability laws.


C. Accessibility laws.


D. Privacy laws.





B. Product liability laws.

Explanation: In the United States, the use of AI hiring tools must comply with anti-discrimination laws, accessibility laws, and privacy laws to avoid increasing liability. Anti-discrimination laws (A) ensure that hiring practices do not unlawfully discriminate against protected classes.
Accessibility laws (C) require that hiring tools are accessible to all applicants, including those with disabilities. Privacy laws (D) govern the handling of personal data during the hiring process. Product liability laws (B), however, typically apply to the safety and reliability of physical products and would not generally increase liability specifically related to the responsible use of AI hiring tools in the employment context.

CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers it employs used open-source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and to develop a process to monitor any use of generative AI, including by teachers and students, going forward.
Which of the following risks should be of the highest concern to individual teachers using generative AI to ensure students learn the course material?


A. Financial cost.


B. Model accuracy.


C. Technical complexity.


D. Copyright infringement.





B. Model accuracy.

Explanation: The highest concern for individual teachers using generative AI to ensure students learn the course material is model accuracy. Ensuring that the AI-generated content is accurate and relevant to the curriculum is crucial for effective learning. If the AI model produces inaccurate or irrelevant content, it can mislead students and hinder their understanding of the subject matter.
Reference: According to the AIGP Body of Knowledge, one of the core risks posed by AI systems is the accuracy of the data and models used. Ensuring the accuracy of AI-generated content is essential for maintaining the integrity of the educational material and achieving the desired learning outcomes.

What is the key feature of Graphics Processing Units (GPUs) that makes them well-suited to running AI applications?


A. GPUs run many tasks concurrently, resulting in faster processing.


B. GPUs can access memory quickly, resulting in lower latency than CPUs.


C. GPUs can run every task on a computer, making them more robust than CPUs.


D. The number of transistors on GPUs doubles every two years, making the chips smaller and lighter.





A. GPUs run many tasks concurrently, resulting in faster processing.

Explanation: GPUs (Graphics Processing Units) are well-suited to running AI applications because they can run many tasks concurrently, which significantly enhances processing speed. This parallel processing capability makes GPUs ideal for handling the large-scale computations required in AI and deep learning tasks. Reference: AIGP Body of Knowledge, which explains the importance of compute infrastructure in AI applications.
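The contrast above can be illustrated with a minimal sketch. This is not actual GPU code (real GPU work uses frameworks such as CUDA); it is a plain-Python analogy, with hypothetical function names, showing the difference between sequential processing (one element per step, CPU-style) and data parallelism (the same operation applied across all elements, which a GPU performs concurrently):

```python
# Illustrative sketch only -- plain Python, not real GPU code.
# A GPU applies the SAME operation to many data elements at once
# (data parallelism); a sequential CPU loop handles one at a time.

def cpu_style(data, op):
    # Sequential: each element is processed in its own step.
    out = []
    for x in data:
        out.append(op(x))
    return out

def gpu_style(data, op):
    # Conceptual stand-in: on a GPU, each op(x) would run on its own
    # core simultaneously. (Python's map is still sequential; only the
    # programming model -- "one op over all elements" -- is analogous.)
    return list(map(op, data))

weights = [0.5, -1.2, 3.0, 0.0]
print(gpu_style(weights, lambda w: w * 2))  # [1.0, -2.4, 6.0, 0.0]
```

Both functions produce the same result; the point is that the GPU model expresses the computation as one operation over the whole data set, which hardware with thousands of cores can execute in parallel.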

Under the Canadian Artificial Intelligence and Data Act, when must the Minister of Innovation, Science and Industry be notified about a high-impact AI system?


A. When use of the system causes or is likely to cause material harm.


B. When the algorithmic impact assessment has been completed.


C. Upon release of a new version of the system.


D. Upon initial deployment of the system.





D. Upon initial deployment of the system.

Explanation: According to the Canadian Artificial Intelligence and Data Act, the Minister of Innovation, Science and Industry must be notified when a high-impact AI system is initially deployed. This requirement ensures that the authorities are aware of the deployment of significant AI systems and can monitor their impacts and compliance with regulatory standards from the outset. This initial notification is crucial for maintaining oversight and ensuring the responsible use of AI technologies. Reference: AIGP Body of Knowledge, domain on AI laws and standards.



What Makes Our Artificial Intelligence Governance Professional Practice Test So Effective?

Real-World Scenario Mastery: Our AIGP practice exams don't just test definitions. They present you with the same complex, scenario-based problems you'll encounter on the actual exam.

Strategic Weakness Identification: Each practice session reveals exactly where you stand. Discover which domains need more attention before Artificial Intelligence Governance Professional exam day arrives.

Confidence Through Familiarity: There's no substitute for knowing what to expect. When you've worked through our comprehensive AIGP practice question pool covering all topics, the real exam feels like just another practice session.