212-82 Practice Test Questions

161 Questions


Tristan, a professional penetration tester, was recruited by an organization to test its network infrastructure. The organization wanted to understand its current security posture and its strength in defending against external threats. For this purpose, the organization did not provide any information about their IT infrastructure to Tristan. Thus, Tristan initiated zero-knowledge attacks, with no information or assistance from the organization. Which of the following types of penetration testing has Tristan initiated in the above scenario?


A. Black-box testing


B. White-box testing


C. Gray-box testing


D. Translucent-box testing





A.
  Black-box testing

Explanation
The scenario provides clear and direct clues that define a Black-Box testing engagement:

"The organization did not provide any information about their IT infrastructure to Tristan." This is the most critical clue. The tester starts with zero internal knowledge of the target systems.

"Tristan initiated zero-knowledge attacks..." This phrase is synonymous with Black-Box testing. The tester approaches the target just as a real-world external attacker would, with no prior knowledge.

"No information or assistance from the organization." This reinforces the complete lack of insider information, distinguishing it from other testing types where some information is shared.

The goal of this approach is to simulate the actions and discovery process of a genuine malicious hacker, testing the organization's defensive monitoring and response capabilities from an external perspective.
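In practice, that zero-knowledge starting point usually means beginning with external reconnaissance against whatever is publicly reachable. The following is a minimal, hedged sketch of a TCP connect scan in Python; the target address and port range are hypothetical placeholders, and any real scan requires written authorization.

import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the ports that accept a TCP connection (i.e., appear open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Hypothetical in-scope host; replace only with an authorized target.
    print(connect_scan("198.51.100.10", range(20, 1025)))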

Why the Other Options Are Incorrect:

B. White-box testing:
This is the polar opposite of the described scenario. In White-box (or crystal-box) testing, the tester is provided with full knowledge of the infrastructure, including network diagrams, source code, and credentials. This allows for a deep, thorough assessment of internal security controls.

C. Gray-box testing:
This is a hybrid approach. In Gray-box testing, the tester is provided with some limited information, such as low-privilege user credentials or basic network layouts. It simulates an attack from a privileged insider or an attacker who has already gained a basic foothold. The scenario explicitly states "zero-knowledge," which rules this out.

D. Translucent-box testing:
This is not a standard or widely recognized term in the penetration testing industry. The three primary and universally accepted models are Black, White, and Gray-Box.

Reference:
Concept: Penetration Testing Models.

Core Principle: The testing model is defined by the level of knowledge provided to the tester at the outset.

Black-Box: No prior knowledge. Simulates an external attacker.

White-Box: Full knowledge. Simulates an internal audit or a highly informed attacker.

Gray-Box: Partial knowledge. A balanced approach that can be more efficient and focused than Black-Box.

Mark, a security analyst, was tasked with performing threat hunting to detect imminent threats in an organization's network. He generated a hypothesis based on the observations in the initial step and started the threat-hunting process using existing data collected from DNS and proxy logs. Identify the type of threat-hunting method employed by Mark in the above scenario.


A. Entity-driven hunting


B. TTP-driven hunting


C. Data-driven hunting


D. Hybrid hunting





C.
  Data-driven hunting

Explanation
The scenario outlines a specific, iterative process that is characteristic of the Data-Driven hunting method:

"He generated a hypothesis based on the observations in the initial step...": This indicates the process did not start with a pre-defined threat (TTP) or a specific system (Entity). Instead, the hunter first reviewed available data to find anomalies or patterns that were unusual. These "observations" became the basis for forming a hypothesis. For example, he might have noticed an unusually high volume of DNS queries to a unknown domain.

"...and started the threat-hunting process using existing data collected from DNS and proxy logs.": This confirms the method. The hunt is initiated and propelled forward by analyzing large sets of existing data (in this case, DNS and proxy logs) to investigate the newly formed hypothesis. The hunter is sifting through data to prove or disprove their initial suspicion.

In a Data-Driven hunt, the process is: Analyze Available Data -> Notice an Anomaly -> Form a Hypothesis -> Investigate Deeper Using Data.
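As a hedged illustration of that loop, the sketch below summarizes an existing DNS log and surfaces domains with an unusually high query volume, the kind of observation that would seed the hypothesis. The CSV column names and the threshold are assumptions made for the example, not details from the scenario.

from collections import Counter
import csv

def noisy_domains(log_path, threshold=500):
    """Count queries per domain in an existing DNS log and flag high-volume ones.

    Assumes a CSV log with columns: timestamp, client_ip, query_domain.
    """
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["query_domain"]] += 1
    # Anomalies like these become the basis for a data-driven hunting hypothesis.
    return [(domain, n) for domain, n in counts.most_common() if n >= threshold]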

Why the Other Options Are Incorrect

A. Entity-Driven Hunting:
This method starts with a specific entity as the focus, such as a high-value server, a user account belonging to a senior executive, or a critical workstation. The hunter would then investigate all activity related to that specific entity. The scenario does not mention starting with a specific user, host, or application.

B. TTP-Driven (Tactics, Techniques, and Procedures) Hunting:
This method starts with a specific adversary behavior identified in a threat intelligence report (e.g., "Attackers use WMI for lateral movement"). The hunter then searches the environment for evidence of that specific TTP. The scenario began with Mark's own observations from data, not a pre-defined adversary TTP.

D. Hybrid Hunting:
This is a combination of the above methods. While many real-world hunts use hybrid approaches, the scenario very clearly describes a process that begins with data analysis to form a hypothesis, which is the definition of a Data-Driven hunt. There is no indication of a primary TTP or Entity focus.

Reference

Concept:
Threat Hunting Methodologies.

Core Principle:
Data-Driven Hunting is a proactive method where hunters analyze large volumes of data to uncover patterns and anomalies that may indicate malicious activity. It is often used when there is no specific intelligence trigger, allowing hunters to find novel threats or stealthy attacks that evade other detection methods.

You work at a multinational company named Vector Inc. on hypervisors and virtualization software. You are using operating system (OS) virtualization and have to handle the security risks associated with it. How can you mitigate these security risks?


A. All of the above


B. Implement least privilege access control for users managing VMs.


C. Regularly patch and update the hypervisor software for security fixes.


D. Disable security features on virtual machines to improve performance.





A.
  All of the above

Explanation
Operating System (OS) Virtualization, also known as containerization (e.g., using Docker, LXC), shares the host OS kernel among all containers. This introduces specific security risks like container breakout, kernel exploits, and insecure configurations.

Here is the analysis of each option:

B. Implement least privilege access control for users managing VMs.
This is a correct and crucial mitigation. In the context of OS virtualization, this means not running containers as the root user, using user namespaces to map container root to a non-privileged user on the host, and ensuring only authorized administrators can deploy or manage containers. This limits the damage if a container is compromised.

C. Regularly patch and update the hypervisor software for security fixes.
This is correct. While OS virtualization often uses a "container engine" rather than a traditional Type 1/2 hypervisor, the principle is the same: the underlying host kernel is the critical attack surface. Any vulnerability in the host kernel can potentially be exploited to break out of the container's isolation. Regular patching is non-negotiable.

D. Disable security features on virtual machines to improve performance.
This is dangerously incorrect. Security features like SELinux, AppArmor, seccomp profiles, and capabilities are essential for hardening containers and preventing privilege escalation. Disabling them to gain performance is a major security anti-pattern that drastically increases risk.

Conclusion:
Options B and C are valid mitigations, but D is not, so "All of the above" (A) is technically inconsistent.

For exam purposes, A is the keyed answer and should be selected, but understand that in practice D is a dangerous recommendation; the question itself is flawed.

Reference:
Concept: Container (OS Virtualization) Security Hardening.
Core Principle: The security of OS virtualization rests on:
Hardening the Host: Keeping the host kernel patched (Option C).
Applying Least Privilege: Running containers with minimal privileges and access rights (Option B).
Using Security Features: Leveraging, not disabling, kernel security features to enforce isolation.
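A small, hedged sketch of the least-privilege and hardening checks behind options B and C; the configuration dictionary is a simplified, hypothetical representation of a container's run options, not the output of any specific tool.

def audit_container_config(cfg):
    """Flag run options that violate basic least-privilege hardening."""
    findings = []
    if cfg.get("user", "root") == "root":
        findings.append("Container runs as root; use a non-privileged user.")
    if cfg.get("privileged", False):
        findings.append("Privileged mode exposes the host; disable it.")
    if not cfg.get("read_only_rootfs", False):
        findings.append("Root filesystem is writable; prefer read-only.")
    if "seccomp" not in cfg.get("security_opts", []):
        findings.append("No seccomp profile; kernel attack surface is larger.")
    return findings

print(audit_container_config({"user": "root", "privileged": True, "security_opts": []}))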

Grace, an online shopping enthusiast, purchased a smart TV using her debit card. During online payment, Grace's browser redirected her from the e-commerce website to a third-party payment gateway, where she provided her debit card details and the OTP received on her registered mobile phone. After completing the transaction, Grace logged into her online bank account and verified the current balance in her savings account. Identify the state of data being processed between the e-commerce website and the payment gateway in the above scenario.


A. Data in inactive


B. Data in transit


C. Data in use


D. Data at rest





B.
  Data in transit

Explanation
The question asks for the state of data "being processed between the e-commerce website and payment gateway." This describes the moment when data is moving across a network.

Defining "Data in Transit":
Also known as "data in motion," this refers to data that is actively traveling from one location to another over a network. This includes data being sent over the internet, a private network, or between a browser and a server.

Applying it to the Scenario:
When Grace's browser was "redirected... to a third-party payment gateway" and she "provided her debit card details," that sensitive information was being sent from her computer, across the internet, to the payment gateway's servers. This movement of data across the network is the classic definition of data in transit. The other states of data do not apply to this specific part of the process:

Why the Other Options Are Incorrect:

A. Data in inactive:
This is not a standard term for a primary data state. Data is typically categorized as in transit, in use, or at rest.

C. Data in use:
This refers to data that is being actively processed by a computer's CPU and is stored in its volatile memory (RAM). Examples include a file that is currently open and being edited in an application, or data being processed by a database. In the scenario, the data is not being processed in this way during its transmission; it is being packaged, encrypted, and sent over the network.

D. Data at rest:
This refers to data that is not actively moving and is stored on a physical or logical medium. Examples include data saved on a hard drive, in a database, or on a USB drive. In the scenario, the debit card details are not stored yet; they are actively being transmitted to complete a transaction. The balance she verifies later is data at rest in the bank's database, but the question specifically asks about the data between the website and the gateway.

Reference:

Concept:
The Three States of Digital Data.

Core Principle:
Data protection strategies must account for each state:

Data in Transit:
Protected by encryption (e.g., TLS/SSL on websites).

Data at Rest:
Protected by disk encryption, access controls, and database security.

Data in Use:
Protected by memory encryption, access controls, and secure coding practices.
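To make the data-in-transit protection concrete, here is a minimal sketch of submitting payment details over HTTPS with the requests library, so the body is encrypted with TLS while it crosses the network. The gateway URL, field names, and values are placeholders, not the actual service from the scenario.

import requests

# Hypothetical payment-gateway endpoint; HTTPS means the request body is
# protected by TLS while it is in transit between the client and the server.
response = requests.post(
    "https://payments.example.com/charge",
    json={"card_number": "4111111111111111", "otp": "123456"},
    timeout=10,
    verify=True,  # validate the server certificate (the default, shown for emphasis)
)
print(response.status_code)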

Maisie, a new employee at an organization, was given an access badge with access to only the first and third floors of the organizational premises. Maisie tried scanning her access badge against the badge reader at the second-floor entrance but was unsuccessful. Identify the short-range wireless communication technology used by the organization in this scenario.


A. RFID


B. Li-Fi


C. Bluetooth


D. Wi-Fi





A.
  RFID

Explanation
The scenario describes a classic implementation of a physical access control system using badge readers.

Key Clues:

"Access badge":
This is a physical card or fob that an employee carries.

"Scanning her access badge against the badge reader":
This implies a short-range, contactless or near-contact interaction where the badge is presented to a reader.

Access Control Logic:
The system recognized Maisie's badge but denied access to the second floor because her profile was not granted that permission.

Why RFID Fits:
Radio-Frequency Identification (RFID) is the standard technology used in modern access control badges and readers.

The badge contains a passive RFID chip and antenna. When brought near the reader, the reader's electromagnetic field powers the chip, which then transmits its unique identifier back to the reader.

This identifier is sent to a central access control system, which checks the permissions associated with it and sends a "grant" or "deny" signal back to the door lock. This perfectly explains why Maisie was denied access to the second floor.
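The grant/deny decision itself reduces to a permissions lookup keyed on the badge's identifier. A minimal sketch, with badge IDs and floor assignments invented purely for illustration:

# Badge ID -> floors the holder may enter (illustrative data only).
BADGE_PERMISSIONS = {
    "BADGE-1042": {1, 3},   # a profile like Maisie's: first and third floors only
}

def check_access(badge_id, floor):
    """Return True if the central access control system should unlock the door."""
    return floor in BADGE_PERMISSIONS.get(badge_id, set())

print(check_access("BADGE-1042", 2))  # False -> the second-floor reader denies entry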

Why the Other Options Are Incorrect:

B. Li-Fi:
Li-Fi (Light Fidelity) uses visible light for high-speed data communication. It is not used for physical access control badges as it requires a direct line of sight and is not practical for the simple "tap-to-access" function described.

C. Bluetooth:
While Bluetooth is a short-range wireless technology, it is not commonly used for standard corporate access control badges. Bluetooth-based access (e.g., using a smartphone) is emerging but is far less common than RFID. The term "access badge" almost universally implies RFID (or sometimes NFC, a subset of RFID).

D. Wi-Fi:
Wi-Fi is designed for general-purpose wireless networking over a longer range (tens of meters). It is not used for the specific function of a person tapping a badge to a reader to unlock a door. It is not optimized for this low-power, instant-read application.

Reference:

Concept:
Short-Range Wireless Technologies for Physical Access Control.

Core Principle:
RFID is the dominant technology for proximity-based access control systems due to its low cost, reliability, low power requirements (for passive tags), and ability to be embedded in cards and key fobs. The communication range for these systems is typically a few inches, which is ideal for security.

Martin, a network administrator at an organization, received breaching alerts for an application. He identified that a vulnerability in the application allowed attackers to enter malicious input. Martin evaluated the threat severity and extent of damage that could be caused by this vulnerability. He then escalated the issue to the security management team to determine appropriate mitigation strategies. In which of the following threat-modeling steps did Martin evaluate the severity level of the threat?


A. Identify vulnerabilities


B. Application overview


C. Risk and impact analysis


D. Decompose the application





C.
  Risk and impact analysis

Explanation
Threat modeling is a structured process. The scenario walks through a specific part of this process after a vulnerability has been discovered.

Let's map Martin's actions to the threat-modeling steps:
"He identified that a vulnerability in the application allowed attackers to enter malicious input."

This is the step of Identifying Vulnerabilities. He has already found the weakness.

"Martin evaluated the threat severity and extent of damage that could be caused by this vulnerability."

This is the core of Risk and Impact Analysis.
In this step, you don't just look at the vulnerability itself, but you assess its consequences. You ask: "If this is exploited, how bad would it be?" This involves evaluating the severity (e.g., using a scale like Low, Medium, High, or Critical) and the potential impact on confidentiality, integrity, and availability.

"He then escalated the issue to the security management team to determine appropriate mitigation strategies."

This is the natural next step after risk analysis.
Once the severity is understood, it can be prioritized and handed off to the appropriate team for mitigation planning (e.g., applying a patch, implementing a WAF rule, or changing code).

Therefore, the specific action of evaluating the severity and potential damage is the Risk and Impact Analysis step.
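One common way to make that evaluation concrete is a simple scoring model; the DREAD-style sketch below is only an illustration of how a severity label can be derived, and the ratings are hypothetical, not taken from the scenario.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average five 0-10 DREAD factors into a single severity score."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    return sum(factors) / len(factors)

def severity_label(score):
    if score >= 8:
        return "Critical"
    if score >= 6:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# Hypothetical ratings for an input-validation flaw like the one Martin found.
score = dread_score(8, 7, 8, 9, 6)
print(score, severity_label(score))   # 7.6 High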

Why the Other Options Are Incorrect:

A. Identify vulnerabilities:
This step involves finding the weaknesses (e.g., the input validation flaw). Martin did this before he evaluated the severity. The question asks for the step in which he evaluated the severity.

B. Application overview:
This is a high-level, initial step where you define the scope of the application, its assets, and its trust boundaries. It happens much earlier in the process, before diving into specific vulnerabilities.

D. Decompose the application:
This step involves creating data flow diagrams (DFDs) to understand how data moves through the application, identifying entry points, trust boundaries, and assets. This is part of the foundational analysis done before you can effectively identify vulnerabilities and analyze their risk.

Reference:

Concept:
Threat Modeling Methodology (e.g., STRIDE, DREAD, or a general process).

Core Principle:
A standard threat-modeling process flows as follows:

Define Scope/Application Overview

Decompose the Application (create data flows)

Identify Threats and Identify Vulnerabilities

Analyze Risk and Impact (evaluate severity and damage)

Determine Mitigations
Martin's task in the question is clearly Step 4.

As the director of cybersecurity for a prominent financial institution, you oversee the security protocols for a vast array of digital operations. The institution recently transitioned to a new core banking platform that integrates an artificial intelligence (AI)-based fraud detection system. This system monitors real-time transactions, leveraging pattern recognition and behavioral analytics. A week post-transition, you are alerted to abnormal behavior patterns in the AI system. On closer examination, the system is mistakenly flagging genuine transactions as fraudulent, causing a surge in false positives. This not only disrupts the customers' banking experience but also strains the manual review team. Preliminary investigations suggest subtle data poisoning attacks aiming to compromise the AI's training data, skewing its decision-making ability. To safeguard the AI-based fraud detection system and maintain the integrity of your financial data, which of the following steps should be your primary focus?


A. Collaborate with the AI development team to retrain the model using only verified transaction data and implement real-time monitoring to detect data poisoning attempts.


B. Migrate back to the legacy banking platform until the new system is thoroughly vetted and all potential vulnerabilities are addressed.


C. Liaise with third-party cybersecurity firms to conduct an exhaustive penetration test on the entire core banking platform, focusing on potential data breach points.


D. Engage in extensive customer outreach programs, urging them to report any discrepancies in their transaction records, and manually verifying flagged transactions.





A.
  Collaborate with the AI development team to retrain the model using only verified transaction data and implement real-time monitoring to detect data poisoning attempts.

Explanation
The scenario identifies the root cause as a potential data poisoning attack, which is a deliberate attempt to corrupt an AI model by manipulating its training data. The symptom is a high rate of false positives, indicating the model's decision-making logic has been skewed.

Here’s a breakdown of why option A is the correct and primary focus:

Addresses the Root Cause:
The problem is not the platform itself, but the integrity of the AI model. Option A directly targets this by proposing to retrain the model.

Uses Trusted Data:
Retraining with "verified transaction data" (e.g., clean, historical data known to be accurate) is the definitive way to purge the effects of poisoned data and restore the model's accuracy.

Implements Proactive Defense:
Simply retraining the model once is not enough. Implementing real-time monitoring for future poisoning attempts is a critical, forward-looking control. This could involve monitoring for statistical drifts in incoming data or unauthorized access to training datasets.

Balances Security and Business:
This approach aims to fix the core AI system, which will automatically resolve the downstream issues of customer disruption and team strain. It is a targeted and efficient solution.
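A hedged sketch of the two ideas in option A: keep only verified records for retraining, and monitor incoming training data for statistical drift that could indicate poisoning. The record fields and the z-score threshold are assumptions made for illustration.

from statistics import mean, pstdev

def verified_only(records):
    """Keep only transactions that passed out-of-band verification before retraining."""
    return [r for r in records if r.get("verified") is True]

def drift_alert(baseline_amounts, incoming_amounts, z_threshold=3.0):
    """Flag incoming data whose mean amount drifts far from the trusted baseline."""
    mu = mean(baseline_amounts)
    sigma = pstdev(baseline_amounts) or 1.0   # avoid division by zero
    z = abs(mean(incoming_amounts) - mu) / sigma
    return z > z_threshold

# clean_training_set = verified_only(candidate_records)   # then retrain the model
# if drift_alert(baseline, todays_batch): raise an alert for possible poisoning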

Why the Other Options Are Incorrect or Secondary:

B. Migrate back to the legacy banking platform...
This is a reactive and extreme measure. A full platform rollback is highly disruptive, costly, and damaging to the institution's reputation. It abandons the new investment without attempting to fix the identified, specific problem with the AI component.

C. Liaise with third-party cybersecurity firms to conduct an exhaustive penetration test...
While penetration testing is a valuable practice, it is a secondary step in this crisis. The primary issue is already known: the AI model is compromised. The immediate priority is to fix the broken model (Option A). A pen test can be conducted later to find other vulnerabilities, but it does not address the urgent operational failure.

D. Engage in extensive customer outreach programs...
This is purely a symptom management strategy. It does nothing to address the root cause (the poisoned AI). It would increase the workload on the already strained manual review team and is not a sustainable or effective security measure. It reacts to the problem instead of solving it.

Reference:

Concept:
AI Security (Adversarial Machine Learning) and Incident Response Prioritization.

Core Principle:
When a critical system is compromised, the primary focus must be on containing the damage and restoring integrity. In this case, that means purging the poisoned data's influence by retraining the model on a trusted dataset and putting controls in place to prevent a recurrence. This is a more targeted and effective response than rolling back entire platforms or merely managing the symptoms.

Matias, a network security administrator at an organization, was tasked with the implementation of secure wireless network encryption for their network. For this purpose, Matias employed a security solution that uses 256-bit Galois/Counter Mode Protocol (GCMP-256) to maintain the authenticity and confidentiality of data. Identify the type of wireless encryption used by the security solution employed by Matias in the above scenario.


A. WPA2 encryption


B. WPA3 encryption


C. WEP encryption


D. WPA encryption





B.
  WPA3 encryption

Explanation
The key detail that definitively identifies the encryption type is the mention of "256-bit Galois/Counter Mode Protocol (GCMP-256)".

Understanding GCMP-256:
GCMP (Galois/Counter Mode Protocol) is an encryption mode that provides both confidentiality (encryption) and authenticity (integrity) for data. The -256 specifies the use of 256-bit cryptographic strength for both the encryption key and the integrity check. This is a significant step up in security from older 128-bit systems.
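To make the "confidentiality plus authenticity" point concrete, here is a short AES-GCM example with a 256-bit key using Python's cryptography package. It illustrates the Galois/Counter Mode primitive in general and is not the Wi-Fi GCMP frame format itself.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, as in GCMP-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # GCM uses a 96-bit nonce

ciphertext = aesgcm.encrypt(nonce, b"wireless frame payload", b"frame-header-as-aad")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"frame-header-as-aad")
# Any tampering with the ciphertext or the associated data makes decrypt()
# raise InvalidTag -- that is the "authenticity" half of the protection.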

Wireless Encryption Standards:

WEP (Wired Equivalent Privacy):
The original, now completely broken and deprecated Wi-Fi encryption. It uses the weak RC4 stream cipher and has no relation to GCMP.

WPA (Wi-Fi Protected Access):
Introduced as a temporary replacement for WEP. It still primarily used the RC4 cipher with TKIP for integrity, not GCMP.

WPA2 (Wi-Fi Protected Access 2):
The long-standing standard. It uses AES-CCMP (Counter Mode Cipher Block Chaining Message Authentication Code Protocol) with 128-bit keys. This is different from the GCMP-256 specified in the scenario.

WPA3 (Wi-Fi Protected Access 3):
The current generation of Wi-Fi security. A major enhancement in WPA3 is the mandatory support for stronger cryptographic suites for enterprise networks, specifically WPA3-Enterprise 192-bit mode, which uses GCMP-256 for encryption and integrity.

Why the Other Options Are Incorrect:

A. WPA2 encryption:
WPA2 uses AES-CCMP-128, not GCMP-256. The protocol and the key strength are different.

C. WEP encryption:
WEP is an obsolete protocol that uses the insecure RC4 cipher and offers no meaningful security.

D. WPA encryption:
The original WPA standard used TKIP/RC4 and is also considered insecure and obsolete. It does not support modern modes like GCMP.

Reference:

Concept:
Wi-Fi Security Protocols (WPA3).

Core Principle:
The transition from WPA2 to WPA3 introduced stronger cryptographic algorithms to meet modern security demands. The use of GCMP-256 is a specific and defining feature of the more secure modes within the WPA3 standard, particularly in enterprise environments requiring the highest level of security.

A RAT has been set up on one of the machines connected to the network to steal important sensitive corporate documents located on the Desktop of the server. Further investigation revealed the IP address of the server: 20.20.10.26. Initiate a remote connection using the Thief client and determine the number of files present in the folder.
Hint: Thief folder is located at: Z:\CCT-Tools\CCT Module 01 Information Security Threats and Vulnerabilities\Remote Access Trojans (RAT)\Thief of Attacker Machine-1.


A. 2


B. 4


C. 3


D. 5





C.
  3

Nicolas, a computer science student, decided to create a guest OS on his laptop for different lab operations. He adopted a virtualization approach in which the guest OS will not be aware that it is running in a virtualized environment. The virtual machine manager (VMM) will directly interact with the computer hardware, translate commands to binary instructions, and forward them to the host OS. Which of the following virtualization approaches has Nicolas adopted in the above scenario?


A. Hardware-assisted virtualization


B. Full virtualization


C. Hybrid virtualization


D. OS-assisted virtualization





A.
  Hardware-assisted virtualization

Explanation
Hardware-assisted virtualization is a virtualization approach in which the guest OS is not aware that it is running in a virtualized environment. The virtual machine manager (VMM) directly interacts with the computer hardware, translates commands to binary instructions, and forwards them to the host OS. Hardware-assisted virtualization relies on special hardware features in the CPU and chipset (such as Intel VT-x and AMD-V) to create and manage virtual machines efficiently and securely.

Why the Other Options Are Incorrect:

B. Full virtualization:
The guest OS is also unaware that it is virtualized, but the VMM runs entirely in software and emulates all the hardware resources for each virtual machine.

C. Hybrid virtualization:
Combines hardware-assisted and full virtualization techniques to optimize performance and compatibility.

D. OS-assisted virtualization (paravirtualization):
The guest OS is modified to run in a virtualized environment and cooperates with the VMM to access the hardware resources.
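Hardware-assisted virtualization depends on CPU extensions such as Intel VT-x or AMD-V. As a small, Linux-only sketch (an assumption, since the scenario does not name a host OS), you can check whether the host CPU advertises them before creating the guest:

def cpu_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    """Return the hardware virtualization extension the CPU advertises, if any."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

print(cpu_virtualization_support())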

An FTP server has been hosted on one of the machines in the network. Using Cain and Abel, the attacker was able to poison the machine and fetch the FTP credentials used by the admin. You are given the task of validating the credentials that were stolen using Cain and Abel and reading the file flag.txt.


A. white@hat


B. red@hat


C. hat@red


D. blue@hat





C.
  hat@red
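A hedged sketch of the validation step using Python's ftplib. The host, username, and password below are placeholders standing in for whatever was recovered in the lab; the question does not supply them.

from ftplib import FTP

HOST = "10.10.10.10"        # placeholder for the FTP server's address
USER = "administrator"      # placeholder for the sniffed username
PASSWORD = "recovered_pw"   # placeholder for the sniffed password

with FTP() as ftp:
    ftp.connect(HOST, 21, timeout=10)
    ftp.login(USER, PASSWORD)                      # succeeds only if the credentials are valid
    lines = []
    ftp.retrlines("RETR flag.txt", lines.append)   # download and read the flag file
    print("\n".join(lines))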

DigitalVault Corp., a premier financial institution, has recently seen a significant rise in advanced persistent threats (APTs) targeting its mainframe systems. Considering the sensitivity of the data stored, it wants to employ a strategy that deceives attackers into revealing their techniques. As part of its defense strategy, the cybersecurity team is deliberating over deploying a honeypot system. Given the bank's requirements, the team is evaluating different types of honeypots. DigitalVault's primary goal is to gather extensive information about the attackers' methods without putting its actual systems at risk. Which of the following honeypots would BEST serve DigitalVault's intent?


A. High-interaction honeypots, offering a real system's replica for attackers, and observing their every move.


B. Low-interaction honeypots, designed to log basic information such as IP addresses and attack vectors.


C. Research honeypots, aimed at understanding threats to a specific industry and sharing insights with the broader community.


D. Production honeypots, which are part of the organization's active network and collect information about daily attacks.





A.
  High-interaction honeypots, offering a real system's replica for attackers, and observing their every move.
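For contrast with the full system replica used by a high-interaction honeypot, a low-interaction honeypot can be little more than a fake service that records who connects, which is exactly why it yields far less insight into attacker methods. A minimal sketch (the port and banner are arbitrary choices for illustration):

import socket
from datetime import datetime, timezone

def fake_ssh_listener(port=2222):
    """Log connection attempts to a fake service; no real system is exposed."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (ip, src_port) = srv.accept()
            print(f"{datetime.now(timezone.utc).isoformat()} connection from {ip}:{src_port}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # banner only; nothing behind it
            conn.close()

# fake_ssh_listener()   # run on an isolated host to collect basic attacker telemetry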

