CAS-005 Practice Test Questions

103 Questions


A network engineer must ensure that always-on VPN access is enabled but restricted to company assets. Which of the following best describes what the engineer needs to do?


A. Generate device certificates using the specific template settings needed


B. Modify signing certificates in order to support IKE version 2


C. Create a wildcard certificate for connections from public networks


D. Add the VPN hostname as a SAN entry on the root certificate





A.
   Generate device certificates using the specific template settings needed

Explanation:

Why A is Correct:
The requirement has two key parts:

Always-on VPN:
This means the VPN connection is established automatically, typically at device startup or user logon, without user interaction.

Restricted to company assets:
This means only devices that are owned and managed by the company should be able to connect.

The best way to meet both requirements is through device certificate authentication. In this model:

Each company-issued device is provisioned with a unique device certificate issued by the company's own private Public Key Infrastructure (PKI).

The VPN gateway is configured to only accept connection attempts that present a valid certificate from this specific PKI.

The "always-on" feature can be configured to use this certificate for automatic authentication without requiring user input.

This effectively restricts access to devices that possess this certificate (i.e., company assets). Non-company devices will lack the required certificate and be unable to connect.

The network engineer would need to ensure the certificate templates in the PKI are configured correctly to issue certificates with the necessary properties (e.g., client authentication EKU) for this purpose.
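As a rough illustration only (the names, key size, and validity period are hypothetical, and a real PKI issues certificates from a CA template rather than a script), the sketch below uses Python's `cryptography` library to show the key property such a template must produce: the Client Authentication EKU that the VPN gateway validates on every connection attempt.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, ExtendedKeyUsageOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Hypothetical issuing-CA key pair (in practice this lives on the internal PKI, not in a script).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Corp Issuing CA")])

# Device key pair, normally generated on the managed endpoint during MDM/SCEP enrollment.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "LAPTOP-0042.corp.example.com")])

now = datetime.datetime.now(datetime.timezone.utc)
device_cert = (
    x509.CertificateBuilder()
    .subject_name(device_name)
    .issuer_name(ca_name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    # The Client Authentication EKU is what the VPN gateway checks before
    # accepting the device for automatic, always-on connections.
    .add_extension(
        x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]), critical=False
    )
    .sign(ca_key, hashes.SHA256())
)
print(device_cert.subject, list(device_cert.extensions))
```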

Why B is Incorrect:
Modifying signing certificates for IKEv2 relates to the cryptographic negotiation of the VPN tunnel itself. While IKEv2 is a common protocol that supports certificate authentication, this option does not address the core requirement of restricting access to company assets. It is a step in configuring the protocol, not the access control method.

Why C is Incorrect:
A wildcard certificate is used to secure multiple subdomains under a single domain name (e.g., *.example.com). It is used for TLS/SSL encryption for web services, not for client device authentication. Using a wildcard certificate for VPN clients would be a major security anti-pattern, as the same certificate would be on every device, making it impossible to distinguish or revoke individual devices. It violates the principle of unique device identity.

Why D is Incorrect:
Adding the VPN hostname as a Subject Alternative Name (SAN) on the root certificate is incorrect and nonsensical. The root certificate is the top-level, trusted anchor of a PKI hierarchy and should be kept offline and secure. Server certificates (not root certificates) for the VPN gateway itself contain the SAN field to list the DNS names they are valid for (e.g., vpn.company.com). This is important for ensuring clients are connecting to the legitimate server but does nothing to authenticate or restrict the client devices that are connecting.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the practical application of PKI and certificate-based authentication to achieve specific security goals like device compliance and automated access in a zero-trust framework.

A security analyst received a notification from a cloud service provider regarding an attack detected on a web server. The cloud service provider shared the following information about the attack:

• The attack came from inside the network.

• The attacking source IP was from the internal vulnerability scanners.

• The scanner is not configured to target the cloud servers.

Which of the following actions should the security analyst take first?


A. Create an allow list for the vulnerability scanner IPs in order to avoid false positives


B. Configure the scan policy to avoid targeting an out-of-scope host


C. Set network behavior analysis rules


D. Quarantine the scanner sensor to perform a forensic analysis





D.
  Quarantine the scanner sensor to perform a forensic analysis

Explanation:

Why D is Correct:
The scenario describes a highly anomalous and potentially severe situation. The key clues are:

The attack came from an internal IP address assigned to a vulnerability scanner.

The scanner is not configured to target the cloud servers.

This indicates the scanner itself is likely compromised. An attacker has likely gained control of the vulnerability scanner and is using its capabilities, permissions, and internal network position to launch attacks against other systems (in this case, cloud servers).

The first and most critical action is to contain the threat. Quarantining the scanner sensor immediately isolates it from the network, preventing it from causing further damage or being used to pivot to other systems. After containment, a forensic analysis is required to determine how it was compromised, what the attacker did, and what data might have been accessed. This is an incident response priority.
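As a minimal sketch only (assuming a Linux gateway and a hypothetical scanner address; in practice containment is usually driven through the NAC or EDR platform rather than ad hoc commands), isolation can be as simple as inserting drop rules that cut the scanner off while forensics proceeds:

```python
import subprocess

SCANNER_IP = "10.0.5.20"  # hypothetical address of the compromised scanner

# Drop all traffic to and from the scanner at the gateway until the forensic
# analysis is complete; rules are inserted at the top of the FORWARD chain.
for direction_flag in ("-s", "-d"):
    subprocess.run(
        ["iptables", "-I", "FORWARD", direction_flag, SCANNER_IP, "-j", "DROP"],
        check=True,
    )
```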

Why A is Incorrect:
Creating an allow list for the scanner's IP would be a disastrous action. It would effectively tell the security systems to ignore all malicious activity originating from the compromised scanner, allowing the attacker to operate with impunity. This is the opposite of what should be done.

Why B is Incorrect:
Reconfiguring the scan policy is a corrective action for a misconfiguration. The problem is not a misconfiguration; the problem is that the scanner itself is behaving maliciously against its configuration. This implies the scanner is under external control, making reconfiguration irrelevant until the device itself is investigated and secured.

Why C is Incorrect:
Setting network behavior analysis rules is a good proactive measure for detecting anomalies in the future. However, the attack has already been detected. This is a reactive incident response scenario, and the immediate priority is to stop the active attack, not to create new detection rules. This can be done after the compromised system is contained.

Reference:
This question falls under Domain 2.0: Security Operations, specifically focusing on incident response procedures. It tests the understanding of the incident response lifecycle, where the first steps are always to contain and then eradicate a threat. The anomalous behavior of a trusted security tool is a major red flag that indicates a compromise, requiring immediate isolation.

A company isolated its OT systems from other areas of the corporate network. These systems are required to report usage information over the internet to the vendor. Which of the following best reduces the risk of compromise or sabotage? (Select two).


A. Implementing allow lists


B. Monitoring network behavior


C. Encrypting data at rest


D. Performing boot integrity checks


E. Executing daily health checks


F. Implementing a site-to-site IPSec VPN





A.
  Implementing allow lists

F.
   Implementing a site-to-site IPSec VPN

Explanation:
The scenario involves Operational Technology (OT) systems (e.g., industrial control systems, SCADA) that are isolated from the corporate network but must send usage data to an external vendor over the internet. The goal is to reduce the risk of compromise or sabotage.

Why A is Correct (Implementing allow lists):
For OT systems, which often have known and fixed behavior, allow lists (whitelisting) are a highly effective security control.

Network Allow Lists:
At the firewall, configure rules to only allow the OT systems to communicate with the specific vendor IP addresses and ports required for reporting. Block all other outbound and inbound traffic. This drastically reduces the attack surface (a minimal sketch of such rules follows below).

Application/Execution Allow Lists:
On the OT systems themselves, use application allow listing to prevent unauthorized software from executing, which is a key defense against malware.
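To illustrate the network allow-list idea above (the OT subnet, vendor endpoint, and port are all hypothetical), the egress policy can be reduced to a deny-by-default rule set that permits only the reporting traffic:

```python
import subprocess

OT_SUBNET = "10.50.0.0/24"   # hypothetical OT segment
VENDOR_IP = "203.0.113.40"   # hypothetical vendor reporting endpoint
VENDOR_PORT = "443"

rules = [
    # Allow only the OT segment to reach the vendor endpoint on the reporting port.
    ["iptables", "-A", "FORWARD", "-s", OT_SUBNET, "-d", VENDOR_IP,
     "-p", "tcp", "--dport", VENDOR_PORT, "-j", "ACCEPT"],
    # Deny everything else leaving the OT segment.
    ["iptables", "-A", "FORWARD", "-s", OT_SUBNET, "-j", "DROP"],
]

for rule in rules:
    subprocess.run(rule, check=True)
```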

Why F is Correct (Implementing a site-to-site IPSec VPN):
The requirement is to send data "over the internet." Transmitting this data in cleartext would expose it to interception and potentially allow for sabotage (e.g., malicious injection of false commands or data). An IPSec VPN creates an encrypted tunnel between the OT network and the vendor's network.

This ensures the confidentiality and integrity of the data in transit, protecting it from eavesdropping or modification.

It can also provide mutual authentication, ensuring the OT systems are only talking to the legitimate vendor and not an impersonator.

Why the Other Options Are Incorrect:

B. Monitoring network behavior:
While important, this is a detective control, not a preventive one. It can help you discover an attack in progress but does nothing to reduce the risk of the initial compromise or sabotage. Prevention (allow lists, encryption) is prioritized over pure detection in this context.

C. Encrypting data at rest:
This protects data stored on the OT systems. The primary risk described is related to data being transmitted over the internet to the vendor. Data at rest encryption does not address the network transmission risk.

D. Performing boot integrity checks:
This (e.g., using UEFI Secure Boot) ensures that a system boots using only trusted software. It's a great control for preventing persistent low-level malware. However, it does not secure the network pathway to the vendor, which is the explicit vulnerability in the scenario.

E. Executing daily health checks:
This is an operational maintenance task. Like monitoring, it can help identify problems but is not a direct security control that mitigates the risk of external network-based compromise or sabotage during data transmission.

Reference:
This question falls under Domain 1.0: Security Architecture, specifically covering secure network design for specialized environments like OT/ICS. It tests the knowledge of applying fundamental security principles (least privilege via allow lists, securing data in transit via VPNs) to a high-stakes scenario.

Company A acquired Company B and needs to determine how the acquisition will impact the attack surface of the organization as a whole. Which of the following is the best way to achieve this goal? (Select two).

Implementing DLP controls preventing sensitive data from leaving Company B's network


A. Documenting third-party connections used by Company B


B. Reviewing the privacy policies currently adopted by Company B


C. Requiring data sensitivity labeling for all files shared with Company B


D. Forcing a password reset requiring more stringent passwords for users on Company B's network


E. Performing an architectural review of Company B's network





A.
  Documenting third-party connections used by Company B

E.
  Performing an architectural review of Company B's network

Explanation:
The goal is to understand how the acquisition impacts the overall attack surface. The attack surface is the sum of all potential vulnerabilities and entry points an attacker could exploit. Company A needs to discover and assess all the new components Company B is bringing into the organization.

Why E is Correct (Performing an architectural review of Company B's network):
This is the most comprehensive and direct method to understand the new attack surface. An architectural review would involve mapping:

Network segments and trust relationships.

Internet-facing assets (web servers, VPN gateways).

Internal critical servers and databases.

Security control chokepoints (firewalls, IDS/IPS).

Cloud environments and SaaS applications used.

This review provides a complete picture of the technical attack surface being acquired.

Why A is Correct (Documenting third-party connections used by Company B):
Third-party connections (e.g., vendor VPNs, API integrations, supply chain links) are a major and often overlooked part of an organization's attack surface. A breach at a third party can easily become a breach at Company B, and now Company A. Documenting these connections is crucial for understanding:

What external entities have access to the network.

The scope of that access.

The security posture of those third parties.

This reveals the external supply chain and partnership aspect of the attack surface.

Why the Other Options Are Incorrect:

Implementing DLP controls...
This is a remediation or risk mitigation action, not an assessment action. The question asks for the best way to determine the impact on the attack surface (i.e., to assess and discover), not to immediately fix it. You must first understand the surface before you can protect it.

B. Reviewing the privacy policies...
While important for GDPR/CCPA compliance and understanding data handling practices, privacy policies are high-level documents. They do not provide the technical details needed to map specific vulnerabilities, entry points, or network connections that constitute an attack surface.

C. Requiring data sensitivity labeling...
This is another remediation control for data governance and protection (likely to be done after the assessment). It does not help in discovering what the attack surface is; it helps in protecting the data once the landscape is understood.

D. Forcing a password reset...
This is a specific hardening technique for credential security. It addresses one very specific potential vulnerability but does nothing to reveal the entirety of the new network architecture, applications, and third-party connections that Company A is now responsible for.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 4.0: Governance, Risk, and Compliance. It tests the process of security due diligence during a merger & acquisition (M&A), focusing on the critical steps of discovery and assessment to understand risk exposure. The first steps are always to assess the architecture and document connections.

Users must accept the terms presented in a captive portal when connecting to a guest network. Recently, users have reported that they are unable to access the Internet after joining the network. A network engineer observes the following:

• Users should be redirected to the captive portal.

• The captive portal runs TLS 1.2.

• Newer browser versions encounter security errors that cannot be bypassed.

• Certain websites cause unexpected redirects.

Which of the following most likely explains this behavior?


A. The TLS ciphers supported by the captive portal are deprecated


B. Employment of the HSTS setting is proliferating rapidly.


C. Allowed traffic rules are causing the NIPS to drop legitimate traffic


D. An attacker is redirecting supplicants to an evil twin WLAN.





B.
  Employment of the HSTS setting is proliferating rapidly.

Explanation:
The symptoms point directly to a problem with the captive portal's security configuration interacting with modern browser security features:

The Problem:
Users can't access the internet because they aren't reaching the captive portal. Newer browsers show security errors that cannot be bypassed.

The Key Clue:
The captive portal runs TLS 1.2. This is a secure protocol, but the issue isn't the protocol version itself.

The Root Cause:
HTTP Strict Transport Security (HSTS) is a web security policy mechanism that forces a web browser to interact with a website only over secure HTTPS connections. Crucially, it tells browsers to never allow a user to bypass certificate warnings.

A captive portal works by intercepting a client's web requests and redirecting them to the portal login page. To intercept HTTPS requests, the portal must present its own certificate (typically self-signed or issued by a non-public CA) for the site the user actually requested. Browsers with HSTS preloads or cached HSTS policies for those sites will reject that certificate outright, with no option for the user to proceed.

The observation that "certain websites cause unexpected redirects" also fits this pattern: requests to sites without an HSTS policy can still be intercepted and redirected to the portal, while a browser that has an HSTS policy for a site such as example.com will refuse the interception entirely because it cannot verify the portal's certificate.

Why B is Correct:
The widespread adoption and preloading of HSTS by major websites (its "proliferation") is the most likely reason that a previously working captive portal is now failing. Modern browsers are becoming increasingly strict about enforcing HSTS policies, making traditional captive portal techniques obsolete.
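A quick way to see the policy at work (a sketch using the third-party `requests` library; the domain is only an example) is to inspect the Strict-Transport-Security header a site returns. Once a browser has cached or preloaded this policy, it will not let the user click through a certificate warning from a captive portal.

```python
import requests

resp = requests.get("https://example.com", timeout=5)
hsts = resp.headers.get("Strict-Transport-Security")
# Typical value: "max-age=31536000; includeSubDomains; preload"
print(hsts)
```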

Why the Other Options Are Incorrect:

A. The TLS ciphers supported by the captive portal are deprecated:
While deprecated ciphers can cause errors, these errors are usually more descriptive and often can be bypassed by the user. The fact that the errors cannot be bypassed is the critical detail that points to HSTS enforcement, not a weak cipher.

C. Allowed traffic rules are causing the NIPS to drop legitimate traffic:
A Network Intrusion Prevention System (NIPS) dropping traffic could prevent access, but it would not cause security errors in the browser. The browser error indicates a TLS/SSL handshake or certificate trust issue between the client and the portal, not a silent packet drop by a network device.

D. An attacker is redirecting supplicants to an evil twin WLAN:
An evil twin attack could explain redirects and lack of access. However, it would not explain the specific symptom of security errors that cannot be bypassed in newer browsers. An evil twin would likely present a login page that mimics the real one, not a browser-level security error that blocks the page from loading entirely.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 3.0: Security Engineering and Cryptography. It tests the understanding of web security mechanisms (HSTS) and their real-world impact on network services like captive portals, requiring architects to adapt designs to evolving security standards.

A security review revealed that not all of the client proxy traffic is being captured. Which of the following architectural changes best enables the capture of traffic for analysis?


A. Adding an additional proxy server to each segmented VLAN


B. Setting up a reverse proxy for client logging at the gateway


C. Configuring a span port on the perimeter firewall to ingest logs


D. Enabling client device logging and system event auditing





C.
  Configuring a span port on the perimeter firewall to ingest logs

Explanation:

Why C is Correct:
The goal is to capture client proxy traffic that is currently being missed. The most efficient and comprehensive way to capture network traffic for analysis is at a central chokepoint through which all traffic flows.

The perimeter firewall is such a chokepoint, as all traffic between the internal network and the internet must pass through it.

A SPAN (Switched Port Analyzer) port, or mirror port, on a network device (such as a firewall or core switch) is specifically designed for this purpose. It copies all network packets seen on a source port (or entire VLAN) and sends them to a destination port where a monitoring tool (like a packet analyzer, IDS, or SIEM) is connected.

By configuring a SPAN port on the perimeter firewall, the security team can get a complete copy of all inbound and outbound traffic, ensuring no client proxy traffic is missed, regardless of which proxy server it was supposed to go to.
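As a small illustration (the interface name is hypothetical, the third-party `scapy` package is assumed, and sniffing normally requires elevated privileges), a monitoring host plugged into the SPAN destination port sees copies of every packet traversing the mirrored source, whether or not a client went through the proxy:

```python
from scapy.all import sniff

def summarize(packet):
    # Print a one-line summary of each mirrored packet; in practice this feed
    # would go to an IDS, packet broker, or full-packet-capture store.
    print(packet.summary())

# "eth1" is the hypothetical NIC attached to the SPAN/mirror destination port.
sniff(iface="eth1", prn=summarize, store=False, count=100)
```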

Why A is Incorrect:
Adding more proxy servers increases the points of failure and management complexity. If traffic isn't being captured now, it's likely because clients are bypassing the proxy or there's a misconfiguration. Adding more proxies doesn't guarantee all traffic will be forced through them. A SPAN port captures traffic regardless of whether it goes through a proxy or not.

Why B is Incorrect:
A reverse proxy is placed in front of servers (e.g., web servers) to handle incoming requests for them. It is used for load balancing, SSL termination, and security for servers. It is not used for logging outbound client traffic to the internet, which is the function of a forward proxy. This solution is aimed at the wrong direction of traffic flow.

Why D is Incorrect:
Enabling logging on client devices is a host-based solution. While it can provide valuable data, it is:

Highly inefficient:
It requires configuring and collecting logs from every single endpoint.

Less reliable:
Logs can be tampered with if a device is compromised.

Not comprehensive:
It may not capture the full network traffic data needed for deep analysis.

This approach is cumbersome and does not scale as well as a network-based solution like a SPAN port.

Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It tests the knowledge of network monitoring techniques and the appropriate architectural solutions for gaining visibility into network traffic. Using SPAN ports for packet capture is a fundamental method for traffic analysis and intrusion detection.

The identity and access management team is sending logs to the SIEM for continuous monitoring. The deployed log collector is forwarding logs to the SIEM. However, only false positive alerts are being generated. Which of the following is the most likely reason for the inaccurate alerts?


A. The compute resources are insufficient to support the SIEM


B. The SIEM indexes are too large


C. The data is not being properly parsed


D. The retention policy is not properly configured





C.
  The data is not being properly parsed

Explanation:

Why C is Correct:
The core function of a SIEM is to analyze log data to generate accurate alerts. This process relies heavily on parsing, which is the mechanism that takes raw log data and breaks it down into structured, meaningful fields (e.g., extracting the username, source IP, timestamp, and event outcome from an authentication log).

If the data is not being parsed correctly, the SIEM cannot understand the content of the logs.

This misunderstanding leads to the SIEM's correlation rules and analytics engines applying logic to the wrong data fields, resulting in nonsensical or false positive alerts.

For example, if a rule is designed to alert on 10 failed login attempts from a user, but the "username" field is empty due to a parsing error, the rule might trigger on every single failed login event, creating a massive flood of false positives.
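A simplified sketch of that point (the log format, field names, and patterns are hypothetical): if the extraction pattern does not match the raw event, the fields the correlation rule depends on come back empty and the rule triggers on the wrong data.

```python
import re

raw = "2024-05-01T10:15:02Z sshd[991]: Failed password for jsmith from 192.0.2.7 port 51514"

# Correct parser: extracts the fields the correlation rule needs.
good = re.search(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)", raw)
print(good.groupdict())   # {'user': 'jsmith', 'src_ip': '192.0.2.7'}

# Broken parser (wrong pattern for this source): no user or IP is extracted,
# so a "10 failures per user" rule cannot key on the username at all.
bad = re.search(r"authentication failure;\s+user=(?P<user>\S+)", raw)
print(bad)                # None -> fields missing, alerts become noise
```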

Why A is Incorrect:
While insufficient compute resources can cause performance issues (like slow alerting or dropped logs), they do not directly cause the content of the alerts to be inaccurate. Performance issues might delay or prevent alerts, but they don't systematically transform valid events into false positives.

Why B is Incorrect:
Large SIEM indexes are primarily a storage and performance concern. They might make searches slower, but they do not cause the underlying correlation logic to become incorrect and generate false positives. The issue is with data interpretation (parsing), not data volume.

Why D is Incorrect:
A misconfigured retention policy governs how long data is stored in the SIEM. It has no impact on the accuracy of the alerts being generated in real-time. It only affects how far back you can search for historical data.

Reference:
This question falls under Domain 2.0: Security Operations. It tests practical knowledge of SIEM deployment and management, specifically the critical troubleshooting step of verifying that data sources are being properly parsed and normalized. This is a common and fundamental issue when onboarding new log sources to a SIEM.

An organization wants to implement a platform to better identify which specific assets are affected by a given vulnerability. Which of the following components provides the best foundation to achieve this goal?


A. SASE


B. CMDB


C. SBoM


D. SLM





B.
  CMDB

Explanation:

Why B is Correct:
A Configuration Management Database (CMDB) is a centralized repository that acts as a "single source of truth" for an organization's IT assets (hardware, software, and their relationships). Its primary purpose is to provide detailed information about configuration items (CIs), including:

Software versions installed on specific servers and workstations.

Hardware specifications and components.

Ownership and location of assets.

Dependencies between systems.

When a new vulnerability is published (e.g., a specific version of OpenSSL is vulnerable), the security team can query the CMDB to instantly identify all assets that have that specific software version installed. This allows for precise impact assessment and targeted remediation efforts.
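Conceptually (the configuration items and vulnerable version below are invented for illustration), the lookup is just a filter over the asset inventory:

```python
# Hypothetical configuration items as they might be recorded in a CMDB.
configuration_items = [
    {"asset": "web01", "software": "openssl", "version": "1.1.1k", "owner": "ecommerce"},
    {"asset": "web02", "software": "openssl", "version": "3.0.13", "owner": "ecommerce"},
    {"asset": "db01",  "software": "postgresql", "version": "15.4", "owner": "finance"},
]

VULNERABLE = {"openssl": {"1.1.1k"}}  # versions named in a hypothetical advisory

affected = [
    ci for ci in configuration_items
    if ci["version"] in VULNERABLE.get(ci["software"], set())
]
print(affected)  # -> [{'asset': 'web01', ...}] : the precise assets to remediate
```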

Why A is Incorrect:
Secure Access Service Edge (SASE) is a network architecture that combines security and networking capabilities (like SWG, CASB, ZTNA) into a cloud-based service. It is focused on securing access to applications and data for users, not on maintaining an inventory of assets for vulnerability management.

Why C is Incorrect:
A Software Bill of Materials (SBoM) is a nested inventory for a single software application, listing all its components and dependencies. It is excellent for understanding vulnerabilities within a specific application but does not provide an organization-wide view of which assets have that application installed. A CMDB would contain or reference SBoMs for the software installed on its recorded assets.

Why D is Incorrect:
Service Level Management (SLM) is the process of defining, measuring, and managing the quality of IT services against agreed-upon targets with customers (e.g., 99.9% uptime). It is a business and operational process focused on service quality and performance, not a technical database for asset inventory and vulnerability mapping.

Reference:
This question falls under Domain 2.0: Security Operations and Domain 4.0: Governance, Risk, and Compliance. It tests the knowledge of key IT infrastructure components and their application in vulnerability management. The CMDB is a foundational element of IT Service Management (ITSM) frameworks like ITIL and is critical for effective security operations.

Which of the following best explains the importance of determining organization risk appetite when operating with a constrained budget?


A. Risk appetite directly impacts acceptance of high-impact low-likelihood events


B. Organizational risk appetite varies from organization to organization


C. Budgetary pressure drives risk mitigation planning in all companies


D. Risk appetite directly influences which breaches are disclosed publicly





A.
  Risk appetite directly impacts acceptance of high-impact low-likelihood events

Explanation:

Why A is Correct:
Risk appetite defines the amount and type of risk an organization is willing to accept in pursuit of its objectives. When operating with a constrained budget, it is impossible to mitigate all risks. Therefore, the organization must make strategic decisions about where to allocate its limited funds.

High-impact, low-likelihood events (e.g., a major natural disaster, a sophisticated cyberattack) are often extremely expensive to fully mitigate.

A well-defined risk appetite allows leadership to consciously decide to accept certain of these risks because the cost of mitigation outweighs the potential loss, or the likelihood is deemed too remote to justify the investment.

This enables the organization to focus its constrained budget on mitigating higher-likelihood or more severe risks that fall outside its risk appetite, ensuring resources are used most effectively.
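A simple worked example (all figures invented) shows how the comparison typically plays out. Using annualized loss expectancy, ALE = SLE × ARO, a rare but severe event may cost far less to accept than to mitigate:

```python
# Hypothetical figures for a high-impact, low-likelihood event.
single_loss_expectancy = 2_000_000   # estimated loss if the event occurs ($)
annual_rate_of_occurrence = 0.01     # expected once every 100 years

ale = single_loss_expectancy * annual_rate_of_occurrence   # $20,000 per year
mitigation_cost_per_year = 150_000                          # cost of the control ($)

# If the control costs more than the expected annual loss, an organization whose
# risk appetite tolerates the event may accept the risk and spend the budget elsewhere.
accept_risk = mitigation_cost_per_year > ale
print(ale, accept_risk)   # 20000.0 True
```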

Why B is Incorrect:
While it is true that risk appetite varies between organizations, this statement is merely a descriptive fact. It does not explain why determining it is important for budgetary decisions. The question asks for the "importance" of determining it in a specific context (constrained budget), not just a characteristic of it.

Why C is Incorrect:
Budgetary pressure does not "drive risk mitigation planning in all companies"; it constrains it. The entire premise of the question is that the budget is limited, so the organization cannot do everything. Risk appetite is the tool that guides how to plan effectively under that pressure, but the pressure itself is not the explanation for the importance of risk appetite.

Why D is Incorrect:
The decision to publicly disclose a breach is governed by legal, regulatory, and contractual obligations (e.g., laws in all 50 US states, GDPR, SEC rules). While risk appetite might influence an organization's overall cybersecurity posture, it does not "directly influence" breach disclosure decisions in the way legal mandates do. This is a distractor unrelated to the core function of risk appetite in budgetary prioritization.

Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests the practical application of risk management concepts, specifically how a defined risk appetite is used to make informed, strategic decisions about resource allocation when it is impossible to address all risks. This is a key responsibility of senior security leadership.

Configure a scheduled task nightly to save the logs


A. Configure a scheduled task nightly to save the logs


B. Configure event-based triggers to export the logs at a threshold.


C. Configure the SIEM to aggregate the logs


D. Configure a Python script to move the logs into a SQL database.





B.
  Configure event-based triggers to export the logs at a threshold.

Explanation:
The question is incomplete as it does not specify the exact goal or scenario. However, based on the options provided and the context of log management, the most efficient and proactive approach for ensuring critical logs are saved or exported in a timely manner—especially for security monitoring—is to use event-based triggers.

Why B is Correct:
Event-based triggers allow for immediate action when specific conditions are met (e.g., when log entries match a certain pattern, such as a security event like multiple failed login attempts, a privilege escalation attempt, or a known attack signature).

Exporting logs at a threshold ensures that if the number of events exceeds a predefined limit (e.g., 10 failed login attempts in 5 minutes), the logs are automatically exported or alerted upon (a minimal sketch follows the list below). This is crucial for:

Real-time response:
Security teams can immediately investigate and respond to potential threats.

Efficiency:
Avoids storing or processing large volumes of irrelevant logs; only exports logs when necessary.

Proactive monitoring:
Helps in capturing critical events as they occur, rather than waiting for a nightly batch job.
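A minimal sketch of the threshold idea (the event source, window, and export target are all hypothetical): count matching events over a sliding window and export the relevant log slice only when the threshold is crossed.

```python
import time
from collections import deque

THRESHOLD = 10          # failed logins
WINDOW_SECONDS = 300    # 5 minutes
recent_failures = deque()

def on_failed_login(event):
    """Called for each failed-login event by the log pipeline (hypothetical hook)."""
    now = time.time()
    recent_failures.append((now, event))
    # Drop events that have fallen outside the sliding window.
    while recent_failures and now - recent_failures[0][0] > WINDOW_SECONDS:
        recent_failures.popleft()
    if len(recent_failures) >= THRESHOLD:
        export_logs([e for _, e in recent_failures])
        recent_failures.clear()

def export_logs(events):
    # Placeholder: in practice this would push to the SIEM, a ticket, or cold storage.
    print(f"Exporting {len(events)} correlated events for investigation")
```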

Why A is Incorrect:

Scheduled task nightly:
This is a passive approach. Logs are saved only once per day, which means critical events might be missed or overwritten if the log buffer cycles before the scheduled task runs. This delays response and could lead to loss of crucial forensic data.

Why C is Incorrect:

Configuring the SIEM to aggregate logs:
While SIEMs are excellent for aggregating and correlating logs, this option does not explicitly address the need to export or save logs based on specific conditions. Aggregation alone does not ensure that critical logs are preserved or triggered for action.

Why D is Incorrect:
Configuring a Python script to move logs to a SQL database:
This is a custom solution that might work, but it is less efficient and more error-prone compared to built-in event-based triggers. It requires maintenance, debugging, and might not integrate seamlessly with existing log management systems. Moreover, it does not specify when or why the logs are moved—it could be scheduled (like option A) rather than triggered by events.

Conclusion:
For security-sensitive environments, event-based triggers (option B) are the best practice because they enable immediate and conditional export of logs based on real-time events, ensuring rapid response and efficient log management.

Reference:
This aligns with Domain 2.0: Security Operations, particularly log management and real-time monitoring strategies.

An organization is required to:

* Respond to internal and external inquiries in a timely manner.

* Provide transparency.

* Comply with regulatory requirements.

The organization has not experienced any reportable breaches but wants to be prepared if a breach occurs in the future. Which of the following is the best way for the organization to prepare?


A. Outsourcing the handling of necessary regulatory filing to an external consultant


B. Integrating automated response mechanisms into the data subject access request process


C. Developing communication templates that have been vetted by internal and external counsel


D. Conducting lessons-learned activities and integrating observations into the crisis management plan





C.
  Developing communication templates that have been vetted by internal and external counsel

Explanation:
The organization's requirements are to respond timely, provide transparency, and ensure compliance in the event of a breach. The best way to prepare for a potential future breach is to have pre-approved communication plans ready.

Why C is Correct:
Developing communication templates (e.g., breach notifications to regulators, customers, and partners) in advance, and having them vetted by legal experts (internal and external counsel), directly addresses all three requirements:

Timely Response:
Pre-written templates allow the organization to act quickly instead of scrambling to draft communications under pressure during a crisis.

Transparency:
Templates ensure consistent and clear messaging that meets expectations for openness.

Compliance:
Legal vetting ensures the communications satisfy all regulatory requirements (e.g., GDPR, CCPA, HIPAA) for content and timing of notifications.

This is a proactive measure that prepares the organization for efficient and compliant breach response.

Why the Other Options Are Incorrect:

A. Outsourcing regulatory filing to an external consultant:
While consultants can be helpful, outsourcing critical functions like regulatory filing may not ensure timely or transparent response if the consultant is not fully integrated with the organization's operations. It also does not address the need for internal preparedness and may lead to delays if the consultant is not immediately available during a breach.

B. Integrating automated response mechanisms into the data subject access request process:
This focuses on handling individual data subject requests (e.g., "right to be forgotten" requests). While important for privacy compliance, it is not directly related to breach response and communication. Breach response requires broad notifications, not automated handling of individual requests.

D. Conducting lessons-learned activities and integrating observations into the crisis management plan:
Lessons-learned activities are reactive—they occur after an incident. The organization has not experienced any breaches yet, so there are no lessons to learn from. While updating crisis plans is good practice, it is not as directly actionable as having pre-approved communication templates ready for immediate use.

Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests the understanding of incident response preparedness, specifically the importance of pre-planning communications to meet legal and regulatory obligations efficiently during a high-stress event like a data breach.

An organization that performs real-time financial processing is implementing a new backup solution. Given the following business requirements:

* The backup solution must reduce the risk for potential backup compromise

* The backup solution must be resilient to a ransomware attack.

* The time to restore from backups is less important than the backup data integrity

* Multiple copies of production data must be maintained

Which of the following backup strategies best meets these requirements?


A. Creating a secondary, immutable storage array and updating it with live data on a continuous basis


B. Utilizing two connected storage arrays and ensuring the arrays constantly sync


C. Enabling remote journaling on the databases to ensure real-time transactions are mirrored


D. Setting up anti-tampering on the databases to ensure data cannot be changed unintentionally





A.
  Creating a secondary, immutable storage array and updating it with live data on a continuous basis

Explanation:

Let's evaluate how option A meets each business requirement:

Reduce the risk for potential backup compromise:
Immutable storage means the data cannot be altered or deleted for a specified retention period. This prevents attackers (or malware like ransomware) from encrypting, corrupting, or deleting the backups, thus significantly reducing the risk of backup compromise.

Resilient to a ransomware attack:
Since the backup data is immutable, even if production systems are encrypted by ransomware, the backups remain untouched and can be used for restoration.

Time to restore is less important than data integrity:
Continuous updates ensure the backup is always current, but the focus on immutability prioritizes data integrity (ensuring backups are clean and unaltered) over fast restoration (which might be slower due to the immutable nature).

Multiple copies of production data must be maintained:
The immutable storage array serves as a secondary, protected copy of production data. This can be combined with other copies (e.g., on-premises and off-site) to meet the multiple-copies requirement.
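As one concrete, hedged way to implement immutability, object storage with a write-once retention lock can back the secondary copy. The sketch below uses the AWS SDK for Python (`boto3`) against a hypothetical bucket that already has Object Lock enabled; the bucket, key, and retention period are assumptions, not part of the scenario.

```python
import datetime
import boto3

s3 = boto3.client("s3")
retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=90)

# Write a backup object that cannot be altered or deleted before the retention
# date, even by an attacker (or ransomware) holding valid credentials.
with open("prod-db.bak", "rb") as backup_file:
    s3.put_object(
        Bucket="finance-backups-immutable",       # hypothetical bucket with Object Lock enabled
        Key="prod-db/2024-05-01T02-00.bak",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```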

Why the Other Options Are Incorrect:

B) Utilizing two connected storage arrays and ensuring the arrays constantly sync:
While this provides real-time replication and multiple copies, it does not protect against ransomware. If ransomware encrypts production data, the encryption will be immediately synced to the secondary array, corrupting both copies. It lacks immutability.

C) Enabling remote journaling on the databases to ensure real-time transactions are mirrored:
Journaling mirrors transactions in real-time, which is good for data currency but offers no protection against ransomware or malicious alterations. Journaled data can still be encrypted or corrupted if the primary system is compromised.

D) Setting up anti-tampering on the databases to ensure data cannot be changed unintentionally:
Anti-tampering measures (e.g., write-once-read-many or integrity monitoring) might protect the production database to some extent, but they do not address the need for multiple backup copies. Additionally, if ransomware gains privileged access, it could potentially bypass these controls. This option does not focus on backup resilience.

Conclusion:
The immutable backup solution (option A) is the only strategy that effectively addresses all requirements by ensuring backup data cannot be compromised, is resilient to ransomware, prioritizes integrity over restore time, and maintains a secondary copy of production data.

Reference:
This aligns with Domain 2.0: Security Operations and Domain 4.0: Governance, Risk, and Compliance, particularly disaster recovery and backup strategies designed to withstand modern threats like ransomware. Immutable backups are an industry best practice for ensuring data recoverability.

