Free CAS-005 Practice Test Questions 2026

209 Questions


Last Updated On: 7-Apr-2026


A company's SIEM is continuously reporting false positives and false negatives. The security operations team has implemented configuration changes to troubleshoot possible reporting errors. Which of the following sources of information best supports the required analysis process? (Select two.)


A. Third-party reports and logs


B. Trends


C. Dashboards


D. Alert failures


E. Network traffic summaries


F. Manual review processes





B.
  Trends

D.
  Alert failures

Explanation:
The Security Information and Event Management (SIEM) system is generating both false positives (incorrect alerts) and false negatives (missed detections). The team needs to analyze the root cause of these inaccuracies. The following sources are most critical for this diagnostic process:

Why B (Trends) is Correct:
Analyzing trends in SIEM data over time is essential for identifying patterns that cause false positives/negatives. For example: A trend might show that false positives spike during certain hours (e.g., during backup jobs) or from specific network segments, indicating a need for tuning rules to exclude normal activity.

Trends can reveal whether false negatives are increasing, suggesting a gap in detection coverage or a change in the threat landscape that existing rules don't address. Trends provide historical context to pinpoint when and where the SIEM's reporting accuracy degraded.
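To make this concrete, a trend analysis can start as simply as bucketing triaged alerts by hour of day. The sketch below is hypothetical Python; the alert records and disposition labels are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical alert records: (timestamp, disposition) pairs, where the
# disposition label was assigned during a prior investigation.
alerts = [
    ("2026-04-01T02:15:00", "false_positive"),
    ("2026-04-01T02:40:00", "false_positive"),
    ("2026-04-01T14:05:00", "true_positive"),
    ("2026-04-02T02:10:00", "false_positive"),
]

# Count false positives per hour of day to expose time-based trends
# (e.g., a spike during nightly backup windows).
fp_by_hour = Counter(
    datetime.fromisoformat(ts).hour
    for ts, disposition in alerts
    if disposition == "false_positive"
)

print(fp_by_hour.most_common(1))  # hour with the most false positives
```

A spike concentrated in one hour (here, 02:00) points directly at a rule that needs a tuning exclusion for a scheduled job.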

Why D (Alert failures) is Correct:
Alert failures refer to logs or metrics specifically about the SIEM's own performance—e.g., alerts that were triggered but shouldn't have been (false positives) or events that should have triggered alerts but didn't (false negatives). Analyzing these failures directly helps:

Identify which specific correlation rules are misconfigured or overly broad.

Determine if data sources are not feeding logs correctly (causing false negatives).

Adjust thresholds and logic to reduce noise and improve detection rates.

This is the most direct source of information for troubleshooting SIEM accuracy issues.

Why the Other Options Are Incorrect:

A (Third-party reports and logs):
While useful for external threat intelligence, these don't directly help diagnose internal SIEM configuration errors. They might add context but aren't primary sources for troubleshooting reporting accuracy.

C (Dashboards):
Dashboards visualize data (including trends and alerts) but are not a source of information themselves. They rely on underlying data (like trends and alert failures) to be useful. The team needs raw data for analysis, not just summaries.

E (Network traffic summaries):
These provide insight into network activity but won't directly explain why the SIEM is generating false alerts. The issue likely lies in SIEM rule logic or data parsing, not network traffic patterns.

F (Manual review processes):
Manual reviews are a method for analysis, not a source of information. The team needs data sources (like trends and alert failures) to conduct these reviews effectively.

Reference:
This aligns with Domain 2.0: Security Operations, specifically SIEM management and tuning. Effective troubleshooting requires analyzing historical trends and direct alert failures to refine detection rules and improve accuracy.

A company wants to use IoT devices to manage and monitor thermostats at all facilities. The thermostats must receive vendor security updates and limit access to other devices within the organization. Which of the following best addresses the company's requirements?


A. Only allowing Internet access to a set of specific domains


B. Operating IoT devices on a separate network with no access to other devices internally


C. Only allowing operation for IoT devices during a specified time window


D. Configuring IoT devices to always allow automatic updates





B.
   Operating IoT devices on a separate network with no access to other devices internally

Explanation:

The requirements are:

Receive vendor security updates:
This requires internet access.

Limit access to other devices within the organization:
This requires strict network segmentation to prevent the IoT devices from communicating with internal corporate systems.

Option B best addresses both requirements:

Separate network:
Isolating IoT devices on a dedicated network (e.g., a VLAN) prevents them from accessing other internal devices, reducing the risk of lateral movement if compromised.

No internal access:
This explicitly blocks communication with other organizational devices, meeting the "limit access" requirement.

Internet access:
The separate network can still be configured to allow outbound internet access (e.g., to specific vendor domains for updates), fulfilling the update requirement without exposing the internal network.
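As an illustration of this design, the segmentation policy reduces to a simple allow/deny decision per flow. The VLAN ranges and vendor update address in this Python sketch are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical address plan: IoT VLAN and corporate LAN are disjoint ranges,
# and the vendor update server address is invented for illustration.
IOT_VLAN = ip_network("10.20.0.0/16")
CORPORATE_LAN = ip_network("10.0.0.0/16")
VENDOR_UPDATE_HOSTS = {"203.0.113.10"}

def iot_traffic_allowed(src, dst):
    """Allow IoT devices outbound to the vendor update host only;
    block all IoT traffic toward the corporate LAN."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    if src_ip not in IOT_VLAN:
        return True   # non-IoT traffic: out of scope for this policy
    if dst_ip in CORPORATE_LAN:
        return False  # no lateral access to internal devices
    return dst in VENDOR_UPDATE_HOSTS  # internet limited to vendor updates
```

In practice this logic would live in firewall or VLAN ACL rules between segments, not in application code; the sketch only captures the decision table.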

Why the other options are insufficient:

A) Only allowing Internet access to a set of specific domains:
This might allow updates but does not inherently prevent the IoT devices from communicating with other internal devices. It lacks network segmentation.

C) Only allowing operation during a specified time window:
This does not address security updates or access control. It is an operational constraint, not a security measure.

D) Configuring IoT devices to always allow automatic updates:
This ensures updates are applied but does nothing to limit access to other internal devices. It ignores the segmentation requirement.

Reference:
This aligns with Domain 1.0: Security Architecture, specifically network segmentation strategies for IoT security. Isolating IoT devices is a best practice to mitigate risks while allowing necessary functionality.

Company A and Company D are merging. Company A's compliance reports indicate branch protections are not in place. A security analyst needs to ensure that potential threats to the software development life cycle are addressed. Which of the following should the analyst consider?


A. If developers are unable to promote to production


B. If DAST code is being stored to a single code repository


C. If DAST scans are routinely scheduled


D. If role-based training is deployed





A.
   If developers are unable to promote to production

Explanation:
The key concern is that branch protections are not in place. Branch protection is a critical security control in version control systems (like Git) that enforces rules for collaborative development and prevents unauthorized or risky changes from being merged into critical branches (e.g., main or production). Without branch protections, the software development lifecycle (SDLC) is vulnerable to threats such as:

Developers pushing directly to production without review.

Unreviewed code being merged, potentially introducing vulnerabilities.

Bypassing of required checks (e.g., testing, code scans).

The analyst should check if developers are unable to promote to production without going through proper controls (e.g., pull requests, approvals, automated tests). This directly addresses the lack of branch protections by ensuring that:

Code cannot be merged without peer review.

Required status checks (e.g., SAST/DAST scans) must pass before merging.

Only authorized personnel can approve changes to protected branches.

This mitigates threats like insider risks, accidental vulnerabilities, and compliance violations.
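For example, on a platform such as GitHub these controls map onto branch-protection settings along the following lines. This is a sketch using GitHub's branch-protection field names; the status-check names are hypothetical, and the values are not a recommended policy:

```json
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["sast-scan", "unit-tests"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 2
  },
  "restrictions": null
}
```

With `enforce_admins` set, even administrators cannot push directly to the protected branch, which is exactly the "unable to promote to production without controls" condition the analyst should verify.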

Why the other options are incorrect:

B) If DAST code is being stored to a single code repository:
Storing DAST code in a single repository is not inherently a threat; it might even be a best practice for consistency. This does not relate to branch protections or SDLC threats.

C) If DAST scans are routinely scheduled:
While DAST scans are important for security, scheduling them does not address the lack of branch protections. Branch protections enforce gateways for code promotion (e.g., requiring scans to pass before merge), not just the existence of scans.

D) If role-based training is deployed:
Training is valuable for awareness but does not enforce technical controls like branch protections. It is a administrative measure, not a direct mitigation for the technical gap identified.

Reference:
This aligns with Domain 2.0: Security Operations and Domain 4.0: Governance, Risk, and Compliance, focusing on secure SDLC practices. Branch protection is a key DevSecOps control to ensure code quality and security before deployment.

A hospital provides tablets to its medical staff to enable them to more quickly access and edit patients' charts. The hospital wants to ensure that if a tablet is identified as lost or stolen and a remote command is issued, the risk of data loss can be mitigated within seconds. The tablets are configured as follows to meet hospital policy:

• Full disk encryption is enabled

• "Always On" corporate VPN is enabled

• eFuse-backed keystore is enabled.

• Wi-Fi 6 is configured with SAE.

• Location services are disabled.

• Application allow list is configured.

Which of the following is the best way to meet the requirements?


A. Revoking the user certificates used for VPN and Wi-Fi access


B. Performing cryptographic obfuscation


C. Using geolocation to find the device


D. Configuring the application allow list to only permit emergency calls


E. Returning the device's solid-state media to zero





A.
  Revoking the user certificates used for VPN and Wi-Fi access

Explanation:
The hospital's goal is to mitigate the risk of data loss within seconds if a tablet is lost or stolen. The tablets are configured with several security controls, but the most immediate and effective action to prevent data access is to cut off the device's ability to connect to hospital resources and decrypt data.

Why A is Correct:
The tablets use certificates for authentication:

"Always On" corporate VPN:
This likely uses certificate-based authentication to establish a secure connection to the hospital network.

Wi-Fi 6 with SAE (Simultaneous Authentication of Equals):
SAE (used in WPA3) secures the wireless handshake; enterprise Wi-Fi deployments typically also rely on certificate-based 802.1X authentication (e.g., EAP-TLS) for network access.

By revoking the user certificates (via the certificate authority/Certificate Revocation List), the tablet immediately loses:

VPN access:
It can no longer connect to the hospital network to access or transmit patient data.

Wi-Fi access:
It may be unable to join any trusted network (including the hospital Wi-Fi), limiting its internet connectivity.

This action effectively isolates the device and prevents data exfiltration or unauthorized access to hospital systems, mitigating data loss risk within seconds.

Why Other Options Are Incorrect:

B) Performing cryptographic obfuscation:
This is a proactive data protection technique, not a reactive measure. It doesn't work "within seconds" and isn't applicable for a lost device.

C) Using geolocation to find the device:
Location services are disabled (per the configuration), so this isn't feasible. Even if enabled, finding the device doesn't mitigate data loss; it only helps with recovery.

D) Configuring the application allow list:
This is a pre-existing configuration (already in place). It cannot be dynamically changed to "only permit emergency calls" in seconds for a lost device, and it doesn't prevent data decryption or network access.

E) Returning the device's solid-state media to zero:
This is a remote wipe command. While effective, it may not occur "within seconds" due to network latency or the device being offline. Additionally, full disk encryption (FDE) is already enabled, so the data is already protected at rest. Revoking certificates is faster and ensures the device cannot decrypt data or connect to networks even if the wipe is delayed.

Reference:
This aligns with Domain 3.0: Security Engineering and Cryptography (certificate management) and Domain 2.0: Security Operations (incident response). Revoking certificates is a near-instantaneous action to invalidate trust and access, making it the best choice for immediate risk mitigation.

A systems engineer is configuring a system baseline for servers that will provide email services. As part of the architecture design, the engineer needs to improve performance of the systems by using an access vector cache, facilitating mandatory access control and protecting against:

• Unauthorized reading and modification of data and programs

• Bypassing application security mechanisms

• Privilege escalation

• Interference with other processes

Which of the following is the most appropriate for the engineer to deploy?


A. SELinux


B. Privileged access management


C. Self-encrypting disks


D. NIPS





A.
  SELinux

Explanation:

The requirements specify the need for:

Improving performance using an access vector cache:
This is a feature of Security-Enhanced Linux (SELinux) that caches access decisions to reduce overhead.

Facilitating mandatory access control (MAC):
SELinux implements MAC, which enforces security policies based on labels (e.g., types, roles) beyond traditional discretionary access control (DAC).

Protecting against:

Unauthorized reading/modification of data and programs:
SELinux confines processes to least privilege, preventing unauthorized access.

Bypassing application security mechanisms:
SELinux policies restrict applications to their intended behavior.

Privilege escalation:
SELinux limits the ability of processes to gain higher privileges.

Interference with other processes:
Isolation via SELinux domains prevents processes from affecting each other.

Why SELinux (A) is the most appropriate:

SELinux directly provides all the required features:
MAC, access vector cache (for performance), and protection against the listed threats through its policy enforcement.

Why other options are incorrect:

B) Privileged access management (PAM):
PAM focuses on managing and monitoring privileged accounts (e.g., sudo, admin logins) but does not provide system-wide MAC or an access vector cache.

C) Self-encrypting disks (SED):
SED protects data at rest via encryption but does not enforce process isolation, prevent privilege escalation, or use an access vector cache.

D) Network intrusion prevention system (NIPS):
NIPS monitors network traffic for threats but operates at the network layer, not the system level. It cannot enforce MAC or protect against local process interference.

Reference:
This aligns with Domain 1.0: Security Architecture (system hardening) and Domain 3.0: Security Engineering and Cryptography (access controls). SELinux is a standard for enforcing least privilege and MAC on Linux systems, making it ideal for securing email servers.

Emails that the marketing department is sending to customers are going to the customers' spam folders. The security team is investigating the issue and discovers that the certificates used by the email server were reissued, but DNS records had not been updated. Which of the following should the security team update in order to fix this issue? (Select three.)


A. DMARC


B. SPF


C. DKIM


D. DNSSEC


E. SASC


F. SAN


G. SOA


H. MX





A.
  DMARC

B.
  SPF

C.
  DKIM

Explanation:
The issue is that marketing emails are being marked as spam due to a certificate reissue and outdated DNS records. This strongly indicates a problem with email authentication mechanisms that rely on DNS records. The core protocols for email authentication are:

SPF (Sender Policy Framework):
Uses a DNS TXT record to list all IP addresses authorized to send email for a domain. If the email server's IP changed or the record is incorrect, SPF validation will fail.

DKIM (DomainKeys Identified Mail):
Uses a DNS TXT record to publish a public key for verifying an email's digital signature. If the email server's DKIM signing key was reissued (e.g., a new certificate/key pair generated), the corresponding DKIM DNS record must be updated with the new public key. This is the most likely direct cause given the certificate reissue.

DMARC (Domain-based Message Authentication, Reporting, and Conformance):
Uses a DNS TXT record to specify how receivers should handle emails that fail SPF or DKIM (e.g., quarantine or reject). It also relies on the correct configuration of SPF and DKIM. Updating DMARC policies might be necessary if the failure is due to a strict policy (e.g., p=reject).

Why these three?
The certificate reissue likely affected the DKIM signing key. If the DKIM DNS record wasn't updated with the new public key, emails will fail DKIM validation. This, in turn, may cause DMARC failure if the policy requires DKIM alignment. SPF might also need updating if the mail server's IP or hostname changed.

Why the others are incorrect:

D. DNSSEC:
Used to cryptographically sign DNS records for authenticity, but it is not directly related to email authentication. It wouldn't cause emails to go to spam if disabled or misconfigured.

E. SASC:
Not a standard DNS or email protocol (likely a distractor).

F. SAN (Subject Alternative Name):
Part of an X.509 certificate, not a DNS record. The certificate was reissued, but the question focuses on updating DNS records.

G. SOA (Start of Authority):
A DNS record with administrative information about the zone (e.g., primary nameserver, serial number). Updating it wouldn't fix email authentication.

H. MX (Mail Exchanger):
Directs email to the correct mail server. If this was wrong, emails wouldn't be delivered at all (not just to spam).

Reference:
This falls under Domain 3.0: Security Engineering and Cryptography (email security) and Domain 2.0: Security Operations (troubleshooting). Proper configuration of SPF, DKIM, and DMARC in DNS is critical for email deliverability and preventing spam classification.

A developer needs to improve the cryptographic strength of a password-storage component in a web application without completely replacing the crypto-module. Which of the following is the most appropriate technique?


A. Key splitting


B. Key escrow


C. Key rotation


D. Key encryption


E. Key stretching





E.
  Key stretching

Explanation:

Why E is Correct:
Key stretching is a technique specifically designed to strengthen weak passwords, such as those entered by users. It works by taking a password and passing it through a computationally intensive algorithm (like PBKDF2, bcrypt, or Argon2) that requires a significant amount of time and resources to compute. This dramatically increases the effort required for an attacker to perform a brute-force or dictionary attack, as each guess must go through the same slow process. This can be implemented on top of the existing hashing mechanism (e.g., moving from a single SHA-256 hash to PBKDF2 with SHA-256 and a high iteration count) without necessarily replacing the entire underlying cryptographic module.

Why A is Incorrect:
Key splitting involves dividing a cryptographic key into multiple parts (shards) that are distributed to different entities. This is used for securing keys and enforcing control, not for strengthening the cryptographic process of password derivation.

Why B is Incorrect:
Key escrow is the process of depositing a cryptographic key with a trusted third party to be stored for emergency access (e.g., by law enforcement). This is a governance and recovery mechanism, not a technique for improving cryptographic strength.

Why C is Incorrect:
Key rotation is the practice of retiring an encryption key and replacing it with a new one at regular intervals. This is a vital practice for limiting the blast radius of a potential key compromise but does not inherently make the algorithm used to derive a key from a password any stronger. The password-to-key process could still be weak and vulnerable to attack.

Why D is Incorrect:
Key encryption (or key wrapping) is the process of encrypting one key with another key. This is used for secure key storage and transmission. While the stored password hashes should be encrypted at rest, this is a separate control. The core weakness of simple password hashing is the speed of the hashing operation, which key encryption does not address.

Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It specifically addresses cryptographic techniques and their appropriate application, focusing on secure password storage mechanisms as outlined in best practices and standards like NIST SP 800-63B.

A security engineer performed a code scan that resulted in many false positives. The security engineer must find a solution that improves the quality of scanning results before application deployment. Which of the following is the best solution?


A. Limiting the tool to a specific coding language and tuning the rule set


B. Configuring branch protection rules and dependency checks


C. Using an application vulnerability scanner to identify coding flaws in production


D. Performing updates on code libraries before code development





A.
  Limiting the tool to a specific coding language and tuning the rule set

Explanation:

Why A is Correct:
This is the most direct and effective solution to the specific problem of "many false positives" from a code scan. Static Application Security Testing (SAST) tools are notorious for generating false positives, which can overwhelm developers and lead to real issues being ignored.

Limiting to a specific language:
SAST tools perform best when they are optimized for a particular language's syntax and common pitfalls. Running a tool configured for multiple languages against a codebase written primarily in one language can trigger irrelevant rules and generate false positives.

Tuning the rule set:
This is the critical step for reducing false positives. It involves customizing the tool's rules to match the application's specific framework, libraries, and architecture. This can include:

Disabling rules that are not relevant to the project.

Adjusting the severity of certain findings.

Creating custom rules to ignore known benign patterns specific to the codebase.

Providing the tool with paths to custom libraries so it can accurately track data flow.

Tuning transforms a generic scanner into a precise tool tailored to the environment, dramatically improving the signal-to-noise ratio.

Why B is Incorrect:
Configuring branch protection rules (e.g., requiring pull requests and approvals before merging) and dependency checks (SCA - Software Composition Analysis) are excellent DevOps security practices. However, they address different problems. Branch protection enforces process, and dependency checks find vulnerabilities in third-party libraries. Neither practice directly reduces the false positive rate of a SAST tool scanning custom code for flaws.

Why C is Incorrect:
Using an application vulnerability scanner (DAST - Dynamic Application Security Testing) in production is a reactive measure. It finds vulnerabilities in a running application after it has been deployed. The question is about improving the scan results before deployment. Furthermore, running a DAST tool does not fix the root cause of the poor results from the SAST (code scan) tool; it simply uses a different, later-stage tool to find a different class of issues.

Why D is Incorrect:
Updating code libraries is a crucial maintenance activity for patching known vulnerabilities in dependencies (addressed by SCA tools). However, it has no bearing on the accuracy of a SAST tool scanning the company's own custom code for logical flaws and coding errors. The false positives are generated by the tool's analysis of the code structure, not by the version of the libraries used during development.

Reference:
This question falls under Domain 2.0: Security Operations, specifically concerning security testing in the development lifecycle and the integration and management of tools like SAST to improve software security. It also touches on the analytical skill of selecting the correct mitigation for a given problem.

Audit findings indicate several user endpoints are not utilizing full disk encryption. During the remediation process, a compliance analyst reviews the testing details for the endpoints and notes the endpoint device configuration does not support full disk encryption. Which of the following is the most likely reason the device must be replaced?


A. The HSM is outdated and no longer supported by the manufacturer


B. The vTPM was not properly initialized and is corrupt.


C. The HSM is vulnerable to common exploits and a firmware upgrade is needed


D. The motherboard was not configured with a TPM from the OEM supplier.


E. The HSM does not support sealing storage





D.
  The motherboard was not configured with a TPM from the OEM supplier.

Explanation:

Why D is Correct:
Full disk encryption (FDE) solutions like BitLocker (Windows) typically have a strict hardware requirement: a Trusted Platform Module (TPM), a dedicated cryptographic processor chip soldered onto the computer's motherboard. (Apple's FileVault relies on an analogous hardware root of trust, the Secure Enclave.)

If the audit finding states that the device configuration "does not support full disk encryption," the most fundamental and common reason is that the motherboard lacks this specific hardware component entirely.
Older computers or some very low-cost models were manufactured and sold without a TPM chip. Since the TPM is a physical hardware requirement, it cannot be added via software. The only remediation for such a device is to replace it with hardware that meets the compliance requirement (i.e., a motherboard with a TPM).

Why A, C, and E are Incorrect (HSM):
These options incorrectly refer to an HSM (Hardware Security Module). An HSM is a high-performance, external, or PCIe-based network device used to manage and protect cryptographic keys for servers, certificate authorities, and critical infrastructure. HSMs are not used for standard endpoint full-disk encryption. Endpoints use a TPM, which is a much smaller, cheaper, and less powerful cryptographic co-processor designed specifically for this purpose. Confusing TPM and HSM is a common distractor in exam questions.

Why B is Incorrect (vTPM):
A vTPM (virtual TPM) is a software-based implementation of a TPM used in virtual machines to provide the same functionality. The question is about physical "user endpoints" (e.g., laptops, desktops). A vTPM is not relevant to the physical hardware of an endpoint device. Furthermore, if a vTPM were corrupt, it could potentially be re-initialized or re-provisioned through software or hypervisor management, not necessarily requiring a full hardware replacement.

Reference:
This question falls under Domain 1.0: Security Architecture and Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of hardware security capabilities (TPM vs. HSM) and the practical implications of enforcing compliance policies that have specific hardware requirements. Understanding the "why" behind a control is crucial for a CASP+.

Which of the following AI concerns is most adequately addressed by input sanitation?


A. Model inversion


B. Prompt Injection


C. Data poisoning


D. Non-explainable model





B.
  Prompt Injection

Explanation:

Why B is Correct:
Prompt injection is a vulnerability specific to AI systems that use text-based prompts, particularly Large Language Models (LLMs). It occurs when an attacker crafts a malicious input (a "prompt") that tricks the model into ignoring its original instructions, bypassing safety filters, or revealing sensitive information. Input sanitation is a primary defense against this attack. It involves rigorously validating, filtering, and escaping all user-provided input before it is passed to the AI model. This helps to neutralize or render ineffective any malicious instructions embedded within the user's input, thereby preventing the model from being hijacked.

Why A is Incorrect:
Model inversion is an attack where an adversary uses the model's outputs (e.g., API responses) to reverse-engineer and infer sensitive details about the training data. This is addressed by controls on the output side (e.g., differential privacy, output filtering, limiting API response details) and model design, not by sanitizing the input prompts.

Why C is Incorrect:
Data poisoning is an attack on the training phase of an AI model. An attacker injects malicious or corrupted data into the training set to compromise the model's performance, integrity, or behavior after deployment. Defending against this requires securing the data collection and curation pipeline, using robust training techniques, and validating training data—measures that are completely separate from sanitizing runtime user input.

Why D is Incorrect:
A non-explainable model (often called a "black box" model) is a characteristic of certain complex AI algorithms where it is difficult for humans to understand why a specific decision was made. This is an inherent challenge of the model's architecture (e.g., deep neural networks) and is addressed by the field of Explainable AI (XAI), which involves using different models, tools, and techniques to interpret them. Input sanitation has no bearing on making a model's decisions more explainable.

Reference:
This question falls under the intersection of Domain 1.0: Security Architecture and emerging technologies. It tests the understanding of specific threats to AI systems and the appropriate security controls to mitigate them. Input validation/sanitation is a classic application security control that finds a new critical application in protecting AI systems from prompt injection attacks.

A security architect for a global organization with a distributed workforce recently received funding to deploy a CASB solution. Which of the following most likely explains the choice to use a proxy-based CASB?


A. The capability to block unapproved applications and services is possible


B. Privacy compliance obligations are bypassed when using a user-based deployment.


C. Protecting and regularly rotating API secret keys requires a significant time commitment


D. Corporate devices cannot receive certificates when not connected to on-premises devices





A.
  The capability to block unapproved applications and services is possible

Explanation:
A Cloud Access Security Broker (CASB) is a security policy enforcement point that sits between users and cloud service providers. There are two primary deployment modes: API-based and proxy-based.

Why A is Correct:
A proxy-based CASB operates in-line, intercepting traffic in real-time between the user and the cloud application. This allows it to enforce granular access controls and policies immediately. Specifically, it can:

Block unapproved applications and services in real-time by denying connections to unauthorized cloud services.

Inspect and control data transfers (e.g., prevent uploads to personal cloud storage).

Enforce encryption and data loss prevention (DLP) policies on the fly.

This real-time blocking capability is a key advantage of proxy-based CASBs over API-based solutions, which are more focused on post-hoc monitoring and remediation.

Why B is Incorrect:
Privacy compliance obligations (e.g., GDPR, CCPA) are never "bypassed" by any deployment model. In fact, a CASB helps enforce compliance. User-based deployments (e.g., forward proxy) still must comply with privacy laws, and the deployment choice does not negate these obligations.

Why C is Incorrect:
While managing API keys for an API-based CASB can be administratively burdensome, this is not the primary reason for choosing a proxy-based CASB. The key differentiator is the need for real-time enforcement (like blocking) rather than just visibility and retrospective controls.

Why D is Incorrect:
Certificates for authentication (e.g., for SSL inspection) can be deployed to corporate devices remotely using mobile device management (MDM) or similar tools, regardless of whether they are connected on-premises. This is not a significant barrier and is not the main driver for selecting a proxy-based CASB.

Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of CASB deployment modes and their respective strengths. Proxy-based CASBs are chosen when real-time control and blocking are required, which aligns with the need to enforce policies for a distributed workforce accessing cloud services.

A security engineer is given the following requirements:

• An endpoint must only execute internally signed applications.

• Administrator accounts cannot install unauthorized software.

• Attempts to run unauthorized software must be logged

Which of the following best meets these requirements?


A. Maintaining appropriate account access through directory management and controls


B. Implementing a CSPM platform to monitor updates being pushed to applications


C. Deploying an EDR solution to monitor and respond to software installation attempts


D. Configuring application control with blocked hashes and enterprise-trusted root certificates





D.
  Configuring application control with blocked hashes and enterprise-trusted root certificates

Explanation:

The requirements are:

Only execute internally signed applications:
This requires whitelisting based on code signing.

Prevent administrator accounts from installing unauthorized software: This requires enforcement that overrides even admin privileges.

Log attempts to run unauthorized software:
This requires detailed auditing of execution attempts.

Option D best meets all these requirements:
Application control (e.g., Windows AppLocker or SRP) can be configured to:

Allow only applications signed with enterprise-trusted root certificates (e.g., your organization's internal code signing certificate). This ensures only internally signed software runs.

Block hashes of specific unauthorized applications if needed.

Enforce policies that apply to all users, including administrators, preventing them from running unauthorized installers or executables.

Log all attempts to execute blocked software for auditing and alerting.

Why the other options are incorrect:

A) Maintaining account access through directory management:
While directory controls (e.g., limiting admin privileges) can help, they are not foolproof. Administrators may still have privileges, and this approach does not directly enforce code signing or log execution attempts.

B) Implementing a CSPM (Cloud Security Posture Management):
CSPM is for securing cloud infrastructure (e.g., misconfigurations in AWS/Azure). It does not control endpoint software execution or logging.

C) Deploying an EDR (Endpoint Detection and Response):
EDR is great for monitoring and responding to threats, but it is primarily detective rather than preventive. It might log installation attempts but cannot inherently prevent administrators from running unauthorized software or enforce code signing policies. Application control (option D) is the preventive measure.

Reference:
This aligns with Domain 1.0: Security Architecture (endpoint security) and Domain 2.0: Security Operations (policy enforcement). Application control with code signing is a best practice for locking down endpoints and meeting strict compliance requirements.



What Makes Our CompTIA SecurityX Certification Practice Test So Effective?

Real-World Scenario Mastery: Our CAS-005 practice exams don't just test definitions. They present you with the same complex, scenario-based problems you'll encounter on the actual exam.

Strategic Weakness Identification: Each practice session reveals exactly where you stand. Discover which domains need more attention before CompTIA SecurityX Certification exam day arrives.

Confidence Through Familiarity: There's no substitute for knowing what to expect. When you've worked through our comprehensive CAS-005 practice exam questions pool covering all topics, the real exam feels like just another practice session.