SY0-701 Practice Test Questions

715 Questions


Which of the following is the final step of the incident response process?


A. Lessons learned


B. Eradication


C. Containment


D. Recovery





A.
  Lessons learned

Explanation:
The modern incident response process is typically cyclical, following a framework such as the NIST SP 800-61 guide, which outlines these key phases:

1. Preparation

2. Detection & Analysis

3. Containment

4. Eradication

5. Recovery

6. Post-Incident Activity (Lessons Learned)

The Lessons Learned phase is the final step. In this phase, the team:

Reviews what happened during the incident.

Identifies what was done well and what could be improved.

Updates the Incident Response Plan (IRP), policies, and procedures based on these findings.

Implements new security controls to prevent a recurrence.

This final step is crucial for closing the loop and improving the organization's security posture for future incidents.

Why not B?
Eradication: This is the step where the root cause of the incident (e.g., malware, threat actor access) is removed from the environment. It occurs before recovery.

Why not C?
Containment: This is an early reactive step focused on limiting the damage of an ongoing incident. It occurs before eradication and recovery.

Why not D?
Recovery: This step involves restoring systems and operations to normal. While it occurs late in the process, it is followed by the final, formal post-incident analysis (Lessons Learned).

Reference:
Domain 4.8: "Explain appropriate incident response activities." The SY0-701 objectives require knowledge of the incident response lifecycle, with the final phase being a post-incident lessons-learned meeting and report to improve future response efforts.

A systems administrator is creating a script that would save time and prevent human error when performing account creation for a large number of end users. Which of the following would be a good use case for this task?


A. Off-the-shelf software


B. Orchestration


C. Baseline


D. Policy enforcement





B.
  Orchestration

Explanation:
Orchestration refers to the automated coordination and management of multiple tasks or systems to streamline complex processes. In this scenario, creating a script to automate account creation for a large number of end users is a perfect use case for orchestration. The script would:

Automate repetitive steps (e.g., user input, assigning permissions, adding to groups).

Ensure consistency and accuracy, reducing human error.

Save significant time compared to manual account creation.

Orchestration tools (e.g., Ansible, Puppet, or custom scripts) are commonly used for such administrative tasks to improve efficiency and reliability.
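As a rough sketch of the idea, the provisioning script below reads a user roster and applies identical rules to every account. The roster contents and the `create_account` helper are hypothetical stand-ins for a real directory or identity-management API:

```python
import csv
import io

def create_account(username, department):
    """Stand-in for a real directory call (e.g., an LDAP or cloud identity
    API); here it simply returns the record that would be provisioned."""
    return {"username": username, "group": f"{department}-users"}

def provision_from_csv(csv_text):
    """Create one account per CSV row, applying the same rules to every
    user so no manual step can be skipped or mistyped."""
    accounts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        accounts.append(create_account(row["username"], row["department"]))
    return accounts

# Hypothetical roster exported from HR.
roster = "username,department\njdoe,sales\nmsmith,hr\n"
print(provision_from_csv(roster))
```

Because every account flows through the same function, group assignment and naming stay consistent across hundreds of users, which is exactly the error-reduction benefit the question describes.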

Why not the others?

A. Off-the-shelf software:
Pre-built software might handle account creation (e.g., identity management tools), but it may not be customizable for specific needs. Orchestration via scripting allows tailored automation.

C. Baseline:
A baseline is a standard configuration or state for systems, not a tool for automating tasks.

D. Policy enforcement:
This ensures compliance with rules (e.g., password policies), but it does not automate the account creation process itself.

Reference:
Domain 4.7: "Explain the importance of automation and orchestration related to secure operations." Automation and orchestration are key for efficiently managing large-scale operations such as user provisioning while maintaining security consistency. The SY0-701 objectives highlight orchestration as a method to reduce errors and enforce policies.

A security manager created new documentation to use in response to various types of security incidents. Which of the following is the next step the manager should take?


A. Set the maximum data retention policy.


B. Securely store the documents on an air-gapped network.


C. Review the documents' data classification policy.


D. Conduct a tabletop exercise with the team.





D.
  Conduct a tabletop exercise with the team.

Explanation: A tabletop exercise is a simulated scenario that tests the effectiveness of a security incident response plan. It involves gathering the relevant stakeholders and walking through the steps of the plan, identifying any gaps or issues that need to be addressed. A tabletop exercise is a good way to validate the documentation created by the security manager and ensure that the team is prepared for various types of security incidents. References: CompTIA Security+ Study Guide: Exam SY0-701, 9th Edition, Chapter 6: Risk Management. CompTIA Security+ Certification Kit: Exam SY0-701, 7th Edition, Chapter 6: Risk Management.

An organization recently started hosting a new service that customers access through a web portal. A security engineer needs to add to the existing security devices a new solution to protect this new service. Which of the following is the engineer most likely to deploy?


A. Layer 4 firewall


B. NGFW


C. WAF


D. UTM





C.
  WAF

Explanation:
A WAF (Web Application Firewall) is specifically designed to protect web applications and services by monitoring, filtering, and blocking HTTP/HTTPS traffic between a web application and the Internet. It is the most appropriate solution for protecting a new web portal that customers access, as it defends against web-based attacks such as SQL injection, cross-site scripting (XSS), and other application-layer vulnerabilities that traditional firewalls might miss.
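The filtering a WAF performs can be sketched as signature matching over request content. The two signatures below are deliberately crude illustrations; production WAFs (e.g., ModSecurity with the OWASP Core Rule Set) use far larger, maintained rule sets:

```python
import re

# Toy signatures: a classic SQL injection probe and a reflected-XSS attempt.
SIGNATURES = [
    re.compile(r"'\s*or\s+'?1'?\s*=\s*'?1", re.IGNORECASE),  # SQLi: ' OR '1'='1
    re.compile(r"<\s*script", re.IGNORECASE),                # XSS: <script ...
]

def inspect(query_string):
    """Return 'block' if any signature matches the request, else 'allow'."""
    for sig in SIGNATURES:
        if sig.search(query_string):
            return "block"
    return "allow"

print(inspect("id=10"))             # benign product lookup
print(inspect("id=10' OR '1'='1"))  # SQL injection probe
```

A Layer 4 firewall would pass both requests, since each arrives on the allowed port over valid TCP; only application-layer inspection can tell them apart.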

Why not A?
Layer 4 firewall: A Layer 4 firewall (traditional firewall) operates at the transport layer (TCP/UDP) and filters traffic based on IP addresses, ports, and protocols. It lacks the deep packet inspection capabilities needed to understand web application traffic and protect against application-layer attacks.

Why not B?
NGFW (Next-Generation Firewall): An NGFW includes additional features beyond traditional firewalls, such as application awareness, intrusion prevention, and deep packet inspection. While it can provide some web application protection, a dedicated WAF is more specialized and effective for securing web portals against sophisticated application-level threats.

Why not D?
UTM (Unified Threat Management): A UTM device combines multiple security features (firewall, IPS, antivirus, etc.) into a single platform. It may include WAF functionality, but it is often less specialized than a standalone WAF. For critical web services, a dedicated WAF is preferred for robust protection.

Reference:
Domain 3.2: "Given a scenario, apply security principles to secure enterprise infrastructure." The SY0-701 objectives emphasize the use of specialized security devices like WAFs to protect web applications. This aligns with best practices for securing customer-facing web portals against common web-based attacks.

A company is developing a critical system for the government and storing project information on a fileshare. Which of the following describes how this data will most likely be classified? (Select two).


A. Private


B. Confidential


C. Public


D. Operational


E. Urgent


F. Restricted





B.
  Confidential

F.
  Restricted

Explanation:
For a critical government system, data classification is typically stringent and based on sensitivity and impact. The project information stored on a fileshare would most likely be classified as:

B. Confidential:
Government projects often involve sensitive information related to national security, defense, or critical infrastructure. "Confidential" is a standard classification tier for data whose unauthorized disclosure could cause damage to national security or government operations. This aligns with the context of a "critical system for the government."

F. Restricted:
"Restricted" denotes data limited to specific, authorized personnel on a strict need-to-know basis. Detailed project information for a critical government system would be tightly access-controlled, making this label a natural fit alongside "Confidential."

Why not the others?

A. Private:
This typically pertains to personal data (e.g., employee or citizen information) protected by privacy laws. While the project might include private data, the overarching classification for government-critical project data is "Confidential" or "Restricted."

C. Public:
Public data is non-sensitive and intended for open access. Critical government system details are not public.

D. Operational:
This describes data used for day-to-day operations. It is a usage category rather than a sensitivity tier and does not convey the protection this project information requires.

E. Urgent:
"Urgent" is not a standard data classification tier; it describes a priority level for actions or communications, not data sensitivity.

Reference:
Domain 3.3: "Compare and contrast concepts and strategies to protect data." The SY0-701 objectives list data classifications (e.g., public, private, confidential, restricted, critical) used in government and enterprise contexts to ensure sensitive information is handled appropriately.

An external vendor recently visited a company's headquarters for a presentation. Following the visit, a member of the hosting team found a file that the external vendor left behind on a server. The file contained detailed architecture information and code snippets. Which of the following data types best describes this file?


A. Government


B. Public


C. Proprietary


D. Critical





C.
  Proprietary

Explanation:

Why C is Correct:
Proprietary data refers to information that is owned by an organization and is central to its business operations, competitive advantage, or unique value. This includes trade secrets, intellectual property, internal processes, source code, and detailed architecture designs. The file described, containing "detailed architecture information and code snippets," is a classic example of proprietary data. It is confidential information that, if disclosed to competitors, could cause significant harm to the company that owns it.

Why A is Incorrect:
Government data is information that is classified or owned by a government entity (e.g., Top Secret, Secret, Confidential). Unless the company in question is a government contractor working on a classified project, this internal architecture and code would not be categorized as government data.

Why B is Incorrect:
Public data is information that has been deliberately released to the public or is intended for public consumption, such as marketing brochures or published annual reports. The sensitive nature of the file's contents clearly indicates it was never meant to be public.

Why D is Incorrect:
While this data is certainly critical to the company, "critical" is a descriptive term for the data's importance rather than a formal data classification type. Data classification schemes typically use labels like Public, Private, Proprietary, Confidential, and Internal. "Proprietary" is the most precise and technically correct classification for this type of sensitive intellectual property.

Reference:
This question falls under Domain 5.0: Governance, Risk, and Compliance (GRC), specifically covering data governance and classification. Understanding how to categorize data based on its sensitivity and value is a fundamental security practice.

Security controls in a data center are being reviewed to ensure data is properly protected and that human life considerations are included. Which of the following best describes how the controls should be set up?


A. Remote access points should fail closed.


B. Logging controls should fail open.


C. Safety controls should fail open.


D. Logical security controls should fail closed.





C.
  Safety controls should fail open.

Explanation:
The principle of "fail-safe" or "fail-secure" is applied differently depending on the type of control. For safety controls, which are designed to protect human life and physical well-being, the default behavior during a failure (e.g., power loss, system malfunction) must be to fail open. This means that in the event of a failure, the control defaults to a state that allows people to escape or remain safe. A classic example is a mantrap or emergency exit door; if the power fails, the door must unlock (fail open) to allow people to exit, rather than trapping them inside a potentially dangerous situation like a fire.

Analysis of Incorrect Options:

A. Remote access points should fail closed:
This is generally correct for security but not directly related to human life. Remote access points (like VPN gateways) should fail closed (deny access) to prevent unauthorized entry if a system failure occurs. However, this prioritizes security over safety and is not the best answer given the explicit requirement for "human life considerations."

B. Logging controls should fail open:
Logging controls are detective, not preventive. There is no universal "fail open" or "fail closed" state for logging. If a logging system fails, it typically stops recording events (which is a failure), but this does not directly impact safety or access. The phrase "fail open" does not logically apply to logging.

D. Logical security controls should fail closed:
This is a correct security principle for logical (technical) controls like firewalls or authentication systems. They should fail closed (deny access) to maintain security in the event of a failure. However, again, this does not address the "human life considerations" highlighted in the question.

Reference:
This question integrates concepts from Domain 1.0: General Security Concepts (security controls) and physical safety principles. The key is understanding the critical distinction between:

Fail-Secure (Fail Closed):
Preferred for security controls (e.g., doors remain locked during power loss to prevent unauthorized access).

Fail-Safe (Fail Open):
Required for safety controls (e.g., doors unlock during power loss to allow evacuation).

This balance between security and life safety is a fundamental aspect of physical security design in data centers and other facilities.
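The two failure defaults can be modeled as a single decision: on power loss, a safety control unlocks and a security control stays locked. This is a minimal illustrative sketch, not a real access-control implementation:

```python
def door_state_on_power_loss(control_type):
    """Model the failure default: safety controls fail open (unlock) so
    people can evacuate; security controls fail closed (stay locked)."""
    if control_type == "safety":      # e.g., emergency exit, fire door
        return "unlocked"             # fail-safe / fail open
    if control_type == "security":    # e.g., server cage, vault door
        return "locked"               # fail-secure / fail closed
    raise ValueError("unknown control type")

print(door_state_on_power_loss("safety"))    # unlocked: evacuation possible
print(door_state_on_power_loss("security"))  # locked: assets stay protected
```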

Which of the following is the first step to take when creating an anomaly detection process?


A. Selecting events


B. Building a baseline


C. Selecting logging options


D. Creating an event log





B.
  Building a baseline

Explanation: The first step in creating an anomaly detection process is building a baseline of normal behavior within the system. This baseline serves as a reference point to identify deviations or anomalies that could indicate a security incident. By understanding what normal activity looks like, security teams can more effectively detect and respond to suspicious behavior.
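A baseline can be as simple as the mean and standard deviation of a normal-behavior metric, with anomalies flagged as large deviations from it. The login counts below are hypothetical sample data:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize normal behavior (e.g., daily login counts) as mean/stdev."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical week of normal daily login counts.
normal_logins = [98, 102, 101, 97, 103, 99, 100]
baseline = build_baseline(normal_logins)

print(is_anomalous(101, baseline))  # within the normal range
print(is_anomalous(450, baseline))  # far outside the baseline
```

Without the baseline step, neither value could be judged: 450 logins is only "anomalous" relative to what normal looks like, which is why the baseline must come first.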

Which of the following methods would most likely be used to identify legacy systems?


A. Bug bounty program


B. Vulnerability scan


C. Package monitoring


D. Dynamic analysis





B.
  Vulnerability scan

Explanation:

The correct answer is B. Vulnerability scan.
A vulnerability scan is an automated, high-level test that proactively scans a network to identify known vulnerabilities, misconfigurations, and missing patches in systems, applications, and network devices.

Identifying legacy systems is a primary function of a vulnerability scanner. These tools work by probing IP addresses and comparing the responses (e.g., open ports, running services, banner information, system responses) against a database of known signatures.

Legacy systems are characterized by outdated operating systems (e.g., Windows XP, Windows 7, old Linux kernels), end-of-life software, and services running outdated protocols. A vulnerability scanner would quickly flag these systems for running unsupported OS versions, having known critical vulnerabilities for which no patch exists, or using insecure ciphers and protocols (e.g., SSLv2, TLS 1.0, SMBv1).

The scan report would provide a clear inventory of these non-compliant, legacy systems, allowing the security team to prioritize them for remediation, isolation, or replacement.
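The signature-matching step a scanner performs can be sketched as comparing a service banner against patterns tied to end-of-life software. The banner strings and pattern table below are made up for illustration; real scanners (Nessus, Qualys, OpenVAS) maintain vast signature databases:

```python
# Hypothetical signatures mapping banner substrings to legacy labels.
LEGACY_PATTERNS = {
    "Windows XP": "Windows 5.1",  # NT 5.1 kernel version string
    "Windows 7": "Windows 6.1",   # NT 6.1 kernel version string
    "SMBv1": "SMB 1.0",
    "TLS 1.0": "TLSv1.0",
}

def flag_legacy(banner):
    """Return the legacy labels whose signature appears in the banner."""
    return [name for name, sig in LEGACY_PATTERNS.items() if sig in banner]

print(flag_legacy("Microsoft-Windows 5.1 SMB 1.0"))  # legacy OS and protocol
print(flag_legacy("OpenSSH_9.6 Ubuntu"))             # nothing flagged
```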

Why the other options are incorrect:

A. Bug bounty program:
A bug bounty program is a crowdsourced initiative where external security researchers are incentivized to find and report vulnerabilities in a company's public-facing applications (e.g., websites, web apps). It is not a method for discovering internal, networked legacy systems. These programs are targeted and scoped, not broad network discovery tools.

C. Package monitoring:
Package monitoring tools track software packages and dependencies on a system, often for the purpose of managing updates or detecting unauthorized software changes. While it could tell you that an individual system has old software installed, it is not an efficient method for discovering and inventorying all legacy systems across an entire network. You would first need to know which systems to point the monitor at.

D. Dynamic analysis:
Dynamic analysis is a security testing method that involves executing code or running software to analyze its behavior for vulnerabilities. It is used primarily on applications (e.g., web apps, binaries) in a sandboxed environment to find flaws like memory leaks or input validation errors. It is not a network discovery tool and is not used to identify legacy operating systems or network devices.

Reference:
This aligns with the purpose of vulnerability scanning as defined in the CompTIA Security+ SY0-701 objectives, particularly under Domain 1.1: Given a scenario, analyze indicators of malicious activity. Part of threat intelligence and analysis is knowing your attack surface, which is impossible without a complete inventory of assets, including legacy systems. Vulnerability management programs, which start with scanning, are the primary method for achieving this. Tools like Nessus, Qualys, and OpenVAS are classic examples of vulnerability scanners that excel at identifying legacy systems and reporting on their associated risks.

An organization maintains intellectual property that it wants to protect. Which of the following concepts would be most beneficial to add to the company's security awareness training program?


A. Insider threat detection


B. Simulated threats


C. Phishing awareness


D. Business continuity planning





A.
  Insider threat detection

Explanation:

Why A is Correct:
Intellectual property (IP) is most vulnerable to threats from within an organization. Insiders (employees, contractors) have legitimate access to sensitive data and are therefore in the best position to steal it, whether maliciously or accidentally. Adding insider threat detection to security awareness training educates employees on:

Recognizing behaviors that may indicate an insider threat (e.g., unauthorized data access, attempts to bypass controls).

Understanding the policies and procedures for protecting IP.

Knowing how to report suspicious activity anonymously.

This focus directly addresses the primary risk to intellectual property by turning the entire workforce into a proactive layer of defense.

Why B is Incorrect:
Simulated threats (like phishing simulations) are a valuable training methodology for teaching employees to recognize attacks. However, it is a technique, not a core concept. The question asks for the most beneficial concept to add. While simulated phishing could be part of training, the overarching need is to address the specific risk of IP theft by insiders.

Why C is Incorrect:
Phishing awareness is critical for defending against external threats that try to trick employees into revealing credentials or installing malware. While important, it is not the most beneficial concept for protecting intellectual property. IP is more often compromised through intentional insider theft, accidental leakage, or poor internal controls than through phishing alone.

Why D is Incorrect:
Business continuity planning (BCP) focuses on maintaining operations during and after a disaster (e.g., natural disaster, cyberattack). It is about availability and recovery, not primarily about protecting the confidentiality of intellectual property from theft or leakage.

Reference:
This question falls under Domain 5.0: Governance, Risk, and Compliance (GRC), specifically covering security awareness and training programs tailored to organizational risks. Protecting intellectual property requires a strong focus on insider risk, making insider threat detection a key training topic.

A company wants to verify that the software the company is deploying came from the vendor the company purchased the software from. Which of the following is the best way for the company to confirm this information?


A. Validate the code signature.


B. Execute the code in a sandbox.


C. Search the executable for ASCII strings.


D. Generate a hash of the files.





A.
  Validate the code signature.

Explanation:
A) Validate the code signature is the correct answer.

Code signing is a process where software vendors digitally sign their software using a private key. The corresponding public key is used to verify the signature. By validating the code signature, the company can:

Authenticate the source:
Confirm the software indeed came from the claimed vendor.

Ensure integrity:
Verify that the software has not been tampered with since it was signed by the vendor.

This provides a direct and reliable method to verify both the origin and integrity of the software.

Why the others are incorrect:

B) Execute the code in a sandbox:
Sandboxing is used to observe the behavior of software in an isolated environment (e.g., to detect malware). It does not verify the source of the software—only how it behaves.

C) Search the executable for ASCII strings:
This might reveal metadata or human-readable text (e.g., vendor names) but is easily spoofed and not a secure method for verification. Attackers can embed false information in malicious software.

D) Generate a hash of the files:
Hashing (e.g., SHA-256) can verify integrity (that the file hasn’t changed) if the company has a trusted hash provided by the vendor. However, it does not authenticate the source. If the company obtains the hash from an untrusted location (e.g., a compromised website), it could be misled. Code signing combines authentication and integrity.

Reference:
This question tests knowledge of Domain 1.4: Explain the importance of using appropriate cryptographic solutions. Code signing is an industry-standard practice for verifying software provenance and is emphasized in the SY0-701 objectives for secure software deployment. It leverages public key infrastructure (PKI) to provide trust.

A small business uses kiosks on the sales floor to display product information for customers. A security team discovers the kiosks use end-of-life operating systems. Which of the following is the security team most likely to document as a security implication of the current architecture?


A. Patch availability


B. Product software compatibility


C. Ease of recovery


D. Cost of replacement





A.
  Patch availability

Explanation:
An end-of-life (EOL) or end-of-service-life (EOSL) operating system no longer receives security patches, updates, or vulnerability fixes from the vendor. This is the most critical security implication because it means any newly discovered vulnerabilities in the OS will remain unpatched, leaving the kiosks permanently exposed to exploits. Attackers often target EOL systems precisely because they know these vulnerabilities will never be fixed.
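The check a security team performs here is simple: compare each system's OS against its vendor support-end date. The lookup table below uses two widely published Microsoft lifecycle dates as examples, but a real assessment would pull dates from the vendor's lifecycle pages:

```python
from datetime import date

# Illustrative support-end dates from vendor lifecycle announcements.
EOL_DATES = {
    "Windows 7": date(2020, 1, 14),
    "Windows 10": date(2025, 10, 14),
}

def is_end_of_life(os_name, today):
    """True when the OS no longer receives vendor security patches."""
    eol = EOL_DATES.get(os_name)
    return eol is not None and today > eol

print(is_end_of_life("Windows 7", date(2024, 6, 1)))   # past EOL: unpatched
print(is_end_of_life("Windows 10", date(2024, 6, 1)))  # still supported
```

Any kiosk flagged by such a check carries vulnerabilities that can never be remediated by patching, which is why patch availability is the implication the team would document.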

Why the others are incorrect:

B) Product software compatibility:
While compatibility might be a concern for functionality, it is not the primary security implication. The question specifically asks for a security implication, and the lack of patches is a direct and severe security risk.

C) Ease of recovery:
This refers to how quickly a system can be restored after a failure. While EOL systems might be harder to recover due to outdated drivers or lack of support, this is an operational concern, not the most direct security implication.

D) Cost of replacement:
This is a financial or business consideration. While upgrading from EOL systems incurs costs, the security team's focus in documentation would be on the risk (e.g., unpatched vulnerabilities), not the financial impact.

Reference:
This aligns with SY0-701 Objective 3.1 ("Compare and contrast security implications of different architecture models"), which lists patch availability and the inability to patch as architecture considerations. Kiosks are a type of specialized system, and using EOL software is a major vulnerability. The security implication of missing patches and the inability to remediate vulnerabilities is a core concept in risk management and is emphasized in frameworks like NIST SP 800-40 (Guide to Enterprise Patch Management Planning).

