Which of the following is required for an organization to properly manage its restore process in the event of system failure?
A. IRP
B. DRP
C. RPO
D. SDLC
Explanation: A disaster recovery plan (DRP) is a set of policies and procedures that aim to restore the normal operations of an organization in the event of a system failure, natural disaster, or other emergency. A DRP typically includes the following elements:
A risk assessment that identifies the potential threats and impacts to the organization’s critical assets and processes.
A business impact analysis that prioritizes the recovery of the most essential functions and data.
A recovery strategy that defines the roles and responsibilities of the recovery team, the resources and tools needed, and the steps to follow to restore the system.
A testing and maintenance plan that ensures the DRP is updated and validated regularly.
A DRP is required for an organization to properly manage its restore process in the event of a system failure because it provides a clear, structured framework for recovering from a disaster and minimizing downtime and data loss.
Reference: CompTIA Security+ Study Guide (SY0-701), Chapter 7: Resilience and Recovery, page 325.
A network administrator is working on a project to deploy a load balancer in the company's cloud environment. Which of the following fundamental security requirements does this project fulfill?
A. Privacy
B. Integrity
C. Confidentiality
D. Availability
Explanation: Deploying a load balancer in the company's cloud environment primarily fulfills the fundamental security requirement of availability. A load balancer distributes incoming network traffic across multiple servers, ensuring that no single server becomes overwhelmed and that the service remains available even if some servers fail.
Availability: Ensures that services and resources are accessible when needed, which is directly supported by load balancing.
Privacy: Protects personal and sensitive information from unauthorized access but is not directly related to load balancing.
Integrity: Ensures that data is accurate and has not been tampered with, but load balancing is not primarily focused on data integrity.
Confidentiality: Ensures that information is accessible only to authorized individuals, which is not the primary concern of load balancing.
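As a rough illustration of how load balancing supports availability, the sketch below distributes requests across a pool of servers in round-robin fashion. The server addresses and function names are hypothetical; real cloud load balancers also add health checks, session persistence, and TLS termination.

```python
# Minimal round-robin sketch (illustrative only): traffic is spread across a
# pool so the loss of any single backend does not take the service down.
from itertools import cycle

backends = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]  # hypothetical server pool
rotation = cycle(backends)

def route_request(request_id: int) -> str:
    """Pick the next backend in the rotation for this request."""
    server = next(rotation)
    print(f"request {request_id} -> {server}")
    return server

for i in range(6):
    route_request(i)
```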
Reference: CompTIA Security+ SY0-701 Exam Objectives, Domain 1.2 - Summarize fundamental security concepts (Availability).
Which of the following threat actors is the most likely to use large financial resources to attack critical systems located in other countries?
A. Insider
B. Unskilled attacker
C. Nation-state
D. Hacktivist
Explanation:
Nation-state threat actors are sponsored by governments and have access to significant financial resources, advanced tools, and skilled personnel. Their motivations often include espionage, sabotage, or gaining geopolitical advantages, and they frequently target critical infrastructure (e.g., energy grids, financial systems) in other countries. These attacks are typically well-funded, sophisticated, and persistent.
Why the others are incorrect:
A) Insider:
An insider threat comes from within the organization (e.g., a disgruntled employee) and may have legitimate access, but they generally lack the large-scale financial resources and strategic focus of a nation-state.
B) Unskilled attacker:
Often referred to as "script kiddies," these individuals use pre-existing tools and scripts without deep technical knowledge. They typically do not have substantial financial resources and are more likely to target low-hanging fruit rather than critical systems in other countries.
D) Hacktivist:
Hacktivists are motivated by ideology or social causes and may disrupt services or deface websites, but they usually lack the extensive funding and long-term strategic goals of nation-states. Their actions are often more visible and disruptive rather than stealthy and targeted at critical infrastructure.
Reference:
This aligns with SY0-701 Objective 1.5 ("Compare and contrast different types of threat actors and their motivations"). Nation-states are characterized by their extensive resources, advanced capabilities, and strategic objectives, making them the most likely to attack critical systems in other countries.
An engineer moved to another team and is unable to access the new team's shared folders while still being able to access the shared folders from the former team. After opening a ticket, the engineer discovers that the account was never moved to the new group. Which of the following access controls is most likely causing the lack of access?
A. Role-based
B. Discretionary
C. Time of day
D. Least privilege
Explanation: The most likely access control causing the lack of access is role-based access control (RBAC). In RBAC, access to resources is determined by the roles assigned to users. Since the engineer's account was not moved to the new group's role, the engineer does not have the necessary permissions to access the new team's shared folders.
Role-based access control (RBAC): Assigns permissions based on the user's role within the organization. If the engineer's role does not include the new group's permissions, access will be denied.
Discretionary access control (DAC): Access is based on the discretion of the data owner, but it is not typically related to group membership changes.
Time of day: Restricts access based on the time but does not affect group memberships.
Least privilege: Ensures users have the minimum necessary permissions, but the issue here is about group membership, not the principle of least privilege.
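A minimal sketch of the RBAC behavior described above, with hypothetical role and share names: because the account still maps only to the old role, the permission check for the new team's folder fails.

```python
# Hypothetical RBAC mapping: roles grant access to shares, users map to roles.
ROLE_PERMISSIONS = {
    "network-team": {r"\\fileserver\network-shared"},
    "cloud-team": {r"\\fileserver\cloud-shared"},
}
USER_ROLES = {"engineer01": {"network-team"}}  # account was never moved

def can_access(user: str, share: str) -> bool:
    """Grant access only if one of the user's roles includes the share."""
    return any(share in ROLE_PERMISSIONS[role] for role in USER_ROLES.get(user, set()))

print(can_access("engineer01", r"\\fileserver\network-shared"))  # True  (former team)
print(can_access("engineer01", r"\\fileserver\cloud-shared"))    # False (new team)
```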
Reference: CompTIA Security+ SY0-701 Exam Objectives, Domain 4.6 - Implement and maintain identity and access management (Role-based access control).
A company is implementing a vendor's security tool in the cloud. The security director does not want to manage users and passwords specific to this tool but would rather utilize the company's standard user directory. Which of the following should the company implement?
A. 802.1X
B. SAML
C. RADIUS
D. CHAP
Explanation:
This scenario describes a classic use case for federated identity and Single Sign-On (SSO). The goal is to allow users to log in to a third-party cloud application using their existing corporate credentials, eliminating the need to create and manage separate accounts in the vendor's system.
SAML (Security Assertion Markup Language) is an XML-based open standard designed specifically for this purpose. It allows an identity provider (IdP) – in this case, the company's standard user directory – to pass authentication and authorization credentials to a service provider (SP) – the vendor's cloud security tool.
Here’s how it works:
A user attempts to access the cloud tool.
The cloud tool (SP) redirects the user to the company's IdP.
The user authenticates against the company's user directory (e.g., with their Active Directory username and password).
The IdP generates a SAML assertion (a signed XML document) that vouches for the user's identity and any group memberships/attributes.
The IdP sends this assertion back to the cloud tool.
The cloud tool grants access based on the trusted assertion.
This process means the company never has to manage user accounts within the cloud tool itself, and users don't need to remember another password.
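The sketch below models only the trust relationship in this flow, using an HMAC-signed JSON payload instead of a real SAML assertion (which is a signed XML document exchanged between the IdP and SP). The key, usernames, and group names are hypothetical; production SAML relies on X.509 certificates and established libraries rather than anything hand-rolled.

```python
# Conceptual sketch of the IdP/SP trust model (not real SAML): the IdP signs
# identity claims, and the SP verifies the signature instead of holding passwords.
import hashlib
import hmac
import json

SHARED_TRUST_KEY = b"idp-sp-trust-key"  # hypothetical; SAML uses X.509 certificates

def idp_issue_assertion(username: str, groups: list) -> dict:
    """Identity provider: authenticate the user locally, then sign the claims."""
    claims = {"subject": username, "groups": groups, "issuer": "corp-directory"}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SHARED_TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def sp_accept_assertion(assertion: dict) -> bool:
    """Service provider: trust the claims only if the signature checks out."""
    payload = json.dumps(assertion["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_TRUST_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

token = idp_issue_assertion("jdoe", ["security-tool-admins"])
print(sp_accept_assertion(token))  # True: the vendor tool never stores a password
```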
Why the other options are incorrect:
A. 802.1X:
This is a standard for network access control (NAC). It is used to control which devices are allowed to connect to a wired or wireless network (port-based access control). It typically uses RADIUS for authentication. It is not used for web-based SSO to cloud applications.
C. RADIUS (Remote Authentication Dial-In User Service):
RADIUS is a networking protocol that provides centralized Authentication, Authorization, and Accounting (AAA) management for users who connect to and use a network service. It is commonly used for authenticating network administrators, VPN users, and clients in 802.1X scenarios. While it can centralize authentication, it is not the standard protocol for web-based federation and SSO with cloud service providers; that role belongs to SAML.
D. CHAP (Challenge-Handshake Authentication Protocol):
CHAP is an older authentication protocol used primarily on point-to-point links (e.g., PPP connections). It is not used for modern web application SSO or cloud service integration. It lacks the robust, attribute-rich federation capabilities of SAML.
Exam Objective Reference:
This question relates to Domain 3.0: Architecture and Design, specifically the concepts of identity and access management and implementing federated identity with protocols like SAML. It also touches on cloud security models where integrating on-premises directories with cloud services is a common requirement.
Which of the following is a reason why a forensic specialist would create a plan to preserve data after an incident and prioritize the sequence for performing forensic analysis?
A. Order of volatility
B. Preservation of event logs
C. Chain of custody
D. Compliance with legal hold
Explanation:
Order of Volatility is a fundamental principle in digital forensics that dictates the sequence for collecting evidence based on how quickly that data can be lost or altered. Data that is most volatile (e.g., RAM, CPU cache) must be captured first, as it disappears when power is lost. Less volatile data (e.g., disk drives, backup tapes) can be collected later. Creating a plan based on this order ensures the most fragile and critical evidence is preserved before it is lost, which is the primary reason for prioritizing the sequence of a forensic analysis.
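A minimal sketch of such a plan: rank the evidence sources by volatility and collect in that order. The artifact list and rankings below are illustrative; RFC 3227 and NIST SP 800-86 give the authoritative guidance.

```python
# Illustrative collection plan ordered by volatility (1 = most volatile).
artifacts = [
    ("disk image", 4),
    ("CPU registers and cache", 1),
    ("backup tapes / archival media", 5),
    ("RAM contents and running processes", 2),
    ("temporary files and swap space", 3),
]

collection_plan = sorted(artifacts, key=lambda item: item[1])
for step, (artifact, _rank) in enumerate(collection_plan, start=1):
    print(f"Step {step}: acquire {artifact}")
```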
Why the other options are incorrect:
B. Preservation of event logs:
While preserving event logs is a crucial action within a forensic investigation, it is not the overarching reason for the sequence of the entire analysis. The order of volatility determines that certain logs in memory might need to be captured before those on disk.
C. Chain of custody:
This is a legal process that documents who handled evidence, when, and for what purpose to ensure its integrity and admissibility in court. It is essential for preserving the legal integrity of evidence after it has been collected, but it does not dictate the technical sequence of what data to collect first.
D. Compliance with legal hold:
A legal hold is a process to preserve all relevant information for litigation or investigation. It mandates that data must not be destroyed, but it does not provide guidance on the technical order in which to collect volatile digital evidence during a forensic examination.
Reference:
This question tests the core principles of digital forensics and incident response.
This falls under Domain 4.4: Explain the key aspects of digital forensics of the CompTIA Security+ SY0-701 exam objectives.
The concept of order of volatility is a standard best practice outlined in forensic guides, such as those from NIST (SP 800-86, Guide to Integrating Forensic Techniques into Incident Response) and RFC 3227 (Guidelines for Evidence Collection and Archiving). These guidelines emphasize collecting the most volatile evidence first to prevent data loss.
A manager receives an email that contains a link to receive a refund. After hovering over the link, the manager notices that the domain's URL points to a suspicious link. Which of the following security practices helped the manager to identify the attack?
A. End user training
B. Policy review
C. URL scanning
D. Plain text email
Explanation: The security practice that helped the manager identify the suspicious link is end-user training. Training users to recognize phishing attempts and other social engineering attacks, such as hovering over links to check the actual URL, is a critical component of an organization's security awareness program.
End user training: Educates employees on how to identify and respond to security threats, including suspicious emails and phishing attempts.
Policy review: Ensures that policies are understood and followed but does not directly help in identifying specific attacks.
URL scanning: Automatically checks URLs for threats, but the manager identified the issue manually.
Plain text email: Ensures email content is readable without executing scripts, but the identification in this case was due to user awareness.
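The check a trained user performs by hovering can be sketched as comparing the domain shown in the message with the domain the link actually targets. The domains below are hypothetical.

```python
# Sketch of the "hover check": does the visible link text match where the
# hyperlink really points? (Hypothetical domains.)
from urllib.parse import urlparse

displayed_text = "https://refunds.example-bank.com/claim"
actual_href = "http://example-bank.refund-claims.xyz/claim"  # taken from the anchor tag

def domain_of(url: str) -> str:
    return urlparse(url).hostname or ""

if domain_of(displayed_text) != domain_of(actual_href):
    print(f"Warning: text shows {domain_of(displayed_text)} but the link goes to "
          f"{domain_of(actual_href)} - likely phishing")
```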
Reference: CompTIA Security+ SY0-701 Exam Objectives, Domain 5.6 - Implement security awareness practices (End-user training).
A systems administrator is looking for a low-cost application-hosting solution that is cloud-based. Which of the following meets these requirements?
A. Serverless framework
B. Type 1 hypervisor
C. SD-WAN
D. SDN
Explanation: A serverless framework meets the requirements of a low-cost, cloud-based application-hosting solution. A serverless framework is a type of cloud computing service that allows developers to run applications without managing or provisioning any servers. The cloud provider handles the server-side infrastructure, such as scaling, load balancing, security, and maintenance, and charges the developer only for the resources consumed by the application. A serverless framework enables developers to focus on the application logic and functionality, and it reduces the operational costs and complexity of hosting applications. Some examples of serverless frameworks are AWS Lambda, Azure Functions, and Google Cloud Functions.
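For illustration, a serverless function is typically just a handler the platform invokes on demand. The sketch below uses the AWS Lambda-style Python handler signature; the event fields are hypothetical.

```python
# Minimal serverless sketch (AWS Lambda-style handler): no servers to provision
# or patch, and billing is per invocation.
import json

def lambda_handler(event, context):
    """Entry point the platform calls for each event or HTTP request."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test; in the cloud the platform supplies event and context.
    print(lambda_handler({"name": "SY0-701"}, None))
```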
A Type 1 hypervisor, SD-WAN, and SDN do not meet the requirement for a low-cost, cloud-based application-hosting solution. A Type 1 hypervisor is a software layer that runs directly on the hardware and creates multiple virtual machines that can run different operating systems and applications. A Type 1 hypervisor is not a cloud-based service, but a virtualization technology that can be used to create private or hybrid clouds. A Type 1 hypervisor also requires the developer to manage and provision the servers and the virtual machines, which can increase the operational costs and complexity of hosting applications. Some examples of Type 1 hypervisors are VMware ESXi, Microsoft Hyper-V, and Citrix XenServer.
SD-WAN (Software-Defined Wide Area Network) is a network architecture that uses software to dynamically route traffic across multiple WAN connections, such as broadband, LTE, or MPLS. SD-WAN is not a cloud-based service, but a network optimization technology that can improve the performance, reliability, and security of WAN connections. SD-WAN can be used to connect remote sites or users to cloud-based applications, but it does not host the applications itself. Some examples of SD-WAN vendors are Cisco, VMware, and Fortinet.
SDN (Software-Defined Networking) is a network architecture that decouples the control plane from the data plane, and uses a centralized controller to programmatically manage and configure the network devices and traffic flows. SDN is not a cloud-based service, but a network automation technology that can enhance the scalability, flexibility, and efficiency of the network. SDN can be used to create virtual networks or network functions that can support cloud-based applications, but it does not host the applications itself. Some examples of SDN vendors are OpenFlow, OpenDaylight, and OpenStack.
Reference: CompTIA Security+ SY0-701 Certification Study Guide, pages 264-265; Professor Messer's CompTIA SY0-701 Security+ Training Course, video 3.1 - Cloud and Virtualization, 7:40-10:00.
A security analyst is assessing several company firewalls. Which of the following tools would the analyst most likely use to generate custom packets during the assessment?
A. hping
B. Wireshark
C. PowerShell
D. netstat
Explanation:
A) hping is the correct answer.
hping is a command-line network tool used to generate and manipulate custom TCP/IP packets. It is specifically designed for:
Firewall testing:
Sending crafted packets to test firewall rules and responses.
Network probing:
Assessing how networks and devices handle unusual or malicious packet structures.
Custom packet generation:
Creating packets with specified headers, flags, payloads, etc., to simulate various types of traffic or attacks.
This makes it ideal for a security analyst assessing firewall behavior.
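To keep the examples in Python, the sketch below shows the same idea with Scapy, a packet-crafting library comparable to hping: build a TCP SYN probe with chosen headers and observe how the firewall responds. It assumes root privileges, authorization to test, and a placeholder target address.

```python
# Custom-packet sketch with Scapy (comparable to hping). Run only with
# authorization; requires root. 198.51.100.10 is a documentation address.
from scapy.all import IP, TCP, sr1

probe = IP(dst="198.51.100.10") / TCP(dport=443, flags="S")   # crafted SYN probe
reply = sr1(probe, timeout=2, verbose=False)                  # send, wait for one reply

if reply is None:
    print("No response - the firewall likely dropped (filtered) the packet")
elif reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x12) == 0x12:  # SYN/ACK
    print("SYN/ACK received - the firewall allows traffic to this port")
else:
    print("Reset or other response - port closed or actively rejected")
```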
Why the others are incorrect:
B) Wireshark:
This is a network protocol analyzer (packet sniffer) used for capturing and inspecting network traffic. It is excellent for analysis but cannot generate custom packets.
C) PowerShell:
While PowerShell has cmdlets for network testing (e.g., Test-NetConnection), it lacks the fine-grained control over packet headers and flags that hping provides.
D) netstat:
This tool displays network connections, routing tables, and interface statistics. It is used for monitoring and diagnostics but cannot generate packets.
Reference:
This question tests knowledge of Domain 4.1: Given a scenario, analyze indicators of malicious activity and Domain 4.2: Explain the security implications of proper hardware, software, and data asset management. Tools like hping are essential for proactive security assessments, including firewall testing, as covered in the SY0-701 objectives.
Which of the following is the best way to secure an on-site data center against intrusion from an insider?
A. Bollards
B. Access badge
C. Motion sensor
D. Video surveillance
Explanation: To secure an on-site data center against intrusion from an insider, the best measure is to use an access badge system. Access badges control who can enter restricted areas by verifying their identity and permissions, thereby preventing unauthorized access from insiders.
Access badge: Provides controlled and monitored access to restricted areas, ensuring that only authorized personnel can enter.
Bollards: Provide physical barriers to prevent vehicle access but do not prevent unauthorized personnel entry.
Motion sensor: Detects movement but does not control or restrict access.
Video surveillance: Monitors and records activity but does not physically prevent intrusion.
Reference: CompTIA Security+ SY0-701 Exam Objectives, Domain 1.2 - Summarize fundamental security concepts (Physical security controls).
A security analyst is investigating an application server and discovers that software on the server is behaving abnormally. The software normally runs batch jobs locally and does not generate traffic, but the process is now generating outbound traffic over random high ports. Which of the following vulnerabilities has likely been exploited in this software?
A. Memory injection
B. Race condition
C. Side loading
D. SQL injection
Explanation: Memory injection vulnerabilities allow unauthorized code or commands to be executed within a software program, leading to abnormal behavior such as generating outbound traffic over random high ports. This issue often arises from software that does not properly validate or encode input, which attackers can exploit to inject malicious code.
Reference: CompTIA Security+ SY0-701 course content and official CompTIA study resources.
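As a rough illustration of the investigative step described above, the sketch below flags an application process that holds outbound connections to high ports when it should only run local batch jobs. It assumes the third-party psutil library, and the process name is hypothetical.

```python
# Hunt for the described behavior: a local batch-job process with outbound
# TCP connections to high ports. Assumes psutil; "batchrunner" is hypothetical.
import psutil

SUSPECT_NAME = "batchrunner"
HIGH_PORT = 1024

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
        continue
    try:
        proc = psutil.Process(conn.pid)
    except psutil.NoSuchProcess:
        continue
    if proc.name() == SUSPECT_NAME and conn.raddr.port > HIGH_PORT:
        print(f"Suspicious: {proc.name()} (pid {conn.pid}) -> "
              f"{conn.raddr.ip}:{conn.raddr.port}")
```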
Which of the following is the most likely to be used to document risks, responsible parties, and thresholds?
A. Risk tolerance
B. Risk transfer
C. Risk register
D. Risk analysis
Explanation:
C) Risk register is the correct answer.
A risk register is a documented repository that tracks and details all identified risks within an organization. It typically includes:
Risks:
Descriptions of potential events or threats.
Responsible parties:
Owners assigned to manage or mitigate each risk.
Thresholds:
Risk appetite or tolerance levels (e.g., acceptable levels of impact or likelihood).
Additional details such as risk scores, mitigation strategies, and status updates.
This tool is essential for ongoing risk management and ensures accountability and transparency.
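A minimal sketch of a single risk register entry as a structured record, with hypothetical fields and values; in practice the register usually lives in a GRC tool or spreadsheet, but it captures the same attributes.

```python
# Hypothetical risk register entry: risk, responsible party, and threshold.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    owner: str        # responsible party
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    threshold: int    # score above which the risk must be escalated
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

entry = RiskRegisterEntry(
    risk_id="R-014",
    description="Ransomware outbreak on file servers",
    owner="IT Operations Manager",
    likelihood=3,
    impact=5,
    threshold=12,
    mitigation="Offline backups, EDR, phishing-awareness training",
)
print(f"{entry.risk_id}: score {entry.score}, escalate: {entry.score > entry.threshold}")
```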
Why the others are incorrect:
A) Risk tolerance:
This refers to the level of risk an organization is willing to accept. It is a policy or guideline (often documented in the risk register) but not the comprehensive document itself.
B) Risk transfer:
This is a risk treatment strategy (e.g., purchasing insurance) where risk is shifted to a third party. It is an action taken for specific risks, not a documentation tool.
D) Risk analysis:
This is the process of identifying, assessing, and prioritizing risks. The output of risk analysis is often recorded in a risk register, but the analysis itself is not the document.
Reference:
This question tests knowledge of Domain 5.1: Explain the importance of risk management processes. The risk register is a foundational component of risk management frameworks (e.g., NIST RMF, ISO 27005), serving as a living document to track risks and responses over time.