SPLK-2002 Practice Test Questions

160 Questions


Which of the following is an indexer clustering requirement?


A. Must use shared storage.


B. Must reside on a dedicated rack.


C. Must have at least three members.


D. Must share the same license pool.





D.
  Must share the same license pool.

Explanation:
In Splunk indexer clustering, all cluster members must share the same license pool and be connected to the same license master. This ensures consistent enforcement of license usage across the cluster and prevents discrepancies in ingestion volume reporting.

A. Must use shared storage → Incorrect
Splunk indexer clustering does not require shared storage. Each indexer maintains its own local storage, and data replication across peers ensures redundancy.

B. Must reside on a dedicated rack → Incorrect
There is no requirement that indexers must be on a dedicated rack. This is a distractor; Splunk clustering is software-based and can run across varied hardware setups.

C. Must have at least three members → Incorrect
While three members are recommended for high availability (to satisfy replication and search factors), it is not a strict requirement. Clusters can technically run with two peers, though this is not ideal.

D. Must share the same license pool → Correct
This is a hard requirement. All cluster members must point to the same license master and share the same license pool for proper license enforcement.
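Pointing a cluster member at the shared license master can be done from the CLI. A minimal sketch, with a hypothetical hostname (on recent Splunk versions the object is named licenser-localpeer and the flag is -manager_uri):

```shell
# On each indexer cluster peer: point the instance at the shared
# license master so all members draw from the same license pool.
splunk edit licenser-localslave -master_uri https://lm.example.com:8089

# Confirm which license master this instance reports to
splunk list licenser-localslave
```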

Reference
Splunk Docs – About Splunk licenses

As a best practice, where should the internal licensing logs be stored?


A. Indexing layer.


B. License server.


C. Deployment layer.


D. Search head layer.





A.
  Indexing layer.

Explanation
Splunk's best practice is to forward internal logs, including licensing logs, from the license master and other non-indexing instances to the indexing layer. Storing these logs on the indexers makes license usage searchable alongside the rest of the deployment's internal data, protects the data through the indexers' redundancy, and keeps non-indexing instances from accumulating local data.

A. Indexing layer → Correct Forwarding internal licensing logs to the indexers is the documented best practice. It centralizes the data, makes it searchable from any search head, and benefits from indexer-level availability.

B. License server → Incorrect The license master generates licensing logs and enforces compliance, but keeping the logs only on the license master leaves them unsearchable from the rest of the deployment and unprotected against failure. Best practice is to forward them to the indexers.

C. Deployment layer → Incorrect The deployment server manages app distribution to clients. It has no role in license log storage.

D. Search head layer → Incorrect Search heads coordinate searches; they should not store data locally. Like the license master, search heads should forward their internal logs to the indexing layer.
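Forwarding a non-indexing instance's internal logs to the indexing layer is configured in outputs.conf. A minimal sketch, with hypothetical indexer hostnames:

```ini
# outputs.conf on the license master (or search head):
# send all data, including _internal and _introspection logs,
# to the indexing layer instead of indexing it locally.
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```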

Reference
Splunk Docs – About Splunk licenses

Search dashboards in the Monitoring Console indicate that the distributed deployment is approaching its capacity. Which of the following options will provide the most search performance improvement?


A. Replace the indexer storage to solid state drives (SSD).


B. Add more search heads and redistribute users based on the search type.


C. Look for slow searches and reschedule them to run during an off-peak time.


D. Add more search peers and make sure forwarders distribute data evenly across all indexers.





D.
  Add more search peers and make sure forwarders distribute data evenly across all indexers.

Explanation:
In a distributed deployment, the indexers (search peers) do the heavy lifting of search execution: each peer searches its own data in parallel, and the search head merges the results. When the Monitoring Console shows the deployment approaching capacity, the option that provides the most search performance improvement is adding more search peers, which increases search parallelism and total capacity. Just as important is making sure forwarders distribute data evenly across all indexers (for example, with load-balanced outputs), so that every peer contributes equally to each search; uneven distribution leaves some peers overloaded while others sit idle.

Why the other options are less effective

A. Replace the indexer storage with SSD
→ Helpful, but not the most improvement. SSDs improve search I/O on each indexer, but they do not add search capacity or parallelism. If the deployment is at capacity, faster disks alone will not remove the bottleneck.

B. Add more search heads and redistribute users
→ Not effective. Search heads only coordinate searches and merge results; the bulk of search work is done by the indexers. Adding search heads does not help if the search peers are already at capacity.

C. Look for slow searches and reschedule them to off-peak times
→ A workaround, not a capacity improvement. Rescheduling spreads the existing load over time but adds no capacity. It can relieve peak contention temporarily, yet the deployment remains near its limits as usage grows.

Reference
Splunk Docs – Troubleshoot search performance
Splunk Docs – Monitoring Console overview

What does the deployer do in a Search Head Cluster (SHC)? (Select all that apply.)


A. Distributes apps to SHC members.


B. Bootstraps a clean Splunk install for a SHC.


C. Distributes non-search related and manual configuration file changes.


D. Distributes runtime knowledge object changes made by users across the SHC.





A.
  Distributes apps to SHC members.

C.
  Distributes non-search related and manual configuration file changes.

Explanation:
In a Search Head Cluster (SHC), the deployer is a Splunk instance used to distribute apps and certain configuration updates to all search head cluster members. It is not part of the cluster itself but acts as a management node.

✅ Correct Answers

A. Distributes apps to SHC members → Correct
The deployer pushes apps (including configurations, lookups, and other non-runtime knowledge objects) to all search head cluster members. This ensures consistency across the cluster.

C. Distributes non-search related and manual configuration file changes → Correct
The deployer is also the supported mechanism for distributing non-replicated, non-runtime configuration changes, such as settings an administrator edits by hand. Runtime replication only covers changes made through search-time activity, so manual file changes must be pushed through the deployer to stay consistent across members.

❌ Incorrect

B. Bootstraps a clean Splunk install for a SHC → Incorrect The deployer does not bootstrap or install Splunk. Each search head must be installed and configured independently before joining the cluster.

D. Distributes runtime knowledge object changes made by users across the SHC → Incorrect Runtime knowledge objects (such as saved searches, dashboards, and alerts created by users) are replicated automatically across the SHC by the search head cluster captain, not the deployer.

Reference
Splunk Docs – About the deployer
Splunk Docs – Deploy apps and configurations to a search head cluster
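A typical deployer push uses the apply shcluster-bundle command. A sketch, with hypothetical hostname and credentials:

```shell
# On the deployer: stage apps under $SPLUNK_HOME/etc/shcluster/apps/,
# then push the bundle to the cluster. The target can be any member;
# it hands the bundle to the captain, which distributes it to all members.
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```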

Which command is used for thawing the archive bucket?


A. Splunk collect


B. Splunk convert


C. Splunk rebuild


D. Splunk dbinspect





C.
  Splunk rebuild

Explanation:
When a Splunk bucket has been archived (moved to frozen storage), administrators may need to thaw it back into Splunk for searching. The correct command for this process is splunk rebuild. This command is executed after moving the archived bucket into the thawed directory of the index. Running splunk rebuild regenerates the bucket’s metadata and ensures Splunk recognizes it as a valid searchable bucket. Without this step, the thawed bucket will not be searchable because Splunk requires rebuilt metadata structures to properly index and query the data.
This is the only supported command for thawing archive buckets. It is explicitly documented in Splunk’s official administration guides, making it the authoritative answer for exam scenarios and operational troubleshooting.
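The thawing workflow, assuming a frozen copy of a bucket for the main index, default paths, and a hypothetical bucket name, looks roughly like this:

```shell
# 1. Copy the archived bucket into the index's thaweddb directory
cp -r /archive/main/db_1549227600_1549141200_5 \
      $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/

# 2. Rebuild the bucket's metadata so Splunk can search it again
splunk rebuild $SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1549227600_1549141200_5

# 3. Restart the indexer so it picks up the thawed bucket
splunk restart
```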

Why the other options are incorrect

A. splunk collect → Incorrect
This command is used to collect search results into a summary index. It is designed for accelerating searches by storing pre-computed results, not for thawing archived buckets. While useful for performance optimization, it has no role in bucket management or thawing.

B. splunk convert → Incorrect
This command is used for converting reports or data formats, such as transforming search results into CSV or XML. It does not interact with buckets or storage directories. The name may sound plausible in the context of “converting” archived data, but it is unrelated to thawing.

C. splunk rebuild → Correct
This is the supported command for thawing archive buckets. After moving the bucket into the thawed directory, splunk rebuild regenerates metadata and makes the bucket searchable again. This is the precise answer to the question.

D. splunk dbinspect → Incorrect
This command inspects the contents of index buckets, providing details such as bucket size, event count, and time ranges. It is useful for troubleshooting and validation but does not thaw or rebuild buckets. It only reports information about existing buckets.

References
Splunk Docs – Rebuild buckets
Splunk Docs – dbinspect command

Which of the following security options must be explicitly configured (i.e. which options are not enabled by default)?


A. Data encryption between Splunk Web and splunkd.


B. Certificate authentication between forwarders and indexers.


C. Certificate authentication between Splunk Web and search head.


D. Data encryption for distributed search between search heads and indexers.





B.
  Certificate authentication between forwarders and indexers.

Explanation:
Splunk provides several built‑in security mechanisms, but not all are enabled by default. Some require explicit configuration by administrators.

A. Data encryption between Splunk Web and splunkd
→ Enabled by default. Splunk Web communicates with splunkd over the management port (8089) using HTTPS with Splunk's default certificates, so this channel is already encrypted. No explicit configuration is required unless you want to replace the certificates.

B. Certificate authentication between forwarders and indexers
→ Must be explicitly configured (Correct). By default, forwarders send data to indexers over plain TCP without TLS or certificate validation. To secure this channel, you must explicitly configure SSL settings in outputs.conf on the forwarder and inputs.conf on the indexer, along with the proper certificates. This is not enabled automatically.

C. Certificate authentication between Splunk Web and search head
→ Enabled by default. Splunk Web talks to the search head's splunkd process over SSL using Splunk's built-in certificates out of the box. You can replace the default certificates with your own, but the secure channel itself is already in place.

D. Data encryption for distributed search between search heads and indexers
→ Enabled by default. Distributed search communication between search heads and indexers is encrypted by default using Splunk's built-in certificates. Administrators can replace these with custom certificates, but encryption is already active.
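A minimal sketch of the forwarder-to-indexer TLS configuration that option B describes, with hypothetical certificate paths (exact attribute names vary somewhat across Splunk versions):

```ini
# outputs.conf on the forwarder
[tcpout:secure_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <certificate password>
sslVerifyServerCert = true

# inputs.conf on the indexer: listen for TLS-wrapped forwarder traffic
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <certificate password>
requireClientCert = true
```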

Reference
Splunk Docs – Secure Splunk communications with SSL/TLS

Splunk Docs – Configure forwarder to indexer encryption

Because Splunk indexing is read/write intensive, it is important to select the appropriate disk storage solution for each deployment. Which of the following statements is accurate about disk storage?


A. High performance SAN should never be used.


B. Enable NFS for storing hot and warm buckets.


C. The recommended RAID setup is RAID 10 (1 + 0).


D. Virtualized environments are usually preferred over bare metal for Splunk indexers.





C.
  The recommended RAID setup is RAID 10 (1 + 0).

Explanation:
Splunk indexing is read/write intensive, so disk I/O performance is critical. The recommended disk storage solution for indexers is RAID 10 (1+0) because it provides both high performance and redundancy. RAID 10 combines mirroring and striping, ensuring fast writes and reads while protecting against disk failures. This balance makes it the best practice for Splunk indexer storage.

Why the other options are not accurate

A. High performance SAN should never be used → Incorrect
Splunk does not forbid SAN usage. In fact, SANs can be used if they deliver sufficient throughput and low latency. The statement “should never be used” is misleading. SANs are acceptable if properly tuned, though local RAID 10 is generally preferred for performance.

B. Enable NFS for storing hot and warm buckets → Incorrect
Splunk explicitly advises against using NFS for hot and warm buckets because NFS introduces latency and can cause performance degradation. Hot and warm buckets should reside on local high‑performance storage. NFS may be acceptable for cold or frozen buckets, but not for hot/warm.

D. Virtualized environments are usually preferred over bare metal for Splunk indexers → Incorrect
Splunk recommends bare metal for indexers whenever possible because virtualization introduces overhead and can limit disk I/O performance. Virtualization may be used in some environments, but it is not preferred over bare metal for indexers.
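Bucket locations are set per index in indexes.conf: hot/warm buckets (homePath) belong on fast local RAID 10 storage, while cold and thawed buckets may go to cheaper storage. A sketch with hypothetical mount points:

```ini
# indexes.conf
[main]
# Hot/warm buckets: fast local storage (RAID 10, or SSD); never NFS
homePath   = /fast_local_raid10/splunk/defaultdb/db
# Cold buckets: slower or cheaper storage is acceptable here
coldPath   = /bulk_storage/splunk/defaultdb/colddb
# Thawed buckets restored from the frozen archive
thawedPath = /bulk_storage/splunk/defaultdb/thaweddb
```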

Reference
Splunk Docs – Splunk Enterprise hardware requirements
Splunk Docs – Indexer storage recommendations

A search head has successfully joined a single site indexer cluster. Which command is used to configure the same search head to join another indexer cluster?


A. splunk add cluster-config


B. splunk add cluster-master


C. splunk edit cluster-config


D. splunk edit cluster-master





B.
  splunk add cluster-master

Explanation:
In Splunk’s distributed architecture, a search head can be configured to connect to multiple indexer clusters. Each cluster is managed by a cluster master (also called cluster manager). When a search head has already joined one cluster and needs to join another, the correct command is:

splunk add cluster-master
This command explicitly registers a new cluster master with the search head. Once added, the search head can communicate with multiple indexer clusters, allowing searches to span across them. This is the supported and documented method for expanding search head connectivity to additional clusters.
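A sketch of the command, with a hypothetical URI and secret (on current Splunk versions the terminology is "cluster manager" and the equivalent command is splunk add cluster-manager):

```shell
# The search head already belongs to cluster A; register a second
# cluster master so the search head can also search cluster B's peers.
splunk add cluster-master https://cm-b.example.com:8089 -secret idxclusterBkey
```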

Why the other options are not correct

A. splunk add cluster-config → Incorrect
Splunk does not provide a command called add cluster-config. The distractor is designed to mislead by sounding plausible, but the actual syntax is add cluster-master. Configuration of clusters is handled through the cluster master, not a generic “cluster-config” command.

B. splunk add cluster-master → Correct
This is the valid command. It adds a new cluster master definition to the search head, enabling it to join another indexer cluster. Each cluster master manages replication, search factor, and peer coordination. The search head must know about each cluster master to query across multiple clusters.

C. splunk edit cluster-config → Incorrect
Similar to option A, this command does not exist. Splunk does not use “cluster-config” as a subcommand. Editing cluster configuration is done through edit cluster-master if you want to modify an existing cluster master entry, not through a nonexistent cluster-config.

D. splunk edit cluster-master → Incorrect in this context
While this is a valid command, it is used to modify an existing cluster master configuration already registered with the search head. The question specifically asks about joining another cluster, which requires adding a new cluster master, not editing an existing one. Therefore, the correct action is add cluster-master.

References
Splunk Docs – Configure search heads to connect to multiple indexer clusters
Splunk Docs – splunk add cluster-master command

Which search head cluster component is responsible for pushing knowledge bundles to search peers, replicating configuration changes to search head cluster members, and scheduling jobs across the search head cluster?


A. Master


B. Captain


C. Deployer


D. Deployment server





B.
  Captain

Explanation:
In a Search Head Cluster (SHC), the Captain is the elected leader among the search head members. It has several critical responsibilities that ensure cluster consistency and coordination:
Pushes knowledge bundles to search peers → The captain sends search artifacts (knowledge bundles) to indexers so they can execute searches correctly.
Replicates configuration changes to SHC members → The captain ensures that non-runtime configurations are synchronized across all search head members.
Schedules jobs across the SHC → The captain manages search job distribution, ensuring searches are balanced and coordinated across the cluster.

Why the other options are not correct

A. Master → Incorrect
In Splunk terminology, the “master” (now called cluster manager) applies to indexer clustering, not search head clustering. It manages replication and search factors for indexers, not search head responsibilities.

C. Deployer → Incorrect
The deployer is used to distribute apps and configurations to search head cluster members. It does not handle runtime replication, job scheduling, or knowledge bundle distribution.

D. Deployment server → Incorrect
The deployment server manages app distribution to forwarders and other Splunk instances. It is not part of search head clustering and has no role in knowledge bundle distribution or job scheduling.

References
Splunk Docs – Search head cluster captain responsibilities
Splunk Docs – Deploy apps and configurations to a search head cluster

Splunk Enterprise platform instrumentation refers to data that the Splunk Enterprise deployment logs in the _introspection index. Which of the following logs are included in this index? (Select all that apply.)


A. audit.log


B. metrics.log


C. disk_objects.log


D. resource_usage.log





C.
  disk_objects.log

D.
  resource_usage.log

Explanation:
Splunk Enterprise platform instrumentation refers to internal telemetry data that Splunk logs into the _introspection index. This index is specifically designed to capture performance and resource usage information about the Splunk platform itself.

C. disk_objects.log → Correct
This log records information about disk usage, including objects stored on disk. It helps administrators monitor how Splunk consumes disk resources, which is critical for capacity planning and troubleshooting.

D. resource_usage.log → Correct
This log captures CPU, memory, and other system resource usage by Splunk processes. It provides visibility into how Splunk is utilizing system resources, which is essential for performance tuning.
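Once collected, this instrumentation data can be queried directly from the _introspection index. A sketch of a search over per-process resource usage (field names follow the resource_usage.log schema):

```spl
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| stats avg(data.pct_cpu) AS avg_cpu, avg(data.mem_used) AS avg_mem_mb BY data.process
```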

Why the other options are not correct

A. audit.log → Incorrect The audit.log records user activity and administrative actions. It is stored in the _audit index, not _introspection.

B. metrics.log → Incorrect The metrics.log contains general Splunk performance metrics and is written to the _internal index, not _introspection.

References
Splunk Docs – About the introspection index
Splunk Docs – Monitor Splunk resource usage

Which of the following statements describe a Search Head Cluster (SHC) captain? (Select all that apply.)


A. Is the job scheduler for the entire SHC.


B. Manages alert action suppressions (throttling).


C. Synchronizes the member list with the KV store primary.


D. Replicates the SHC's knowledge bundle to the search peers.





A.
  Is the job scheduler for the entire SHC.

D.
  Replicates the SHC's knowledge bundle to the search peers.

Explanation:
The Search Head Cluster (SHC) captain is the elected leader among the search head members. It has specific responsibilities that ensure coordination and consistency across the cluster:

A. Is the job scheduler for the entire SHC → Correct The captain manages search job scheduling across the cluster, ensuring searches are distributed and executed efficiently.

D. Replicates the SHC's knowledge bundle to the search peers → Correct The captain pushes the knowledge bundle (containing search artifacts, configurations, and knowledge objects) to the indexers (search peers) so they can execute searches correctly.

Why the other options are not correct

B. Manages alert action suppressions (throttling) → Incorrect Alert throttling is handled at the search head level, not specifically by the captain. The captain does not manage suppressions.

C. Synchronizes the member list with the KV store primary → Incorrect The KV store primary handles synchronization of KV store data across members. The captain does not manage KV store member list synchronization.

References
Splunk Docs – About search head clustering

In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?


A. Input


B. Search


C. Parsing


D. Indexing





A.
  Input

Explanation

Splunk's data pipeline is the sequence of phases through which incoming data flows before it becomes searchable: Input → Parsing → Indexing → Search. Each phase has a distinct role, and understanding where specific configurations apply is critical for exam success and operational troubleshooting.

The question asks specifically about indexed extraction configurations, meaning the INDEXED_EXTRACTIONS settings in props.conf used for structured data such as CSV, TSV, and JSON. Per Splunk's "Configuration parameters and the data pipeline" documentation, these are processed in the Input phase.

Why Input is Correct
Indexed extractions for structured data are applied where the data enters Splunk. This is why a universal forwarder, which runs only the input segment of the pipeline, can perform structured data extraction itself: when INDEXED_EXTRACTIONS is configured on the forwarder, the fields are extracted at input time, and the already-parsed data then bypasses much of the usual aggregation and typing processing on the receiving indexer. Because the extraction happens before parsing, the configuration must reside on the instance that performs the input, typically the forwarder.

Why the Other Options Are Not Correct

B. Search → Incorrect
The search phase applies search-time field extractions, lookups, and knowledge objects. Search-time extractions are flexible and can be modified without reindexing, but they are distinct from indexed extractions, which occur as the data is ingested.

C. Parsing → Incorrect
Parsing is where Splunk applies line breaking, timestamp extraction, and most other index-time processing, so it is a tempting answer. However, INDEXED_EXTRACTIONS specifically belongs to the input phase; structured data that has already been through indexed extraction skips much of the normal parsing processing on the indexer.

D. Indexing → Incorrect
The indexing phase writes parsed events into buckets on disk. At this point, the data is already structured; no extractions are applied here. Indexing is about storage, not extraction.
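Indexed extractions are enabled per sourcetype in props.conf, deployed to the instance running the input phase (typically the universal forwarder). A sketch for a hypothetical CSV sourcetype:

```ini
# props.conf on the forwarder (input phase)
[my_csv_data]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
# Hypothetical header field used for event timestamps
TIMESTAMP_FIELDS = timestamp
```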

References
Splunk Docs – Indexed extractions

