An administrator needs to enable Self Service Restore (SSR) on a Nutanix-hosted VM.
What is required for the administrator to complete this task?
A. A Storage Container
B. Nutanix Guest Tools
C. Prism Central
D. VMware Tools
Explanation:
The Self Service Restore (SSR) feature allows end-users to recover specific files or folders from a VM snapshot without requiring administrative intervention. To facilitate this, the Nutanix cluster must communicate directly with the guest operating system's file system to mount the snapshot as a local drive. This specialized interaction is handled by a suite of software agents and drivers that bridge the gap between the Nutanix Acropolis hypervisor and the VM’s internal operations.
Correct Option:
B. Nutanix Guest Tools (NGT)
NGT Requirement:
Nutanix Guest Tools is a software bundle installed on VMs that enables advanced functionality. SSR relies specifically on the Nutanix Guest Agent (NGA) included in this package.
Functionality:
Once NGT is installed and enabled, it allows the VM to communicate with the Nutanix CVM to discover, mount, and unmount snapshots directly within the guest OS via a web console or command line.
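This workflow can be verified from the command line. A minimal sketch (the VM name, snapshot ID, and disk labels are placeholders, and flag syntax can vary by NGT version, so check `ssr help` inside the guest):

```shell
# From a CVM: confirm NGT is installed and communicating for the VM
ncli ngt list vm-names=Win-FileServer

# Inside the guest OS: the NGT-provided "ssr" CLI can list snapshots
# and mount them as local drives (IDs and labels are illustrative)
ssr ls-snaps
ssr attach-disk disk-label=scsi.0 snapshot-id=101
ssr detach-disk attached-disk-label=scsi.1
```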
Incorrect Options:
A. A Storage Container:
While all VMs reside in a storage container, simply having one does not enable guest-level features like SSR. The container manages physical storage properties (replication factor, compression), but SSR specifically requires a guest-side agent to handle the mounting of virtual disks from snapshots.
C. Prism Central:
Although Prism Central is used for managing multiple clusters and advanced services like Flow or Calm, SSR is a core Nutanix feature managed at the VM level. While you can trigger NGT installation from Prism Central, it is the NGT software itself, not the management console, that is the technical requirement.
D. VMware Tools:
This is a proprietary suite for VMs running on ESXi. While Nutanix supports ESXi, the "Self Service Restore" feature discussed in Nutanix training specifically refers to the Nutanix-native SSR capability which requires NGT, regardless of the hypervisor, to interact with the Nutanix distributed storage fabric.
Reference:
Nutanix Support Portal: "Nutanix Guest Tools (NGT) Installation" and "Self-Service Restore" sections.
Nutanix University: NCA 6.10 Course Module – Managing Nutanix Nodes and VMs.
An administrator is performing AHV upgrades on an 8-node cluster configured with Redundancy Factor 3.
How many nodes would be placed into maintenance mode simultaneously?
A. 1
B. 2
C. 3
D. 4
Correct Option:
A. 1
Rolling Upgrade Strategy: Nutanix upgrades nodes one at a time. The system places a single node into maintenance mode, evacuates its virtual machines to the remaining healthy nodes, performs the upgrade, and reboots. Only after that node is back online and its storage services (CVM) have fully synchronized and reported a "Healthy" status does the process move to the next node.
Preserving Fault Tolerance: By upgrading only one node at a time, the cluster maintains the maximum possible resiliency during the maintenance window. RF3 tolerates two simultaneous failures in total, so if one node is being upgraded (effectively "down"), the cluster can still tolerate one additional unplanned node failure without losing data availability. Upgrading multiple nodes simultaneously would significantly narrow this safety margin.
Resource Management: Moving VMs from multiple nodes simultaneously could also lead to resource contention on the remaining hosts. Upgrading one by one ensures a predictable and stable migration path for guest workloads.
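Before starting a rolling upgrade, administrators commonly confirm from a CVM that the cluster can currently tolerate a node going down, for example:

```shell
# Show how many node failures each cluster component can tolerate right now
ncli cluster get-domain-fault-tolerance-status type=node

# Overall service health across all CVMs (run from any CVM)
cluster status
```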
Analysis of Incorrect Options:
B. 2 (The common misconception):
It is a frequent mistake to assume that because RF3 allows for 2 node failures, it also allows for 2 simultaneous maintenance modes. While the cluster could technically stay up with two nodes down, Nutanix software prevents this during automated upgrades to ensure "N+2" resiliency is maintained as much as possible.
C. 3:
Placing three nodes in maintenance would leave the cluster at high risk. Even in an RF3 scenario, having three nodes down simultaneously (especially during an active software change like an upgrade) is not a supported or automated workflow.
D. 4:
This would represent half of an 8-node cluster being offline, which would lead to a loss of quorum and cluster unavailability.
Key Takeaway for the Exam:
Regardless of the Redundancy Factor (RF2 or RF3), the Nutanix 1-Click Upgrade and LCM (Life Cycle Manager) processes always default to a one-node-at-a-time rolling sequence to ensure maximum uptime and data protection.
What is the purpose of Life Cycle Management (LCM)?
A. To optimize server performance
B. To manage network configurations
C. To simplify software and firmware upgrades
D. To automate application deployment
Explanation:
This question tests the fundamental understanding of the Life Cycle Manager (LCM) tool's primary role within the Nutanix software stack. LCM is a core component designed to streamline a specific, critical operational task for infrastructure administrators, moving away from complex, manual processes.
Correct Option:
C. To simplify software and firmware upgrades:
LCM's express purpose is to provide a unified, non-disruptive, and automated framework for upgrading all software and firmware components across the Nutanix ecosystem. This includes AOS, hypervisors such as AHV, BIOS, BMC, and drive firmware, significantly reducing complexity and risk.
Incorrect Option:
A. To optimize server performance:
While certain upgrades applied via LCM may improve performance, this is not LCM's primary function. Performance optimization is handled by other features like Intelligent Operations or specific hypervisor settings.
B. To manage network configurations:
Network configuration is managed through Prism (for host networks) or separate networking solutions. LCM does not handle IP address assignments, VLANs, or routing.
D. To automate application deployment:
Application deployment and lifecycle management are handled by different platforms, such as Calm for automation and self-service or Kubernetes for containerized applications.
Reference:
The Nutanix Life Cycle Manager (LCM) overview documentation, which defines LCM as the tool for streamlined, one-click upgrades of the entire software and firmware stack.
When is deduplication recommended?
A. Server workloads
B. Linked Clone VMs
C. Full clone VMs
D. Cold data
Explanation:
In a Nutanix environment, the Distributed Storage Fabric (DSF) uses various data efficiency techniques to optimize capacity. Deduplication works by identifying identical data blocks across the cluster and replacing them with pointers to a single master copy. However, because Nutanix's native cloning mechanism (used for Snapshots and Linked Clones) already uses a redirect-on-write metadata-based approach to avoid duplicating data, enabling the deduplication engine on those workloads provides almost no additional benefit while consuming extra CPU and Memory on the Controller VMs (CVMs).
Deduplication is specifically recommended for workloads that contain a high degree of redundant data that the storage system doesn't already "know" is identical—most notably, Full Clone VMs.
Correct Option:
C. Full clone VMs
Why it's recommended: When you create a "Full Clone," the hypervisor typically copies every block of the source VM to a new destination. This creates massive amounts of duplicate data (e.g., the same Windows OS files replicated 100 times). The Nutanix Elastic Deduplication Engine scans these blocks, recognizes the duplicates, and reclaims that space.
Workload Example: Persistent VDI desktops where each user has their own full VM copy are the primary use case for deduplication.
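Where deduplication is appropriate, such as a container holding full-clone VDI desktops, it is enabled per storage container. A minimal ncli sketch, assuming a container named VDI-FullClones (verify the exact parameter names for your AOS version with `ncli ctr edit help`):

```shell
# Enable cache dedup (fingerprint-on-write) and post-process capacity
# dedup on a single container (container name is a placeholder)
ncli ctr edit name=VDI-FullClones fingerprint-on-write=on
ncli ctr edit name=VDI-FullClones on-disk-dedup=postprocess
```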
Incorrect Options:
A. Server workloads:
Nutanix generally recommends disabling deduplication for general-purpose server workloads and business-critical applications (like SQL or Oracle). These workloads typically have very little redundant data, and the overhead of fingerprinting and deduplicating blocks can impact performance for a negligible capacity gain.
B. Linked Clone VMs:
Linked clones (and Nutanix native clones) are inherently efficient. When you clone a VM in AHV, Nutanix simply creates new metadata pointers to the existing data blocks. Since the blocks aren't actually copied, there is no "duplicate" data for the deduplication engine to find.
D. Cold data:
While deduplication can be used on cold data, the primary Nutanix recommendation for "Cold Data" (data not accessed for 7+ days) is Erasure Coding (EC-X). Erasure coding is much more effective at saving space on unique, non-redundant data (like archives or backups) by reducing the replication overhead (RF2/RF3) rather than looking for duplicate blocks.
Reference:
Nutanix Support Portal: "Data Efficiency Tech Note" – Section on Deduplication Recommendations.
Nutanix University: NCA 6.10 Course – Section on "Storage Efficiency (Compression, Dedupe, EC-X)."
When multiple Alert policies are applied to an entity, which will take precedence?
A. Policies are applied to a specific entity type.
B. Policies are applied to all entities of an entity type.
C. Policies are applied to an entity type in a category.
D. Policies are applied to an entity type in a cluster.
Explanation:
In Nutanix Prism Central, administrators can create custom alert policies to monitor specific metrics. Because these policies can overlap—for example, a global policy for all VMs and a specific policy for a single database VM—Nutanix uses a precedence hierarchy to determine which threshold or notification setting applies. The system is designed to prioritize the most granular policy over broader, more general ones. This ensures that specialized workloads with unique performance characteristics do not trigger unnecessary alerts based on generic cluster-wide settings.
Correct Option:
A. Policies are applied to a specific entity type.
Note: In the context of this specific exam question's phrasing, "specific entity type" refers to a policy targeted at a single specific instance (e.g., one specific VM or one specific Host).
Highest Granularity: A policy applied directly to an individual entity is the most specific configuration possible. Nutanix honors this specific setting and ignores any higher-level global or category-based policies that might otherwise apply to that entity.
Operational Logic: If you set a 90% CPU alert for "VM-A" but have a 75% alert for "All VMs," only the 90% threshold is evaluated for VM-A.
Incorrect Options:
B. Policies are applied to all entities of an entity type:
This is the lowest level of precedence. These are "Global Policies" that act as a catch-all. They only take effect if no other specific, category-based, or cluster-based policies exist for the entity.
C. Policies are applied to an entity type in a category:
This is the second-highest level of precedence. While more specific than a cluster-wide policy, it is less specific than a policy targeting a single individual entity. It is ideal for grouping similar workloads (e.g., all "Production" VMs).
D. Policies are applied to an entity type in a cluster:
This is the third-highest level of precedence. It allows you to customize alerts for all entities of a certain type within one specific physical cluster, providing more detail than a global policy but less than a category or individual entity policy.
Reference:
Nutanix Support Portal: "Prism Central Alert Reference" – Section on Overlapping Policies.
Nutanix University: NCA 6.10 Course – Monitoring and Reporting Module.
A Veeam Backup appliance, which uses iSCSI, must be deployed into an AHV-based cluster.
What must be configured to allow Veeam to connect to the Nutanix cluster?
A. Network Segmentation
B. Nutanix Objects
C. Prism Central
D. Data Services IP
Correct Component:
D. Data Services IP (DSIP)
The Gateway for iSCSI: The Data Services IP is a virtual IP (vIP) assigned to the Nutanix cluster that serves as the iSCSI Target Portal. Without this IP configured in Prism Element, external initiators (such as the Veeam appliance) cannot discover Nutanix Volume Groups or connect to the storage presented over iSCSI.
Veeam Requirement: Veeam’s Nutanix AHV Plug-in explicitly requires the DSIP to be configured to perform backup and restore operations, as it uses iSCSI to mount virtual disks during the "Hot-Add" or "Direct Mode" processes.
Availability: Because it is a cluster-wide vIP, it is highly available; if the CVM currently hosting the DSIP fails, the IP automatically migrates to another healthy CVM in the cluster.
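Configuring the DSIP is a one-line operation from any CVM. A sketch, assuming 10.10.10.50 is an unused address on the same subnet as the CVMs:

```shell
# Set the cluster-wide Data Services IP (placeholder address)
ncli cluster edit-params external-data-services-ip-address=10.10.10.50

# Verify the setting
ncli cluster info | grep -i "data services"
```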
Analysis of Other Options:
A. Network Segmentation (Commonly mislabeled as the answer):
While you can use Network Segmentation to isolate iSCSI traffic onto a separate physical or virtual network for security and performance, it is not a requirement for the connection to function. You can run iSCSI over the default management network as long as the Data Services IP is set.
B. Nutanix Objects:
This is Nutanix's S3-compatible object storage solution. While Veeam can use Nutanix Objects as a Backup Repository (target), it is not used as the connection method for the Veeam appliance to back up AHV VMs via iSCSI.
C. Prism Central:
Prism Central is used for multi-cluster management and high-level orchestration. While it manages the Veeam AHV Plug-in registration in some versions, the specific iSCSI connection is a local cluster function handled by the Data Services IP in Prism Element.
Key Takeaway for the Exam:
If you encounter this question on the NCA 6.10 exam, remember that Data Services IP is the functional requirement to enable the iSCSI protocol for external clients.
Which storage container option reduces the available storage to other containers?
A. Advertised Capacity
B. Erasure Coding
C. Capacity Deduplication
D. Reserved Capacity
Correct Answer:
D. Reserved Capacity
Exclusivity:
Enabling a space reservation ensures that a specific amount of storage is exclusively allocated to that container. This space is "taken" from the storage pool immediately.
Impact on Other Containers:
Because this space is now dedicated to one container, it is no longer available to any other storage container in the cluster, even if the reserved space is currently empty (not yet used by VMs).
Guaranteed Availability:
This acts like a "hard" allocation, guaranteeing that the container will always have at least that much space available regardless of how much data other containers write.
Key Exam Tip:
Think of Reserved Capacity as a "minimum guarantee" and Advertised Capacity as a "maximum limit." Only the reservation physically subtracts from the pool's shared free space before data is even written.
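The subtraction is worth internalizing with hypothetical numbers: a 10,000 GiB pool with a 2,000 GiB reservation on one container leaves 8,000 GiB for every other container, even while the reserved container sits empty:

```shell
# Illustrative arithmetic only; all numbers are hypothetical
POOL_GIB=10000        # total usable pool capacity
RESERVED_GIB=2000     # Reserved Capacity set on container A
echo $((POOL_GIB - RESERVED_GIB))   # free GiB left for other containers: 8000
```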
Within Prism Central, which Compute and Storage section will allow an administrator to upload a Windows ISO file?
A. Catalog Items
B. Templates
C. Images
D. OVAs
Correct Option:
C. Images
Centralized Library: The Images section (found under Compute & Storage in the Prism Central navigation menu) is the primary location for uploading, managing, and distributing system images.
Supported Formats: You can upload various file types including ISO (for OS installation), VMDK, VHDX, and QCOW2 (for virtual disks).
Placement Policies: Once an ISO is uploaded to the Images library, you can use Image Placement Policies to control which physical clusters have access to that file, ensuring it is available when you create a new VM on a specific cluster.
Analysis of Other Options:
A. Catalog Items:
These are specifically used in Nutanix Self-Service (formerly Calm) to store blueprints and pre-configured application templates for end-user provisioning.
B. Templates:
This section is used to manage VM Templates, which are "golden images" of fully installed virtual machines that can be used for rapid deployment. They are different from raw ISO installation files.
D. OVAs:
While Prism Central supports importing Open Virtual Appliances (OVAs), this is a specific package format containing a pre-configured VM. Raw Windows ISO files are handled by the Image service, not the dedicated OVA management section.
Key Workflow:
To upload your Windows ISO, go to Compute & Storage > Images, click Add Image, and choose Add File to browse your local workstation for the ISO file.
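The same upload can also be scripted against a Prism Element cluster with acli (image name and URL are placeholders; the URL must be reachable by the CVMs, and Prism Central environments typically use the UI workflow above):

```shell
# Create an ISO image in the image service from a URL
acli image.create Win2022-ISO \
    source_url=http://fileserver.example.com/win2022.iso \
    image_type=kIsoImage
```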
Which Nutanix Support case priority level indicates that the system is available but experiencing issues that directly impact productivity?
A. P1
B. P2
C. P3
D. P4
Explanation:
Nutanix Support uses a structured priority system to categorize incoming cases based on the severity of the business impact and the availability of the system. This ensures that technical resources are allocated where they are most needed. The four main priority levels (P1 to P4) define the urgency of the situation. While a total outage (system down) is the highest priority, issues where the system is still functional but performance or specific features are degraded enough to hinder work are treated as critical high-priority incidents. This allows for rapid intervention even when the environment hasn't completely halted.
Correct Option:
B. P2 (Priority 2 - Critical)
System Status: The system is available and operational, but it is experiencing significant issues.
Business Impact: These issues have a direct impact on productivity, often described as a "Major Inconvenience." This includes scenarios like significant performance degradation, the failure of a major feature, or persistent errors that prevent users from completing their primary tasks.
SLA: Under a Production Support contract, Nutanix targets a response within 4 hours for a P2 case.
Incorrect Options:
A. P1 (Priority 1 - Emergency):
This priority is reserved for situations where the system is not available and productivity has completely halted. It is the "highest" urgency level and includes all data corruption issues. The defining factor for P1 is that the product is unusable in its current state.
C. P3 (Priority 3 - Normal):
This level applies to systems that are having occasional issues but where the issue has not greatly affected productivity. It is described as a "Minor Inconvenience." The system is functional, and work can continue, even if some troubleshooting or maintenance is required.
D. P4 (Priority 4 - Low):
This is the lowest priority level, used for General Requests or informational questions. This includes questions about documentation, new account setups, or feature requests. It indicates there is no current impact on the system's performance or user productivity.
Reference:
Nutanix Support Portal: "Support Program Guide" – Priority Definitions.
Nutanix University: NCA 6.10 Course – Section on "Nutanix Support Services and Case Management."
On a newly-deployed AHV cluster, what is the default virtual switch (vs0) uplink bond type?
A. Balance-SLB
B. Active-Backup
C. Balance-TCP
D. No Uplink Bond
Explanation:
On a newly-deployed Nutanix AHV cluster, the system automatically configures the networking architecture using Open vSwitch (OVS). The default virtual switch, named vs0, is mapped to the physical bridge br0. This switch contains all physical network interfaces available on the host by default. To ensure immediate connectivity and compatibility with the widest range of physical network environments, Nutanix uses a conservative load-balancing policy. This policy ensures that even if the upstream physical switches are not configured with advanced features like Link Aggregation (LAG) or LACP, the Nutanix host can still maintain a stable connection for management and storage traffic.
Correct Option:
B. Active-Backup
Default Configuration:
This is the factory-default bond mode for vs0. In this state, only one physical interface in the bond is active at any given time.
Operational Behavior:
All traffic (CVM, Hypervisor, and User VMs) is transmitted over a single primary adapter. The other adapter(s) remain in a standby state and only take over the traffic if the active link or its upstream port fails.
Compatibility:
This mode is recommended for its simplicity because it requires zero configuration on the physical Top-of-Rack (ToR) switches and works seamlessly with two independent switches.
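The active bond mode can be confirmed from the command line. A sketch, assuming the default bond name br0-up:

```shell
# From a CVM: show uplinks and bond mode for each AHV host
manage_ovs show_uplinks

# On the AHV host itself: OVS reports which member NIC is currently active
ovs-appctl bond/show br0-up
```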
Incorrect Options:
A. Balance-SLB:
While "Active-Active with MAC pinning" (Balance-SLB) is a popular alternative that allows for better bandwidth utilization across multiple adapters without LACP, it is not the default. It must be manually enabled via the Prism Virtual Switch UI or CLI after the cluster is deployed.
C. Balance-TCP:
This mode requires Link Aggregation Control Protocol (LACP) to be configured on both the AHV host and the physical switch. Because it requires specific upstream switch coordination, it cannot be the default setting, as it would cause a loss of connectivity during initial deployment if the physical switch was not pre-configured.
D. No Uplink Bond:
This is a configuration used when a virtual switch has only one physical uplink or no uplinks at all. By default, Nutanix clusters are deployed with multiple physical NICs bonded together for redundancy, making "Active-Backup" the standard rather than no bond.
Reference:
Nutanix Support Portal: "AHV Networking Best Practices Guide" – Load Balancing in Bond Interfaces.
Nutanix University: NCA 6.10 Course Module – Section on AHV Networking and Virtual Switches.
When configuring a physical network switch in Prism Element, what information is required?
A. DNS Configuration
B. NTP Configuration
C. SMTP Configuration
D. SNMP Configuration
Explanation:
Prism Element provides a feature called Network Visualization (specifically for AHV clusters) that allows administrators to see the physical and virtual network topology. To enable this "top-of-rack" visibility, Prism needs to communicate with the physical switches to pull hardware information, port status, and traffic statistics. This communication is established by creating a Network Switch Configuration in the settings menu. By linking the Nutanix cluster to the physical switch management IP, administrators can monitor link health and identify potential network bottlenecks directly from the Nutanix dashboard.
Correct Option:
D. SNMP (Simple Network Management Protocol) Configuration
Monitoring and Discovery: SNMP is the industry-standard protocol used by Nutanix to query physical switches. When adding a switch in Prism Element, you must provide an SNMP Profile (Version 2c or 3) containing the community string or security credentials.
Data Retrieval: This configuration allows the Nutanix CVMs to fetch critical data such as the Switch ID, port speeds, and VLAN information, which is then used to populate the Network Visualizer and Hardware table views.
Incorrect Options:
A. DNS Configuration:
While DNS is a cluster-wide prerequisite for resolving hostnames, it is not the protocol used to pull management data from a physical switch. Nutanix usually uses the switch's IP address directly in the switch configuration window, though DNS helps in broader environment connectivity.
B. NTP Configuration:
Network Time Protocol (NTP) is vital for keeping all Nutanix nodes and the hypervisor in sync for log consistency. However, configuring NTP in Prism Element does not facilitate the specific management or data-gathering connection required to "add" a physical switch to the inventory.
C. SMTP Configuration:
Simple Mail Transfer Protocol (SMTP) is used by Nutanix to send email alerts and "Pulse" heartbeats to Nutanix Support. While it is an important management setting, it plays no role in the discovery or monitoring of physical network switch hardware.
Reference:
Nutanix Support Portal: "Prism Web Console Guide" – Configuring a Network Switch.
Nutanix University: AHV Networking and Visualizer Module.
Which storage protocol is supported for a datastore when using ESXi as the hypervisor within a Nutanix cluster?
A. iSCSI
B. SMB
C. LVM
D. NFS
Correct Option:
D. NFS (Network File System)
Native Integration: Nutanix presents its usable storage to the VMware vSphere environment as an NFS datastore. This is the standard and recommended way for ESXi hosts in a Nutanix cluster to access virtual machine files.
Version Support: Nutanix specifically supports NFS version 3 with ESXi.
Performance and Simplicity: Because each host has a local Controller VM (CVM), the NFS traffic primarily stays local to the host (using the internal vSwitch). This eliminates the complexity of traditional storage networking while providing high-performance access to VMDKs.
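On a Nutanix cluster, Prism normally mounts containers on all ESXi hosts automatically, but the underlying operation is a standard NFSv3 mount. An illustrative esxcli equivalent (192.168.5.2 is the internal CVM address used for local I/O; the container name is a placeholder):

```shell
# Mount a Nutanix container as an NFS datastore on this ESXi host
esxcli storage nfs add --host=192.168.5.2 --share=/ctr01 --volume-name=ctr01

# List mounted NFS datastores to verify
esxcli storage nfs list
```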
Analysis of Incorrect Options:
A. iSCSI:
While Nutanix does support iSCSI via a feature called Nutanix Volumes, it is typically used for in-guest storage (attaching a drive directly to a VM's OS) or for connecting external physical servers to the Nutanix cluster. It is not the protocol used by the ESXi hypervisor itself to mount its primary VM datastores.
B. SMB (Server Message Block):
SMB (specifically SMB 3.0) is the storage protocol used when Microsoft Hyper-V is the hypervisor on a Nutanix cluster. It is not supported as a datastore protocol for VMware ESXi.
C. LVM (Logical Volume Management):
LVM is a tool used within Linux operating systems to manage disk drives and similar mass storage devices. It is not a network storage protocol and is not used by ESXi to mount Nutanix-provided datastores.
Reference:
Nutanix Support Portal: "Nutanix Configuration for VMware vSphere" – Storage section.
Nutanix University: NCA 6.10 Course – Section on "Nutanix Storage Services and Hypervisor Integration."