For dark site clusters, what should be downloaded prior to running an LCM Inventory or updates?
A. Nutanix Foundation
B. Nutanix Compatibility Bundle
C. Nutanix Prism Central
D. Nutanix Foundation Platform
Explanation:
A "dark site" is a Nutanix environment that has no outgoing internet connectivity. Life Cycle Manager (LCM) requires internet access to perform its core functions:
Inventory: Downloading the latest catalogs of firmware, drivers, and software versions from Nutanix to compare against what is installed in the cluster.
Updates:
Downloading the actual firmware and software files to perform the updates.
For a dark site, this internet access is impossible. Therefore, the administrator must manually prepare by downloading the Nutanix Compatibility Bundle from a system that does have internet access. This bundle is a single, downloadable file from the Nutanix Portal that contains the entire catalog of compatible firmware, drivers, and software versions that LCM needs. After downloading this bundle, it is uploaded directly to LCM via Prism. This provides LCM with the necessary data to perform inventory and update operations without requiring the cluster itself to connect to the internet.
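Because the bundle is transferred by hand from an internet-connected machine, it is worth verifying its integrity against the checksum published alongside the download before uploading it to LCM. The sketch below illustrates that idea; the function names and workflow are hypothetical, not part of any Nutanix tooling:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in streaming chunks,
    so even a multi-gigabyte bundle does not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_bundle(path, expected_digest):
    """Return True if the local file matches the published checksum."""
    return sha256_of(path) == expected_digest.lower()
```

A mismatch would indicate a corrupted or incomplete transfer, which should be resolved before attempting the LCM upload.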
Detailed Analysis of Why the Other Options Are Incorrect:
A. Nutanix Foundation:
Foundation is a tool used for the initial deployment and imaging of Nutanix nodes to form a cluster. It is not used for the ongoing lifecycle management (LCM) of an already running cluster. Its purpose is to get the cluster started, not to update it.
C. Nutanix Prism Central:
Prism Central is the multi-cluster management platform. While it has its own LCM process, the question is about a fundamental prerequisite for running an LCM operation (inventory or update) on a cluster. You do not download Prism Central itself to enable LCM; in fact, Prism Central often relies on the same Compatibility Bundle for its own updates in a dark site.
D. Nutanix Foundation Platform:
This is a distractor. While "Foundation" is a valid product, "Foundation Platform" is not the standard term for the required component. The precise and official name for the file needed is the Nutanix Compatibility Bundle.
Reference:
The process for managing dark site clusters is explicitly documented in the Nutanix Life Cycle Management guides. The official Nutanix documentation, such as the "LCM for Dark Sites" or "Upgrading AOS" articles, consistently instructs administrators to download the Nutanix Compatibility Bundle from the Nutanix Support Portal and import it into LCM via Prism Element or Prism Central as the mandatory first step for any inventory or update operation in an air-gapped environment. This bundle is the offline substitute for LCM's internet connection.
An administrator needs to automate the migration of a VM from ESXi to AHV.
Which utility should the administrator use?
A. Move
B. acli
C. Flow
D. Ncli
Explanation:
When an administrator needs to migrate virtual machines (VMs) from VMware ESXi to Nutanix AHV, Nutanix provides a specialized tool designed specifically for this purpose — called Nutanix Move.
Therefore, the correct answer is A. Move.
Overview of Nutanix Move
Nutanix Move is an automated migration utility that simplifies and streamlines the process of moving VMs between different virtualization platforms. It is particularly useful for organizations transitioning from legacy hypervisors (like VMware ESXi or Microsoft Hyper-V) to Nutanix AHV (Acropolis Hypervisor).
The Move utility automates all critical stages of the migration process, such as:
Capturing VM configuration details from the source hypervisor.
Transferring and converting disk images to AHV-compatible format.
Creating equivalent VMs on the AHV cluster.
Managing network mapping, disk mapping, and power-on operations.
Nutanix Move supports both online (minimal downtime) and offline migrations, ensuring flexibility in production environments.
Key Features of Nutanix Move
Cross-Hypervisor Compatibility:
Move supports migrating VMs from:
VMware ESXi → AHV
Hyper-V → AHV
AWS → AHV (for lift-and-shift cloud workloads)
ESXi → AWS (limited use cases)
Automated Migration Workflow:
Move automates complex tasks such as VM configuration replication, disk format conversion, and MAC address re-assignment.
Minimal Downtime:
The tool synchronizes disk data from source to destination in stages, allowing a final short cutover window for the migration to complete with minimal downtime.
Integrated with Prism Central:
Move can be integrated with Prism Central for easy tracking and monitoring of migration activities.
Granular Control:
Administrators can choose to migrate individual VMs or entire groups, customize network mappings, and specify post-migration actions like auto power-on.
Secure and Non-Disruptive:
Move uses encryption for data transfer and does not require any agent installation inside guest VMs.
Typical Migration Workflow (ESXi → AHV)
Deploy Move Appliance:
Download and deploy the Move VM on the AHV cluster (available as a .qcow2 or OVA image).
Configure network connectivity to both the source ESXi environment and the destination AHV cluster.
Register Source and Destination Environments:
Add the ESXi vCenter as the source environment.
Add the AHV cluster (Prism Element or Prism Central) as the target.
Discover and Select VMs:
Move scans the source environment and lists all available VMs for migration.
The administrator selects which VMs to migrate.
Configure Network and Disk Mapping:
Specify network mappings between source port groups and AHV virtual networks (VLANs).
Adjust disk mapping and choose migration mode.
Perform Data Synchronization:
Move begins transferring VM disk data to the AHV cluster.
It performs incremental syncs until the final cutover.
Finalize and Power On:
Once synchronization is complete, Move powers off the VM on ESXi, performs a final sync, and powers it on in AHV.
The VM is now fully functional on AHV.
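The staged-sync idea behind the short cutover window can be illustrated with a back-of-the-envelope model: each replication pass copies the outstanding delta while new writes accumulate, so the delta shrinks geometrically. This is purely illustrative; Move's real change tracking is more sophisticated:

```python
def plan_sync_rounds(disk_gb, change_rate_gb_per_hr, transfer_gb_per_hr, max_rounds=10):
    """Simulate staged replication: each round ships the outstanding delta
    while new changes accumulate; returns the delta left for final cutover."""
    delta = disk_gb  # the first pass copies the whole disk
    for _ in range(max_rounds):
        hours = delta / transfer_gb_per_hr        # time to ship the current delta
        delta = change_rate_gb_per_hr * hours     # changes written in the meantime
        if delta < 1:  # small enough for a short cutover window
            break
    return delta
```

For example, a 1 TB disk with a 10 GB/hr change rate over a 100 GB/hr link converges to a sub-gigabyte final delta in a handful of rounds, which is why the cutover outage can be kept to minutes.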
Why the Correct Answer is A. Move
The Nutanix Move utility is explicitly built for automating VM migrations across hypervisors, especially from VMware ESXi to AHV.
It is a supported, secure, and reliable method endorsed by Nutanix and widely used in real-world migration projects.
In contrast, other tools like acli and ncli are Nutanix command-line utilities for management tasks, not for automated migrations. Flow is a network microsegmentation feature and has no relation to VM migration.
Therefore, only Nutanix Move fits the described scenario.
Why Other Options Are Incorrect
B. acli (Acropolis Command Line Interface):
The acli is used to manage AHV-specific virtual resources, such as creating, deleting, or configuring VMs, networks, and disks.
While it can import disk images manually, it does not automate migration from ESXi to AHV. It’s useful for post-migration tasks but not for performing the migration itself.
C. Flow:
Nutanix Flow is a microsegmentation and security policy management solution integrated with Prism Central. It helps isolate workloads and enforce network security policies.
It has no functionality for VM migration or data transfer between hypervisors.
D. ncli (Nutanix Command Line Interface):
The ncli utility is used for cluster-level operations such as user management, configuration, and licensing. It cannot perform VM migrations or handle virtual disk conversions. It is a management CLI for Nutanix AOS, not a migration tool.
Thus, none of these alternatives provide automated cross-hypervisor migration capabilities like Nutanix Move.
References
Nutanix Move User Guide (v6.10):
“Nutanix Move provides simple, efficient, and fully automated migration of virtual machines from ESXi, Hyper-V, or AWS environments to Nutanix AHV.”
Nutanix Move Product Page:
“Automate cross-hypervisor migrations from VMware ESXi and Hyper-V to Nutanix AHV with minimal downtime.”
NCA v6.10 Exam Blueprint:
Under “Cluster Operations and Lifecycle Management,” VM migration using Nutanix Move is specifically listed as an exam topic.
An administrator is conducting updates in a Nutanix cluster and is being prompted for handling non-migratable VMs.
Which VM type is non-migratable?
A. VMs without NGT
B. VMs marked as an Agent
C. Memory Overcommitted
D. VMs with attached Volume Groups
Explanation:
During host maintenance operations, such as AOS or hypervisor updates, the Nutanix cluster leverages Live Migration (vMotion for ESXi or Live Migration for AHV) to seamlessly move VMs off a host before it is rebooted.
A VM becomes non-migratable when its state cannot be preserved during a live migration. VMs with attached Volume Groups (VGs) fall into this category. A Volume Group presents a raw, unvirtualized block device (iSCSI LUN) directly to a VM, typically for use cases like Microsoft Failover Clustering or shared-disk databases.
These attached Volume Groups are tied directly to the physical host's storage stack. The live migration process for the hypervisor's memory and virtual disks cannot capture and transfer the state of this direct hardware attachment. Therefore, the VM cannot be live migrated and must be shut down gracefully before the host can enter maintenance mode.
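The pre-maintenance triage described above can be sketched as a simple classifier. The field names here are invented for illustration; this is not an actual Nutanix API:

```python
def partition_for_maintenance(vms):
    """Split VMs into those that can be live-migrated off a host and those
    that must be shut down first (simplified rule: an attached Volume Group
    pins the VM to the host's storage stack and blocks live migration)."""
    migratable, must_shut_down = [], []
    for vm in vms:
        if vm.get("attached_volume_groups"):
            must_shut_down.append(vm["name"])
        else:
            migratable.append(vm["name"])
    return migratable, must_shut_down
```

An update workflow would evacuate the first list automatically and prompt the administrator about the second, which matches the behavior described in the question.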
Detailed Analysis of Why the Other Options Are Incorrect:
A. VMs without NGT:
Nutanix Guest Tools (NGT) provide enhanced management capabilities like graceful shutdown, console access, and file-level restore. While highly recommended, the absence of NGT does not prevent live migration. A VM without NGT can still be migrated live; however, if a host reboot is required, the cluster will be forced to perform a hard power-off instead of a graceful, NGT-assisted shutdown.
B. VMs marked as an Agent:
This is not a standard VM classification in Nutanix that affects migratability. It is likely a distractor.
C. Memory Overcommitted:
Memory overcommitment is a resource management technique where the total memory allocated to all VMs on a host exceeds the host's physical RAM. The hypervisor uses techniques like ballooning and memory compression to handle this. A VM on a host with memory overcommitment is still migratable. The live migration process will transfer the VM's active memory pages to the destination host.
Reference:
This behavior is a direct consequence of how Volume Groups and live migration technologies work. The official Nutanix documentation on Volume Groups and Live Migration explicitly states this limitation. For example, the administration guide for Volume Groups notes that VMs using VGs must be powered off for host maintenance because they cannot be live migrated. This is a critical operational consideration for administrators managing applications that rely on Volume Groups, as it requires planned downtime for those specific VMs during cluster updates.
Which bridge does the Controller VM use by default to communicate with the AHV host it runs on?
A. vs0
B. virbr0
C. bro
D. bridge0
Explanation:
The virbr0 (Virtual Bridge 0) is a default virtual bridge created by the underlying libvirt/KVM hypervisor technology that AHV is built upon. Its primary purpose is to provide a private, internal network for the host and the virtual machines running on it.
The Controller VM (CVM) uses this bridge by default for its internal management communication with the AHV host. This includes critical tasks like:
Host-to-CVM communication for management commands.
Accessing the host's physical devices and hardware.
The CVM's ability to manage the local host's resources as part of the Nutanix distributed system.
This communication occurs over a private, internal network that is not directly accessible from the external data network.
Detailed Analysis of Why the Other Options Are Incorrect:
A. vs0:
vs0 is the name of the default virtual switch in AHV, which is backed by bridge br0 and the bonded physical uplinks. It carries external data traffic, including the CVM's cluster-wide communication with other CVMs, but it is not the bridge used for the internal, privileged communication between the CVM and its local host. That host-to-CVM link is via virbr0.
C. bro:
This is not a standard name for a network bridge in AHV or standard Linux networking and appears to be a distractor or a misspelling.
D. bridge0:
Similar to bro, this is not the conventional name for the default hypervisor bridge. The standard, out-of-the-box bridge created by libvirt for host-internal communication is virbr0.
Reference:
This is a fundamental aspect of the AHV networking stack. The official Nutanix documentation, such as the "AHV Networking Guide" or the "Nutanix Bible" by Steven Poitras, details the default network configuration. It explicitly describes the role of virbr0 as the private management bridge facilitating essential communication between the AHV host and the CVM, distinguishing it from the external data bridges like br0 used for general VM network traffic.
What is the purpose of the OpLog?
A. Persistent write buffer
B. Persistent data storage
C. Global metadata
D. Dynamic read cache
Explanation:
The OpLog (Operation Log) is a fundamental component of the Nutanix storage I/O path. Its primary purpose is to function as a persistent write buffer.
Here is how it works:
Fast Write Acknowledgment:
When a write I/O is received by the Controller VM (CVM), it is first written to the OpLog, which is stored on a high-performance SSD (a SATA/SAS SSD or, more commonly now, an NVMe device).
Data Persistence:
Once the write is durably stored in the OpLog, an acknowledgment is sent back to the VM or application. This means the data is safe from loss due to a power failure or host crash.
Background Processing:
After being acknowledged, the data in the OpLog is asynchronously destaged to the persistent, long-term storage on the extent store (which is spread across all disks in the cluster, including HDDs if present).
This process provides extremely low-latency write performance and data durability without forcing every write to go directly to slower, bulk storage media.
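The ack-then-destage behavior above can be captured in a toy model. This is an illustration of the concept only, not Nutanix's actual implementation:

```python
from collections import deque

class WritePath:
    """Toy model of the OpLog write path: a write is acknowledged as soon as
    it lands in the persistent buffer, and drained to the extent store later."""
    def __init__(self):
        self.oplog = deque()    # persistent write buffer (SSD/NVMe in reality)
        self.extent_store = []  # long-term bulk storage

    def write(self, block):
        self.oplog.append(block)
        return "ack"            # acknowledged before reaching the extent store

    def destage(self):
        """Background task: asynchronously drain the OpLog to the extent store."""
        while self.oplog:
            self.extent_store.append(self.oplog.popleft())
```

The key property is that `write` returns as soon as the block is in the buffer; `destage` runs independently, which is what decouples write latency from the speed of the bulk storage tier.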
Detailed Analysis of Why the Other Options Are Incorrect:
B. Persistent data storage:
This is incorrect because while the OpLog is persistent, it is not the final destination for data. It is a temporary staging area. The Extent Store is the component responsible for long-term, persistent data storage in a deduplicated and compressed format.
C. Global metadata:
This is incorrect. Global metadata, which tracks the location of all data extents across the entire cluster, is maintained by the Medusa layer, backed by a distributed Cassandra database (Curator is the background MapReduce service that scans and cleans this metadata). The OpLog deals with incoming write I/Os, not the metadata mapping of where data is stored.
D. Dynamic read cache:
This is incorrect. The component responsible for caching frequently read data to accelerate read performance is the Extent Cache (or read cache), which resides in the CVM's RAM. The OpLog is exclusively concerned with optimizing the write path.
Reference:
This architecture is a cornerstone of the Nutanix solution and is thoroughly documented in the Nutanix Bible by Steven Poitras. The official Nutanix documentation on the "Distributed Storage Fabric" and "I/O Path" also clearly delineates the roles of the OpLog (for writes), the Extent Store (for persistent storage), and the Extent Cache (for reads). The OpLog's role as a persistent write buffer is critical for delivering the high performance and data protection that Nutanix is known for.
An administrator wants to enable Windows Defender Credential Guard.
What must be enabled when creating the VM?
A. Live Migration
B. UEFI
C. HA
D. Legacy BIOS
Explanation:
Windows Defender Credential Guard is a security feature that uses virtualization-based security (VBS) to isolate and protect critical system credentials. A fundamental prerequisite for VBS, and therefore Credential Guard, is the Unified Extensible Firmware Interface (UEFI) boot architecture.
Legacy BIOS does not support the necessary security features, such as Secure Boot, which are required to establish the trusted platform foundation that Credential Guard relies upon. Therefore, when creating a VM that will have Credential Guard enabled, the administrator must select UEFI as the boot type during the VM creation process in Prism.
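The prerequisite reduces to a simple predicate on the VM's boot configuration. This is a hedged illustration; the config keys are invented, not Prism's actual schema:

```python
def can_enable_credential_guard(vm_config):
    """Check the firmware-level prerequisites for Credential Guard
    (simplified: UEFI boot and Secure Boot must both be enabled)."""
    return (vm_config.get("boot_type") == "UEFI"
            and vm_config.get("secure_boot", False))
```

A VM created with Legacy BIOS fails this check outright, which is why the boot type must be chosen correctly at creation time.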
Detailed Analysis of Why the Other Options Are Incorrect:
A. Live Migration:
Live Migration (or vMotion for ESXi) is a feature for moving a powered-on VM from one host to another without downtime. While it is a core feature of any hypervisor, it is unrelated to the underlying security and boot firmware requirements of the guest OS. Enabling or disabling Live Migration has no bearing on the ability to enable Credential Guard within the Windows VM.
C. HA:
High Availability (HA) is a cluster-level feature that automatically restarts a VM on a different host if its original host fails. Like Live Migration, this is an infrastructure resilience feature and is completely independent of the guest OS's internal security configurations, such as Credential Guard.
D. Legacy BIOS:
This is the direct opposite of the correct answer. Selecting Legacy BIOS during VM creation will prevent the administrator from enabling Credential Guard later. Credential Guard has a hard dependency on UEFI and Secure Boot, which Legacy BIOS cannot provide.
Reference:
This requirement is mandated by Microsoft, not Nutanix. The official Microsoft documentation for "Enable Windows Defender Credential Guard" explicitly lists "UEFI firmware version 2.3.1.c or higher" and "Secure Boot" as prerequisites. The Nutanix AHV administration guide, in its section on VM configuration, provides the option to select either "BIOS" or "UEFI" when creating a VM, and it is the administrator's responsibility to select UEFI for VMs that require this level of security. This is a key consideration when deploying secure, modern Windows workloads on the Nutanix platform.
An administrator needs to implement a disaster recovery plan. The company has two sites, each with its own Nutanix AHV cluster. Latency between the sites is 30 ms.
Which built-in functionality in Prism Element could the administrator use to enable disaster recovery between the two clusters?
A. Recovery Plan
B. Protection Policies
C. Data Protection
D. Metro Cluster
Explanation:
In a Nutanix environment, disaster recovery (DR) is implemented through data replication and recovery orchestration between clusters. When two sites each have their own Nutanix AHV clusters, the administrator can use built-in features within Prism Element to configure asynchronous or synchronous replication.
The built-in functionality that enables disaster recovery between two AHV clusters — especially over higher latency links (such as 30 ms) — is Data Protection.
Hence, the correct answer is C. Data Protection.
Understanding Data Protection in Nutanix
Data Protection in Prism Element provides a native mechanism for:
Replicating VMs and application data between Nutanix clusters.
Configuring protection domains (PDs) to define what to replicate.
Scheduling snapshots and setting replication intervals.
Enabling DR failover/failback operations.
It is a core Prism Element feature, available without requiring Prism Central or any additional licensing for basic functionality.
Data Protection supports asynchronous replication, which is ideal for environments with higher latencies (e.g., 30 ms) — like in the question scenario.
Scenario Fit: 30ms Latency
Metro Availability (synchronous replication) requires low latency (<5 ms) between sites to maintain real-time data consistency.
With 30 ms latency, synchronous replication (used in Metro Cluster) is not suitable, as it would introduce performance degradation.
Therefore, asynchronous replication via Data Protection is the appropriate approach.
In this setup:
Site A replicates snapshots of selected VMs or protection domains to Site B on a scheduled basis (e.g., every 15 minutes, hourly, or daily).
In case of a failure at Site A, Site B can activate the replicated data and bring up the VMs with minimal data loss (RPO = replication interval).
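The decision logic and the resulting recovery point objective can be captured in a few lines. This is an illustrative sketch; the 5 ms threshold is the Metro Availability guidance cited in this explanation:

```python
SYNC_LATENCY_LIMIT_MS = 5  # Metro Availability latency guidance

def choose_replication_mode(latency_ms):
    """Pick synchronous (Metro) vs asynchronous (Data Protection) replication
    based on inter-site round-trip latency."""
    return "synchronous" if latency_ms < SYNC_LATENCY_LIMIT_MS else "asynchronous"

def worst_case_rpo_minutes(replication_interval_minutes):
    """With async snapshot replication, the worst-case RPO equals the
    replication interval (data written since the last shipped snapshot)."""
    return replication_interval_minutes
```

At 30 ms the selector lands on asynchronous replication, and a 15-minute schedule gives a worst-case RPO of 15 minutes, which matches the scenario's reasoning.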
How Data Protection Works
1. Create a Protection Domain (PD):
A logical grouping of VMs to be protected and replicated.
2. Configure Remote Site:
Add the second cluster (remote site) as a replication target under Data Protection > Remote Sites in Prism Element.
3. Schedule Replication:
Define snapshot frequency and retention policy (e.g., replicate every 30 minutes, keep the last 10 snapshots).
4. Perform Replication:
Prism Element replicates snapshots of the selected VMs from the source cluster to the target cluster over the network.
5. Failover / Failback:
In a disaster, the administrator can activate the protection domain on the remote site. Once the primary site is restored, data can be failed back to resume normal operation.
All these actions are configured and managed via Prism Element → Data Protection dashboard.
Why the Correct Answer is C. Data Protection
Data Protection is the native Nutanix feature that provides disaster recovery between clusters.
It supports asynchronous replication, which is the correct method when latency exceeds 5 ms, as in this case (30 ms).
It is accessible directly from the Prism Element UI under Data Protection → Protection Domains.
It provides snapshot-based replication, failover, and failback, fulfilling all DR requirements.
Therefore, Data Protection is the most appropriate choice.
Why Other Options Are Incorrect
A. Recovery Plan:
Recovery Plans are part of Prism Central, not Prism Element.
They provide automated orchestration of DR failover and failback, but they rely on Data Protection policies defined in Prism Element.
Since the question specifies “built-in functionality in Prism Element”, Recovery Plan is not applicable.
B. Protection Policies:
Protection Policies are configured in Prism Central, not in Prism Element.
They provide centralized management for replication and recovery workflows across multiple clusters.
The question explicitly mentions Prism Element, so this option is incorrect.
D. Metro Cluster:
Metro Cluster (also known as Metro Availability) uses synchronous replication for zero RPO between sites.
However, it requires <5 ms latency and stretched Layer 2 network connectivity between sites.
Since the scenario has 30 ms latency, Metro Cluster is not feasible and would cause performance issues.
References
Nutanix AOS Administration Guide v6.10:
“The Data Protection feature in Prism Element allows you to create protection domains and schedule asynchronous replication to remote clusters for disaster recovery.”
Nutanix Support KB: Understanding Replication Options:
“Use asynchronous replication (Data Protection) for DR setups with WAN latencies greater than 5 ms. Metro Availability is supported only for latency below 5 ms.”
NCA v6.10 Exam Blueprint:
Under “Data Protection and Disaster Recovery,” candidates must understand how to configure protection domains and replication in Prism Element.
At what time does LCM auto-inventory run by default?
A. 1:00am
B. 2:00am
C. 3:00am
D. 4:00am
Explanation:
Life Cycle Manager (LCM) performs automated daily inventory scans to check for available software and firmware updates across cluster components. This process is strategically scheduled during typical low-utilization hours to minimize any potential impact on production workloads. The default configuration across Nutanix clusters sets this automated inventory to execute precisely at 2:00 AM local cluster time.
The inventory process systematically examines all key components including:
Acropolis Operating System (AOS)
Acropolis Hypervisor (AHV)
Host BIOS and BMC firmware
SSD/HDD firmware
Controller and network driver versions
This comprehensive scan compares current versions against the Nutanix online compatibility database, with results populating Prism's "Available Updates" view. While this is the default schedule, administrators can modify the timing through LCM settings to align with specific operational maintenance windows.
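Computing the next scheduled run for a fixed daily time can be sketched as follows. This is illustrative only; LCM's internal scheduler is not exposed like this:

```python
from datetime import datetime, time, timedelta

def next_auto_inventory(now, run_at=time(2, 0)):
    """Return the next daily run after `now` for a fixed local run time
    (2:00 AM is the default described above; the schedule is configurable)."""
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```

So a check at 1:30 AM resolves to 2:00 AM the same day, while a check at 3:00 AM resolves to 2:00 AM the next day.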
Analysis of Incorrect Options
A. 1:00 AM:
While this represents a common maintenance window in many IT environments, it is not LCM's predetermined default inventory time. The 2:00 AM default provides an additional buffer period after midnight operations might complete.
C. 3:00 AM:
This incorrect option likely stems from confusion with other enterprise backup or maintenance schedules. Though logically plausible for low-activity periods, it does not reflect Nutanix's specific 2:00 AM LCM default configuration.
D. 4:00 AM:
This is too late for the default setting, as it approaches times when early-batch processing or morning operations might begin in some organizations. The actual default occurs two hours earlier to ensure completion before business hours resume.
Reference:
The Nutanix Life Cycle Management documentation, specifically the "LCM Configuration and Operations" guide, explicitly confirms the 2:00 AM default setting. This timing represents Nutanix's balanced approach between minimizing production impact and ensuring timely update discovery. Administrators can verify and modify this schedule through Prism Element under LCM settings, though the 2:00 AM default remains consistent across new cluster deployments unless specifically altered.
An administrator needs to remove several old VM snapshots.
From which Prism Element dashboard should the administrator complete this task?
A. Tasks
B. Settings
C. VM
D. Storage
Explanation:
In Nutanix AHV, snapshots are a core feature used to capture the state of virtual machines (VMs) at a specific point in time. Over time, old or unnecessary snapshots can accumulate, consuming storage and impacting performance.
To remove old VM snapshots, an administrator must interact directly with the VM-level interface in Prism Element. This is because snapshots are created, listed, and managed at the individual VM level, not at the cluster, storage, or system settings level.
Hence, the correct answer is C. VM.
Understanding Snapshot Management in Prism Element
Prism Element provides a VM-centric interface where administrators can:
1. View all snapshots for a VM:
Navigate to the VM dashboard, select the VM in question, and click on the Snapshots tab. This displays a list of all existing snapshots, including metadata such as creation date, size, and description.
2. Delete snapshots:
Administrators can select individual or multiple snapshots for deletion. Prism Element also allows bulk deletion to clean up old snapshots efficiently.
3. Revert or clone snapshots (optional):
The same VM dashboard allows reverting the VM to a previous snapshot or cloning a snapshot to create a new VM.
Snapshots are tied directly to the VM object, so the VM dashboard is the logical place for this operation.
Steps to Remove Old Snapshots
1. Log in to Prism Element using admin credentials.
2. Navigate to the VM dashboard:
Click VMs in the main menu to list all VMs in the cluster.
3. Select the Target VM:
Click on the VM for which snapshots need to be removed.
4. Go to the Snapshots Tab:
This tab lists all snapshots for the selected VM, including date, size, and any notes.
5. Select Snapshots to Delete:
Choose individual or multiple snapshots.
6. Click Delete:
Confirm deletion. Prism Element then removes the snapshot(s) and frees the associated storage.
By performing these steps at the VM level, the administrator ensures that only the intended snapshots are deleted, without affecting other VMs or system-level storage objects.
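Identifying deletion candidates by age, as an administrator would before ticking boxes in the Snapshots tab, can be sketched like this. This is illustrative only; the snapshot records here are plain dictionaries, not Prism objects:

```python
from datetime import datetime, timedelta

def snapshots_older_than(snapshots, days, now=None):
    """Return the names of snapshots created more than `days` ago,
    i.e. the candidates an administrator would select for deletion."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [s["name"] for s in snapshots if s["created"] < cutoff]
```

Reviewing the resulting list before deleting mirrors the confirmation step in the UI, since snapshot deletion is irreversible.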
Why the Correct Answer is C. VM
Snapshots are VM-specific: Each snapshot belongs to a particular VM, so they can only be managed from the VM dashboard.
Prism Element organizes snapshot operations by VM, making it the natural place for deletion.
Deleting from any other dashboard (Tasks, Settings, or Storage) is either not possible or not precise, as these dashboards do not provide direct snapshot management.
Why Other Options Are Incorrect
A. Tasks:
The Tasks dashboard shows all ongoing and completed operations in the cluster, such as snapshot creation, VM power operations, or migrations.
While it may display a snapshot deletion task, it cannot be used to initiate deletion of snapshots.
Tasks are logs of operations, not a management interface for snapshot removal.
B. Settings:
The Settings dashboard is used for cluster-level configuration, network setup, authentication, alerts, and other administrative tasks.
It does not provide snapshot management, so snapshots cannot be deleted from here.
D. Storage:
The Storage dashboard shows container-level storage usage, capacity, and performance metrics.
While it gives insight into storage consumed by snapshots, it does not allow direct deletion of individual VM snapshots.
Deleting snapshots through storage-level actions could be dangerous and is not supported in Prism Element.
References
Nutanix Prism Element Administration Guide (v6.10):
“Snapshots are managed at the VM level. To delete old snapshots, navigate to the VM in Prism Element, select the Snapshots tab, and remove the snapshots as needed.”
Nutanix Support KB: Managing Snapshots in AHV:
“Snapshot creation, deletion, and restoration are performed from the VM dashboard in Prism Element. Storage or Tasks dashboards are not used for snapshot management.”
NCA v6.10 Exam Blueprint:
Under “VM Management,” candidates are expected to know how to delete snapshots from the VM dashboard in Prism Element.
An administrator has spent time correcting specific issues that have been identified by NCC Health Checks in Prism Element.
How can just the checks that previously did not pass be executed again to confirm they are all resolved?
A. Run LCM Pre-Upgrade to trigger NCC Checks.
B. Run ncc health_checks run_all.
C. Select Run Check for each failed check.
D. Select Only Failed And Warning Checks.
Explanation:
After correcting issues identified by the Nutanix Cluster Check (NCC) utility, the most efficient way to re-verify only the problematic components is to use the "Only Failed And Warning Checks" filter option within the Prism Element Health dashboard. This feature allows the administrator to execute NCC checks selectively, targeting exclusively those checks that previously failed or generated warnings, thereby saving time and resources compared to a full health scan.
Operational Path in Prism Element:
Navigate to the Health dashboard.
Go to the Checks tab (or similar section for NCC).
Look for an option to run health checks, typically presented with a dropdown or filter selection.
Select "Only Failed And Warning Checks" from the available options.
Initiate the check run. NCC will now execute only the subset of checks that previously reported failures or warnings, providing a focused confirmation that the issues have been resolved.
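The filter's effect can be modeled in a few lines. This is a conceptual sketch, not NCC's actual interface:

```python
def checks_to_rerun(last_results):
    """Mimic the 'Only Failed And Warning Checks' filter: from the previous
    run's results, select only the checks that did not pass."""
    return sorted(name for name, status in last_results.items()
                  if status in ("FAIL", "WARN"))
```

Only the returned subset is executed again, which is why this approach is faster than a full `run_all` scan when most checks already pass.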
Analysis of Incorrect Options
A. Run LCM Pre-Upgrade to trigger NCC Checks.
While Life Cycle Manager (LCM) does automatically run specific NCC checks as part of its pre-upgrade validation process, this is not the correct method for targeted re-checking of resolved issues. This action runs a predefined set of upgrade-related checks, not necessarily the exact same checks that previously failed. It is an indirect and incomplete method for this specific goal.
B. Run ncc health_checks run_all.
This command, executed from the Command Line Interface (CLI) on a CVM, will trigger the entire suite of NCC health checks. This is the equivalent of a "full scan." While it would eventually verify the fixed issues, it is inefficient as it executes hundreds of unnecessary checks that previously passed, consuming more time and cluster resources than the targeted approach available in the Prism GUI.
C. Select Run Check for each failed check.
This implies manually locating and re-running every single check that had failed, one by one. While technically possible, this is a highly manual, time-consuming, and error-prone process. It is not a scalable or recommended method when Prism Element provides a built-in, automated option to re-run all failed and warning checks with a single selection.
Reference:
This functionality is part of the integrated NCC features within Prism Element. The official Nutanix documentation, such as the "Nutanix Cluster Check (NCC) Guide" or "Monitoring Cluster Health," describes the process for running health checks. It specifically highlights the ability to run comprehensive checks or to use filters—like "Only Failed And Warning Checks"—for targeted troubleshooting and verification, which is the most efficient way to confirm the resolution of previously identified problems.
An administrator receives the alert:
What is the most likely cause?
A. Other nodes in the cluster may not have enough resource available.
B. Another node in this cluster is already in maintenance mode.
C. This node in the cluster is already in maintenance mode.
D. This node in the cluster may not have enough resources available.
Explanation:
When a Nutanix cluster alert indicates a maintenance mode issue, the most likely cause is that another node in the cluster is already in maintenance mode. Nutanix clusters enforce restrictions to prevent more than one node from entering maintenance mode simultaneously. This ensures the cluster can maintain resiliency and availability, adhering to the redundancy factor (RF) policy.
If another node is already in maintenance mode, the cluster cannot afford to have additional nodes enter maintenance without compromising data and service availability.
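The one-node-at-a-time restriction can be expressed as a simple guard. This is a conceptual sketch; the state strings are invented:

```python
def can_enter_maintenance(nodes, target):
    """Allow the target node into maintenance only if no OTHER node is
    already in maintenance mode (simplified model of the cluster rule)."""
    others_in_maintenance = [n for n, state in nodes.items()
                             if state == "maintenance" and n != target]
    return not others_in_maintenance
```

In a real cluster this guard is one of several admission checks; data resiliency status must also be healthy before a node is allowed to go down.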
References:
Nutanix Cluster Best Practices Documentation - Maintenance Mode.
Nutanix Alerts and Events Guide, accessible via Prism UI.
An administrator needs to create a VM Template from an existing VM.
What is required for this action to be successful?
A. Sysprep or Cloud-init script.
B. The VM is powered on.
C. Windows OS is installed.
D. The VM is powered off.
Explanation:
To create a VM template from an existing VM in Nutanix, the VM must be powered off. This ensures that the template is consistent and does not include any transient data or activity from the VM. Once the VM is powered off, the administrator can use the Prism UI to take a snapshot and designate it as a template.
A. Sysprep or Cloud-init script: These are optional for post-deployment configuration, not mandatory for creating a template.
B. The VM is powered on: This is incorrect, as the VM must be powered off.
C. Windows OS is installed: The OS type is irrelevant for creating a template.
D. The VM is powered off: Correct, as creating a template requires the VM to be in a powered-off state.
References:
Nutanix AHV Administration Guide: Nutanix Documentation
Nutanix University Certification Content for NCP and NCM.