A consultant configures an ESXi cluster which will utilize a vSphere Distributed Switch (vDS). The consultant has just migrated the first host to the dvSwitch when several alerts appear within Prism Element regarding Host-to-CVM connectivity. What is causing these alerts?
A. The consultant migrated the CVM Backplane and VM network adapter over to the vDS.
B. The consultant migrated the CVM svm-iscsi-pg network adapter over to the vDS.
C. The consultant migrated the CVM Backplane network adapter over to the vDS.
D. VLAN tags are incorrectly configured on the vDS port groups.
Explanation:
In a Nutanix cluster running ESXi, each Controller VM (CVM) uses several virtual network interfaces to communicate — one of which is the Backplane network adapter.
The Backplane network is used for critical intra-cluster communication between CVMs and the hypervisor host (Host–CVM communication).
If this adapter is misconfigured or moved incorrectly, Prism Element will immediately generate Host-to-CVM connectivity alerts, as the cluster’s internal communication is interrupted.
Root Cause:
When the consultant migrated the ESXi host’s networking from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS), the CVM Backplane NIC was also moved to the new vDS.
This caused:
Loss of management and backplane connectivity between the CVM and its host.
Disrupted communication with other CVMs in the cluster.
Prism Element to trigger Host-to-CVM connectivity alerts.
The Backplane network should not be moved to a vDS during migration unless:
The migration is done in a controlled and validated manner, and
The proper port groups, VLANs, and NIC bindings are configured and verified first (a quick verification sketch follows below).
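As a rough verification sketch only (192.168.5.1 and 192.168.5.2 are the standard Nutanix internal addresses for the host and its local CVM, and the commands assume shell access to both), Host-to-CVM connectivity and port group placement can be checked before and after each host is migrated:
# On the ESXi host: confirm the internal Nutanix port groups are still on the standard switch
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list
# On the ESXi host: the local CVM should answer on the internal network
ping 192.168.5.2
# On the CVM: the host should answer on its internal address
ping 192.168.5.1
If the pings fail after a migration step, the CVM networking should be rolled back to the standard switch before continuing.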
Why the Other Options Are Incorrect:
A. The consultant migrated the CVM Backplane and VM network adapter over to the vDS – ❌
While partially true (Backplane issue is correct), the VM network adapter (used for guest traffic) migration would not cause Host–CVM alerts. Only the Backplane NIC migration breaks CVM communication with the host.
B. The consultant migrated the CVM svm-iscsi-pg network adapter over to the vDS – ❌
The svm-iscsi-pg interface handles iSCSI data services (if configured), not management or host communication. Migrating it would not trigger Host–CVM connectivity alerts.
D. VLAN tags are incorrectly configured on the vDS port groups – ❌
Incorrect VLAN tagging can cause network issues, but the specific “Host-to-CVM connectivity” alert directly points to a lost link between the host and the CVM backplane — not a VLAN mismatch.
Reference:
Nutanix vSphere Networking Best Practices Guide
“Do not migrate the CVM backplane or management interfaces to a vSphere Distributed Switch until connectivity has been verified and redundancy ensured. Loss of the backplane interface will result in Host–CVM connectivity alerts.”
Nutanix Field Installation Guide
“The CVM backplane network is used for cluster communications. Migration errors or incorrect port group mapping will trigger Host–CVM connectivity alarms.”
Summary:
The Host-to-CVM connectivity alerts appear because the CVM’s Backplane network adapter was mistakenly migrated to the vSphere Distributed Switch, breaking internal communication.
An administrator needs to forecast infrastructure requirements for a new program and its associated applications. Prior to the projected start of the new program, all existing applications will be decommissioned. How should the administrator perform this task?
A. Check the Disregard Existing Workloads radio button in the Runway scenario.
B. Check the Disregard Existing Nodes radio button in the Runway scenario.
C. Add up the recovered workloads and manually remove from the Runway configuration.
D. Power down the workloads during a maintenance window and run the Capacity Runway.
Explanation:
The Nutanix Capacity Runway feature in Prism Central is designed specifically for forecasting. It analyzes historical resource consumption trends (for storage, memory, and CPU) to predict when the cluster will run out of capacity.
The key requirement in this scenario is that all existing applications will be decommissioned before the new program starts. This means the future capacity forecast should be based only on the resource demands of the new program's applications, ignoring the consumption of the current, soon-to-be-retired workloads.
The "Disregard Existing Workloads" option is built for this exact purpose. When this radio button is selected, the Capacity Runway calculation excludes the historical usage data of all existing VMs. The forecast is then based purely on any new VMs you specify or on the general "unused" capacity growth trend, providing a clean slate for modeling the new program's requirements.
Why the Other Options Are Incorrect
B. Check the Disregard Existing Nodes radio button in the Runway scenario.
Reason for Incorrectness: This option is used when you plan to remove physical nodes from the cluster. It is irrelevant to this scenario, which is about decommissioning workloads (VMs), not the physical hardware. The nodes will remain to host the new applications.
C. Add up the recovered workloads and manually remove from the Runway configuration.
Reason for Incorrectness: This is an unnecessary manual process that is prone to error. The "Disregard Existing Workloads" feature automates this exclusion instantly and accurately. Manually calculating and adjusting is inefficient and not a recommended practice when a dedicated, automated feature exists.
D. Power down the workloads during a maintenance window and run the Capacity Runway.
Reason for Incorrectness: This is impractical and disruptive. Powering off VMs does not remove their historical consumption data from the analytics database that Runway uses for its forecast. The Runway analysis is based on medium-to-long-term trends, not a momentary snapshot. Therefore, this action would not achieve the goal of excluding the old workloads from the forecast model.
Reference
Nutanix Prism Central Guide - Capacity Runway:
The official documentation for the Capacity Runway feature explains the function of the "Disregard Existing Workloads" option. It is described as the method to create a forecast that ignores the resource consumption of current VMs, which is the precise action needed to model infrastructure requirements for a new set of applications replacing old ones.
An administrator has a custom backup application that requires a 2TB disk and runs in Windows. Throughput is considerably lower than expected.
The application was installed on a VM with the following configuration:
• Four vCPUs with one core/vCPU
• 4GB of Memory
• One 50GB vDisk for the Windows installation
• One 2TB vDisk for the application
What is the recommended configuration change to improve throughput?
A. Increase the number of cores per vCPU
B. Increase the vCPUs assigned to the VM
C. Span the 2TB disk across four vDisks
D. Add 4GB of memory to the VM
Explanation:
The primary issue is low throughput on a single, very large (2TB) virtual disk. In the Nutanix AHV environment (and this principle applies to other hypervisors as well), a single vDisk is managed by a single Controller VM (CVM). This means all I/O for that disk must be processed by one CVM, which can become a bottleneck for intensive, sequential workloads like backups.
The recommended best practice for high-throughput workloads is to stripe the data across multiple smaller vDisks (a hedged example follows the list below). This leverages the distributed nature of the Nutanix storage fabric:
1.Parallel I/O Paths:
When you span the data across four vDisks (e.g., using a Windows Storage Spaces striped volume), each vDisk is managed by a potentially different CVM in the cluster.
2.Increased Aggregate Throughput:
This allows the I/O requests to be serviced in parallel by multiple CVMs and their associated SSD and disk resources, significantly increasing the total available throughput to the application.
3.Eliminates Single CVM Bottleneck:
It distributes the load, preventing any single CVM from becoming the performance limit for the application.
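A minimal sketch, assuming an AHV cluster and placeholder names (BackupVM, default-container) that are not taken from the question, of presenting four smaller vDisks to the VM from any CVM before striping them inside Windows (for example with Storage Spaces or Disk Management):
# Add four 512 GB vDisks to the backup VM in place of the single 2 TB vDisk
acli vm.disk_create BackupVM container=default-container create_size=512G
acli vm.disk_create BackupVM container=default-container create_size=512G
acli vm.disk_create BackupVM container=default-container create_size=512G
acli vm.disk_create BackupVM container=default-container create_size=512G
Inside the guest, the four disks are then combined into a single striped 2TB volume so the application's I/O fans out across four vDisks instead of queuing behind one.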
Why the Other Options Are Incorrect
A. Increase the number of cores per vCPU:
Reason for Incorrectness: In virtualized environments, the operating system sees a "core" and a "vCPU" as the same thing. The configuration of "four vCPUs with one core/vCPU" is essentially just four vCPUs. Changing this topology (e.g., to two vCPUs with two cores each) provides no performance benefit for a storage-throughput-bound application. The bottleneck is the storage I/O path, not the CPU topology.
B. Increase the vCPUs assigned to the VM:
Reason for Incorrectness: While some backup applications can be CPU-intensive during compression or deduplication, the problem described is specifically "throughput is considerably lower than expected." The existing configuration of 4 vCPUs is likely sufficient unless CPU utilization metrics show it is consistently saturated. Adding more vCPUs will not solve a fundamental storage I/O bottleneck and can even introduce minor CPU scheduling overhead.
D. Add 4GB of memory to the VM:
Reason for Incorrectness: Adding memory can help if the application is suffering from excessive paging due to lack of RAM. However, the symptom is low throughput (MB/s), not high latency or general slowness due to swapping. The 4GB configuration is adequate for a standard Windows workload, and the 2TB disk points to the I/O path as the clear bottleneck, not memory.
Reference:
Nutanix Best Practices - Virtual Machine Configuration
The official best practices guides recommend using multiple vDisks striped together at the guest OS level for high-performance workloads that require high sequential throughput (e.g., database logs, backup targets, large file processing). This is a standard performance optimization technique to parallelize I/O and leverage the distributed storage architecture effectively.
A consultant creates a new cluster using ESXi as the hypervisor. After creating the cluster, the consultant begins to run Life Cycle Manager (LCM) updates. During the LCM upgrade pre-checks, an error is returned. Which configuration is causing this issue?
A. ESXi cluster admission control is disabled.
B. ESXi cluster DRS is enabled.
C. ESXi cluster HA is enabled.
D. ESXi cluster admission control is enabled.
Explanation:
vSphere Admission Control is a feature of vSphere High Availability (HA) that reserves capacity within a cluster to ensure failover for VMs in case of a host failure.
Life Cycle Manager (LCM) performs comprehensive pre-checks before any upgrade to ensure the process will be safe and successful. A key requirement for LCM to perform a rolling hypervisor upgrade is that it must be able to live migrate (vMotion) all VMs off a host before placing it into maintenance mode and upgrading it.
If Admission Control is enabled with a strict policy (e.g., specifying a specific host failure tolerance), it can prevent a host from entering maintenance mode. vSphere's Admission Control logic may determine that evacuating a host would violate the configured failover capacity rules, leaving the cluster in an "unprotected" state. Consequently, it will block the operation, which in turn causes the LCM pre-check to fail.
Therefore, to allow LCM to proceed with its rolling upgrade process, Admission Control must be temporarily disabled.
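As a hedged illustration (the cluster name is a placeholder, and this is a PowerCLI one-liner run from a management workstation rather than anything LCM does itself), admission control can be switched off for the upgrade window and switched back on afterwards:
# Temporarily disable HA admission control before starting LCM updates
Get-Cluster "NTNX-ESXi-Cluster" | Set-Cluster -HAAdmissionControlEnabled:$false -Confirm:$false
# Re-enable admission control once all LCM upgrades have completed
Get-Cluster "NTNX-ESXi-Cluster" | Set-Cluster -HAAdmissionControlEnabled:$true -Confirm:$false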
Why the Other Options Are Incorrect
A. ESXi cluster admission control is disabled.
Reason for Incorrectness: This is the desired state for running LCM upgrades. If admission control is already disabled, it would not cause a pre-check failure; it would satisfy the prerequisite.
B. ESXi cluster DRS is enabled.
Reason for Incorrectness:
Distributed Resource Scheduler (DRS) is actually beneficial and recommended for LCM upgrades. DRS automates the live migration (vMotion) of VMs away from the host that LCM is preparing to upgrade. Having DRS enabled helps the process, it does not hinder it.
C. ESXi cluster HA is enabled.
Reason for Incorrectness: Enabling vSphere HA by itself is not the problem. The issue is specifically the Admission Control sub-feature of HA. LCM can successfully operate in clusters where HA is enabled, as long as Admission Control is disabled to allow hosts to enter maintenance mode freely.
Reference:
Nutanix Life Cycle Manager (LCM) Guide - Prerequisites for ESXi Upgrades:
The official documentation explicitly lists vSphere Admission Control as a feature that must be disabled before running hypervisor updates with LCM. The pre-check failure described is a direct result of this specific configuration conflict, as LCM cannot guarantee it can evacuate hosts without being blocked by the admission control policy.
An organization is planning an upgrade to AOS 5.15 and wants to understand which cluster products and/or services are supported for physical traffic isolation.
Which Nutanix component supports its network traffic being isolated onto its own virtual network?
A. Volumes
B. Objects
C. Containers
D. Files
Explanation:
In Nutanix AOS 5.15, Volumes (also known as Nutanix Volumes or Volume Groups) are the only component among the listed options that support physical traffic isolation. This capability allows block storage traffic to be routed over a dedicated virtual network, separate from other cluster services, ensuring performance and security for external iSCSI clients.
🔹 Why Volumes support traffic isolation
Nutanix Volumes expose block storage via iSCSI protocol, typically used by physical servers or non-AHV hypervisors.
Administrators can configure svm-iscsi network interfaces on Controller VMs (CVMs) to use dedicated VLANs or virtual switches, isolating iSCSI traffic from management, replication, or VM traffic.
This setup is essential for environments with external SAN replacement workflows, where predictable throughput and isolation are required.
“You can isolate Nutanix Volumes traffic by configuring dedicated virtual networks for iSCSI access.”
Reference
Nutanix Volumes Guide – Network Configuration
❌ Why the other options are incorrect:
B. Objects:
Nutanix Objects is an S3-compatible object storage service. It uses internal cluster networking and does not support physical traffic isolation via dedicated virtual networks.
C. Containers:
Containers in Nutanix refer to logical storage units within the cluster. They are accessed via NFS or SMB and do not support isolated physical traffic paths.
D. Files:
Nutanix Files provides file services (SMB/NFS) for VMs and users. While it supports multi-tenancy and access control, it does not offer physical traffic isolation via dedicated virtual networks.
🔹 Operational Impact
Isolating Volumes traffic improves security, performance, and compliance in hybrid environments.
It allows external hosts to access Nutanix block storage without interfering with cluster operations.
Proper configuration involves setting up svm-iscsi-pg port groups, VLAN tagging, and validating CVM NIC bindings; a brief client-side sketch follows below.
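As a minimal, hedged sketch of the client side (the data services IP 10.10.50.10 and the Linux iSCSI initiator are assumptions, not details from the question), an external host reaches Volumes over the dedicated iSCSI network like this:
# Discover Volumes targets through the cluster data services IP on the isolated iSCSI VLAN
iscsiadm -m discovery -t sendtargets -p 10.10.50.10:3260
# Log in to the discovered target(s)
iscsiadm -m node --login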
A consultant is onsite and needs to start their Foundation Virtual Machine (FVM) to image Nutanix hardware. The FVM fails to boot. The consultant decides to use a different Foundation method. Assuming the nodes will discover, which Foundation type or configuration option should the consultant select?
A. Foundation VM
B. Use IPMI IPs
C. Use IPMI MACs
D. Foundation Applet
Explanation:
If the Foundation Virtual Machine (FVM) fails to boot, the consultant can still perform node imaging by using the Foundation Applet — a lightweight, browser-based version of Foundation that runs directly from a local workstation or laptop.
Foundation Applet is specifically designed as an alternative imaging method when:
The Foundation VM is not available or fails to start.
The consultant is working onsite and needs to image nodes that can self-discover on the same subnet.
There is no preconfigured Foundation infrastructure available.
When nodes are powered on and connected to the same Layer 2 network as the consultant’s laptop, the Foundation Applet will automatically discover them via multicast, and imaging can proceed directly through the local browser.
Why This Is Correct:
The question states:
“Assuming the nodes will discover…”
That means the nodes can be detected automatically on the network — exactly how Foundation Applet operates.
The Foundation Applet does not rely on a pre-booted virtual machine or server; it simply runs on a laptop connected to the same network, making it ideal for field deployments or backup imaging scenarios.
Why the Other Options Are Incorrect:
A. Foundation VM – ❌
The Foundation VM has already failed to boot, so selecting the same method again would not resolve the issue.
B. Use IPMI IPs – ❌
The “Use IPMI IPs” option is used when imaging via Out-of-Band (OOB) management interfaces (IPMI/BMC), not when nodes are auto-discovered on the same subnet.
It also requires the Foundation VM or Applet to be functional.
C. Use IPMI MACs – ❌
Similar to using IPMI IPs, this option applies to out-of-band imaging and is not used when the nodes are discoverable automatically.
Reference:
Nutanix Foundation Guide
“Foundation Applet is a lightweight browser-based deployment tool that can be used when the Foundation VM is not available. It can discover unconfigured nodes on the same subnet and perform full imaging operations.”
🔗 Nutanix Foundation Guide – portal.nutanix.com
Nutanix Field Installation Guide
“When Foundation VM cannot be used, the Foundation Applet provides equivalent imaging functionality directly from a laptop or desktop system.”
Summary:
When the Foundation VM fails to boot, and nodes can be discovered on the same network, the consultant should use the Foundation Applet method to perform imaging and cluster creation.
What must be set to enable node discovery when using Foundation VM?
A. Host-only Adapter
B. Bridged Adapter
C. Network Address Translation
D. Internet Access
Explanation:
For the Foundation VM to discover the bare-metal Nutanix nodes on the network, it must be on the same Layer 2 broadcast domain as the nodes' management and BMC (IPMI) interfaces.
A Bridged Adapter network configuration in the desktop hypervisor running the Foundation VM (such as Oracle VirtualBox or VMware Workstation) directly connects the Foundation VM's virtual network adapter to the physical network. It effectively gives the VM a direct presence on the physical network, as if it were another physical machine. This allows the Foundation VM to:
Send and receive the broadcast/multicast packets needed for node discovery (Foundation discovers unconfigured nodes over IPv6 link-local multicast on the local segment).
Communicate directly with the IP addresses of the nodes once they are powered on.
Without a bridged connection, the Foundation VM would be isolated in a private virtual network and unable to see or communicate with the physical nodes waiting to be imaged.
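As a hedged example (the VM name and host adapter are placeholders, and it assumes the Foundation VM is hosted in Oracle VirtualBox and is powered off), the first virtual NIC can be switched to bridged mode from the command line instead of the GUI:
# Bridge the Foundation VM's first NIC to the laptop's physical adapter
VBoxManage modifyvm "Foundation_VM" --nic1 bridged --bridgeadapter1 "eth0"
In VMware Workstation the equivalent change is selecting Bridged for the VM's network adapter in the virtual machine settings.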
Why the Other Options Are Incorrect
A. Host-only Adapter:
Reason for Incorrectness: This configuration creates a private network shared only between the Foundation VM and the hypervisor host. It completely isolates the VM from the physical network where the nodes are located, making discovery impossible.
C. Network Address Translation (NAT):
Reason for Incorrectness: NAT allows the Foundation VM to access external networks (like the internet) through the host's IP address, but it does not allow inbound connections or participation in the local network's broadcast domain. The Foundation VM cannot receive discovery responses from the nodes, so this method will fail.
D. Internet Access:
Reason for Incorrectness: While the Foundation VM needs internet access to download software bundles from the Nutanix portal, internet access alone is not sufficient for node discovery. Discovery requires local network layer 2 connectivity, which is not guaranteed by just having a route to the internet.
Reference:
Nutanix Foundation Guide - Prerequisites:
The official documentation explicitly states the network requirement for the Foundation VM: "The host on which the Foundation VM runs must be connected to the same network as the nodes you want to configure and must have a bridged network connection." This is a mandatory prerequisite for the discovery phase to function correctly.
A consultant is completing final network failover testing on a recently deployed AHV cluster.
The cluster consists of the following:
• Three nodes
• Two uplinks connected to two switches
• Supported SFPs
The consultant removed one SFP+ from each node. Two of the nodes (Host 1 / Host 2) failed over to the backup NIC correctly without any interruption but lost communication with the third node's (Host 3) hypervisor/CVM. Inserting the original SFP+ back into the problematic node restored the network connection, and the two healthy nodes could communicate with it successfully again.
manage_ovs show_interfaces output:
name  mode   link   speed
eth0  10000  True   10000
eth1  10000  False  None
eth2  10000  True   10000
eth3  10000  False  None
manage_ovs show_uplinks output:
Bridge: br0
  Bond: br0-up
    bond_mode: active-backup
    interfaces: eth1 eth0
    lacp: off
    lacp-fallback: false
    lacp_speed: slow
What should the consultant validate first?
A. Validate NIC card on node is seated properly.
B. Validate that the correct interfaces are included in the bridge.
C. Validate the customer has configured all their switch ports identically.
D. Validate the SFP+ is fully plugged into the node and switch.
Explanation:
The issue described — where Host 3 loses connectivity after SFP+ removal, and regains it only after reinsertion — strongly suggests a physical layer fault, most likely due to an improperly seated SFP+ transceiver or cable.
The manage_ovs show_interfaces output shows eth1 link status as False and speed as None, indicating no physical link.
In an active-backup bond, if the active NIC (eth0) is removed, the system should fail over to eth1. But if eth1 has no link, failover fails.
Since reinserting the SFP+ restores connectivity, the most probable cause is that the SFP+ was not fully seated or the cable was loose, so the physical connection should be validated first (the quick CLI checks below can confirm link state).
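A quick, hedged check sequence (the first two commands run from a CVM, the last on the AHV host; interface names follow the output above) to confirm the physical link before changing anything else:
# Confirm link state of the bond members on the affected node
manage_ovs show_interfaces
# Compare active-backup bond membership and state across all hosts
allssh manage_ovs show_uplinks
# On the AHV host, verify the suspect port detects a link after reseating the SFP+
ethtool eth1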
❌ Why other options are incorrect:
A. Validate NIC card on node is seated properly:
NIC seating issues would cause persistent failure, not one resolved by reinserting the SFP+.
B. Validate correct interfaces in the bridge:
The bond includes eth0 and eth1, which is correct. No misconfiguration is indicated in the manage_ovs show_uplinks output.
C. Validate switch port configuration:
While important, switch misconfiguration would affect all nodes. Here, only Host 3 is impacted, pointing to a local physical issue.
🔗 Reference:
Nutanix AHV Networking Guide – NIC Bonding and Failover
“In active-backup mode, failover occurs only if the backup NIC has a valid link. Physical connectivity must be verified.”
An administrator needs to replace an aging SAN and move to a hyper-converged infrastructure. The existing environment consists of the following hosts that are connected to the SAN:
• 5xAIX hosts
• 3x Hyper-V hosts
• 9xESXi hosts
• 2x physical SQL Clusters (Windows Server 2012R2 hosts)
After deploying a Nutanix AHV cluster, which two actions should the administrator take to meet the requirements? (Choose two.)
A. Deploy Volumes to support the AIX and SQL workloads.
B. Migrate the ESXi workloads to AHV using Move.
C. Deploy Files to support the AIX hosts.
D. Migrate the ESXi and Hyper-V workloads using Move.
An administrator receives reports about a Nutanix environment. The investigation finds the following:
• VMs are experiencing very high latency
• Each node is equipped with a single SSD, utilized at 95%
• Each node is equipped with three HDDs, utilized at 40%
Why are the guest VMs experiencing high latency?
A. CVMs are overwhelmed by disk balancing operations.
B. All VM write operations are going to HDD.
C. All VM read operations are coming from HDD.
D. VMs are unable to perform write operations.
An administrator receives an error indicating that the CVMs in the cluster are not syncing to any NTP servers. An investigation of the issue finds:
• The NTP servers are configured in Prism
• The time on all CVMs is the same
• Both the CVMs and AHV hosts are configured for the UTC time zone
Which two steps can be taken to troubleshoot this issue? (Choose two.)
A. Confirm that the NTP servers are reachable from the CVMs.
B. Restart genesis on all CVMs.
C. On a CVM, run the command allssh ntpq -pn.
D. Restart the chronos service on all CVMs.
An administrator is deploying Nutanix Files 3.5 and needs to configure the sizing of the FSVMs for an increased number of concurrent SMB connections over the default 750. What should the administrator do?
A. Deploy the Files VMs, power down the three FSVMs, change the CPU and RAM via Prism, and then power the three FSVMs back up
B. During installation, click Customize on the File Server Installation screen, change the number of connections, and finish the installation
C. Complete the default installation, change the CPU and RAM in Prism, and then log into the File Server dashboard and change the File Server Properties
D. During installation, input the correct number of connections in the File Server Installation screen and complete the installation