An administrator manages the following two-cluster Nutanix AOS 5.15 environment:
• Corp-cluster-01
• Corp-cluster-02
The VM images must be available only on Corp-cluster-01 and must not be checked out to cluster Corp-cluster-02, nor to any other clusters that are registered with Prism Central in the future. Which two configuration settings must the administrator choose when creating the image placement policy that satisfies the stated requirements? (Choose two.)
A. Create an image placement policy that identifies cluster Corp-cluster-01 as the target cluster
B. Set the policy enforcement to Soft.
C. Set the policy enforcement to Hard.
D. Create an image placement policy that identifies cluster Corp-cluster-02 as the target cluster.
Explanation:
The requirement is very specific: VM images must only be on Corp-cluster-01 and cannot be checked out to any other cluster, now or in the future. This demands a strict, non-negotiable configuration.
A. Create an image placement policy that identifies cluster Corp-cluster-01 as the target cluster:
This action defines the scope of the policy. By specifying Corp-cluster-01 as the target, you are instructing the Image Service that this is the sole, authorized location for the specified images. This prevents the images from being initially placed on any other cluster.
C. Set the policy enforcement to Hard:
This action defines the behavior of the policy. A Hard enforcement policy is mandatory and prevents any violation. In this context, it will block the checkout operation of the image to any other cluster, including Corp-cluster-02 and any future clusters. This is the only setting that guarantees the requirement that images "cannot be checked out" to other clusters.
Combined Effect: The "Hard" placement policy on Corp-cluster-01 acts as a strict pinning rule. It ensures images are stored only on Corp-cluster-01 and actively enforces this by preventing any operation (like checking out a VM) that would require transferring the image to another cluster.
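For illustration, the policy described above can also be expressed as an API payload. The sketch below follows the shape of the Prism Central v3 `image_placement_policies` API, where `EXACTLY` is commonly understood to correspond to Hard enforcement and `AT_LEAST` to Soft; the category names and field values are hypothetical assumptions, so verify them against your Prism Central version's API explorer before use.

```python
# Hypothetical sketch of a Hard image placement policy payload, modeled on
# the Prism Central v3 image_placement_policies API. Category names
# ("AppType: CorpImages", "ClusterRole: Corp-cluster-01") are placeholders.
policy = {
    "metadata": {"kind": "image_placement_policy"},
    "spec": {
        "name": "pin-images-to-corp-cluster-01",
        "resources": {
            # "EXACTLY" ~ Hard enforcement (images only on matched clusters);
            # "AT_LEAST" ~ Soft enforcement (advisory, violations allowed).
            "placement_type": "EXACTLY",
            "image_entity_filter": {
                "type": "CATEGORIES_MATCH_ANY",
                "params": {"AppType": ["CorpImages"]},
            },
            "cluster_entity_filter": {
                "type": "CATEGORIES_MATCH_ANY",
                "params": {"ClusterRole": ["Corp-cluster-01"]},
            },
        },
    },
}
print(policy["spec"]["resources"]["placement_type"])
```

Because the cluster filter is category-based, any future cluster registered with Prism Central simply never matches the policy's target category, so the Hard enforcement continues to block checkouts without further changes.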
Why the Other Options Are Incorrect
B. Set the policy enforcement to Soft:
Reason for Incorrectness: A "Soft" enforcement policy is an advisory or warning policy. It will log a warning message if a user tries to check out an image to a non-target cluster (like Corp-cluster-02), but it will allow the operation to proceed. This violates the core requirement that the images "cannot be checked out" to other clusters. A soft policy does not provide the mandatory enforcement needed.
D. Create an image placement policy that identifies cluster Corp-cluster-02 as the target cluster.
Reason for Incorrectness: This does the exact opposite of what is required. Configuring Corp-cluster-02 as the target would encourage or mandate that images be placed there, which directly conflicts with the requirement that they must be available only on Corp-cluster-01.
Reference
Nutanix Image Placement Policies Guide:
The official documentation explains the distinct behaviors of Hard and Soft enforcement for image placement policies.
Hard Enforcement:
"Prevents a user from performing an operation that violates the policy." This is used to meet compliance or strict availability requirements, which matches the scenario's mandate.
Soft Enforcement:
"Allows a user to perform an operation that violates the policy but generates a warning message in the audit log." This is used for guidance but does not prevent the action.
A customer wants to validate the Microsoft SQL workload performance for a CRM application before migration to the new Nutanix cluster. Which activity must the consultant add as part of the test plan to fulfill this requirement?
A. X-Ray with OLAP Workload.
B. Run perfmon on guest VM.
C. X-Ray with OLTP Workload.
D. Run perfmon on all CVMs.
Explanation:
C is correct because Microsoft SQL Server running a CRM application typically uses OLTP (Online Transaction Processing) patterns — characterized by frequent, small transactions like inserts, updates, and deletes. Nutanix X-Ray is the official performance validation tool, and its OLTP workload profile is designed to simulate real-world transactional database behavior, making it ideal for validating SQL performance before migration.
❌ Why the other options are incorrect:
A. X-Ray with OLAP Workload:
OLAP (Online Analytical Processing) simulates large, read-heavy queries typical of data warehousing, not transactional CRM workloads. Using OLAP would misrepresent the actual SQL usage pattern.
B. Run perfmon on guest VM:
While useful for monitoring, perfmon is reactive and lacks standardized benchmarking. It doesn’t simulate workload or provide comparative performance metrics across clusters.
D. Run perfmon on all CVMs:
CVMs (Controller VMs) handle Nutanix cluster services, not guest SQL workloads. Monitoring CVMs won’t validate SQL performance for the CRM application.
🔗 References:
Nutanix X-Ray documentation:
X-Ray Workload Profiles – OLTP vs OLAP
Nutanix SQL Server Best Practices:
Running Microsoft SQL Server on Nutanix
A consultant is planning an installation and needs to collect configuration items to be used during the install. The data needed from the customer are IP addresses, Gateway, DNS servers, and NTP Servers. Which Cluster Deployment document must be completed with the customer?
A. Project Plan
B. Operations Guide
C. Questionnaire
D. Technical Checklist
Explanation:
During a Nutanix cluster installation, the consultant needs to collect specific configuration details from the customer, such as:
IP addresses for nodes and management interfaces
Gateway information
DNS server details
NTP server information
These details are essential for the deployment process and must be gathered before installation.
The Cluster Deployment Questionnaire is the official Nutanix document used to capture all the necessary configuration information from the customer. It ensures that the installation team has all required inputs to proceed without delays.
Why the Other Options Are Incorrect:
A. Project Plan – ❌
A project plan outlines timelines, tasks, milestones, and responsibilities.
It does not capture specific configuration data like IPs or NTP servers.
B. Operations Guide – ❌
The operations guide documents post-deployment procedures and ongoing operational best practices.
It is not used for initial data collection for installation.
D. Technical Checklist – ❌
A technical checklist is usually a step-by-step verification tool used during or after installation to ensure components are configured correctly.
It does not serve as a data-gathering tool from the customer.
Reference:
Nutanix Cluster Deployment Guide
“Prior to installation, a completed Cluster Deployment Questionnaire is required from the customer to collect IP addresses, VLANs, DNS, Gateway, and NTP server information.”
🔗 Nutanix Cluster Deployment Guide – Portal
Nutanix Professional Services Best Practices
“The Questionnaire ensures that all network and system configuration details are collected in advance of deployment to prevent errors and reduce installation time.”
🔗 Nutanix PS Best Practices
Summary:
To collect IP addresses, gateway, DNS servers, and NTP servers from the customer before cluster installation, the consultant must use the Cluster Deployment Questionnaire.
An administrator is responsible for the following Nutanix Enterprise Cloud environment:
• A central datacenter with a 20-node cluster with 1.5PB of storage
• Five remote sites each with a 4-node cluster with 200TB storage
The remote sites are connected to the datacenter via 1GB links with an average latency of 6 ms RTT. What is the minimum RPO the administrator can achieve for this environment?
A. 0 minutes
B. 15 minutes
C. 1 hour
D. 6 hours
Explanation:
To determine the minimum Recovery Point Objective (RPO) in this scenario, we must consider Nutanix replication technologies and their latency requirements:
🔹 NearSync Replication
Nutanix NearSync is designed for low-latency WAN links and supports RPOs as low as 1 minute, but only under optimal conditions.
According to Nutanix documentation, NearSync requires RTT latency below 5ms to achieve 1-minute RPO.
In this case, the WAN latency is 6ms RTT, which exceeds NearSync’s threshold for 1-minute RPO.
Therefore, the system cannot use NearSync at its lowest setting, and falls back to the next supported RPO tier.
🔹 Async Replication
Nutanix Async replication supports RPOs starting at 15 minutes, and is designed for higher latency WAN links.
With 6ms RTT and 1Gbps bandwidth, Async replication is the appropriate choice, and 15 minutes becomes the minimum achievable RPO.
❌ Why other options are incorrect:
A. 0 minutes:
This implies synchronous replication, which is not supported over 6ms RTT WAN links. Sync replication requires sub-2ms latency and high bandwidth, typically within metro clusters.
C. 1 hour and D. 6 hours:
These are valid RPOs for Async replication, but they are not the minimum. Nutanix Async supports RPOs as low as 15 minutes, so these options are unnecessarily high.
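The tier selection described above can be sketched in a few lines. The thresholds are the ones quoted in this explanation (NearSync below 5 ms RTT, Async otherwise), used here purely for illustration, not as an official sizing formula:

```python
def min_rpo_minutes(rtt_ms: float) -> int:
    """Map WAN round-trip latency to the lowest supported RPO tier,
    using the latency thresholds quoted in the explanation above."""
    if rtt_ms < 5:   # NearSync window: RPO as low as 1 minute
        return 1
    return 15        # Async replication floor: 15-minute RPO

# The 6 ms RTT links in this scenario exceed the NearSync threshold:
print(min_rpo_minutes(6))   # 15
```

With 6 ms RTT, the function lands on the Async floor, matching answer B.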
🔗 References:
Nutanix NearSync Replication Requirements:
Nutanix Data Protection Guide – NearSync
“NearSync replication supports RPOs as low as 1 minute for clusters with RTT latency under 5ms.”
Nutanix Async Replication:
Nutanix Disaster Recovery Guide
“Async replication supports RPOs starting at 15 minutes for WAN-connected clusters.”
An administrator finds that home shares cannot be configured in a new Files 3.5 deployment. Why is this happening?
A. NFS default access is set to Read Only.
B. Multi-protocol access is not configured.
C. Access Based Enumeration is not enabled.
D. The system is deployed as a single FSVM.
Explanation:
Home shares (distributed shares for user home directories) are a feature in Nutanix Files that gives each user a private directory within a share whose data is spread across multiple File Server VMs. For this feature to function, the file server must be deployed in a multi-FSVM, highly available (HA) configuration.
When Nutanix Files is deployed, you choose between a single FSVM or a multi-FSVM configuration:
Multi-FSVM (Highly Available):
This is the standard, recommended production deployment. It distributes multiple File Server VMs (FSVMs), a minimum of three, across the cluster's nodes. This configuration supports all features, including Home Shares, because it provides the redundant infrastructure needed for the feature's management and data placement.
Single FSVM (Non-HA):
This is a minimal deployment, often used for test labs or specific use cases that do not require high availability. The Home Shares feature is explicitly disabled and unavailable in a single-FSVM deployment. This is an architectural limitation of the feature, which relies on the multi-node, HA framework.
Therefore, if an administrator is unable to configure home shares in a new deployment, the most probable cause is that the Files domain was deployed with only a single FSVM.
Why the Other Options Are Incorrect
A. NFS default access is set to Read Only:
This setting affects NFS exports and would impact how clients access NFS shares. Home shares are primarily accessed via the SMB/CIFS protocol for Windows users. This NFS setting is unrelated to the ability to configure the home share feature itself in the Files console.
B. Multi-protocol access is not configured:
Multi-protocol access allows a single share to be accessed simultaneously by SMB and NFS clients. While home shares are typically SMB-centric, the lack of multi-protocol configuration does not prevent the home share feature from being enabled or configured in the Files management interface.
C. Access Based Enumeration is not enabled:
Access Based Enumeration (ABE) is a security feature that filters the list of files and folders in a share to only those that the user has access to. This is a configurable setting for a share and does not gate the availability of the Home Shares feature at the global file server level.
Reference
Nutanix Files Administration Guide - Home Shares Prerequisites: The official documentation explicitly lists prerequisites for configuring home directories. It states that a highly available (multi-FSVM) Files domain is required. The deployment wizard also typically presents a warning if you attempt to configure home shares on a single-FSVM deployment, indicating the feature is unavailable.
Nutanix Files Planning Guide - Deployment Types:
The guide differentiates between single-node (non-HA) and multi-node (HA) deployments, noting that advanced features like Home Shares are only available in the HA configuration. This is a fundamental design constraint of the product.
An administrator needs to replace the self-signed certificate on a cluster. Which two requirements must be met as part of the process? (Choose two.)
A. The cluster administrator must restart the interface gateway.
B. The signed, intermediate and root certificates are chained.
C. The existing certificate must be deleted prior to replacement.
D. The imported files for the custom certificate must be PEM encoded.
Explanation:
Replacing the default self-signed certificate with a custom Certificate Authority (CA)-signed certificate is a common task to meet enterprise security policies. The Nutanix Prism interface has specific requirements for this process.
B. The signed, intermediate and root certificates are chained.
This means you must provide a single certificate file that contains the entire trust chain. The order is critical: the server certificate must be first, followed by any intermediate CA certificates, and ending with the root CA certificate. This allows the client to build the trust path from your server's certificate back to a trusted root authority.
D. The imported files for the custom certificate must be PEM encoded.
The Prism certificate upload functionality specifically requires certificates and private keys to be in the Privacy-Enhanced Mail (PEM) format. This is a base64-encoded text format. Common alternative formats like PKCS#12 (.pfx or .p12) or DER-encoded binary files are not accepted and must be converted to PEM before import.
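As a minimal sketch of the chain layout Prism expects (server certificate first, intermediates next, root last, all PEM encoded), the snippet below splits a chain file into its PEM blocks so the ordering can be checked. The certificate contents are placeholders and the parsing is illustrative, not a substitute for proper validation with a tool like OpenSSL:

```python
import re

# A PEM certificate is a base64 text block between BEGIN/END markers.
PEM_BLOCK = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", re.S)

def split_chain(pem_text: str):
    """Return the individual PEM certificate blocks, in file order."""
    return PEM_BLOCK.findall(pem_text)

# Placeholder chain in the order Prism requires:
# server cert first, then intermediate CA(s), then root CA.
example = (
    "-----BEGIN CERTIFICATE-----\nSERVERCERT\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nINTERMEDIATE-CA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nROOT-CA\n-----END CERTIFICATE-----\n"
)
blocks = split_chain(example)
print(len(blocks))  # 3
```

If a CA delivers the certificate in a binary format such as PKCS#12 or DER, it must be converted to PEM before upload; the `findall` check above would simply return zero blocks on a binary file.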
Why the Other Options Are Incorrect
A. The cluster administrator must restart the interface gateway.
Reason for Incorrectness: This is not a manual requirement. When you upload and apply a new certificate through the Prism Element web console (Settings > Certificate), the system automatically handles the service restarts required for the new certificate to take effect, including the Prism Gateway service. A manual restart is neither required nor recommended.
C. The existing certificate must be deleted prior to replacement.
Reason for Incorrectness: The process is a direct replacement. The Prism interface provides an "Upload Certificate" or "Replace Certificate" function. You provide the new certificate chain and private key, and the system overwrites the existing self-signed certificate. There is no need to manually delete the old certificate first; doing so could cause service interruptions.
Reference:
Nutanix Prism Web Console Guide - Managing Certificates:
The official documentation outlines the precise steps for replacing the cluster certificate. It explicitly states the requirement for a PEM-encoded certificate chain file and a PEM-encoded private key file. The process is designed to be a seamless upload-and-apply operation without requiring manual service restarts or deletion of the old certificate.
Where can the Foundation version included with an AOS bundle be found?
A. AOS Release Notes
B. Upgrade Paths menu
C. Compatibility and Interoperability Matrix
D. Field Installation Guide
Explanation:
When Nutanix releases an AOS (Acropolis Operating System) bundle, it includes compatible versions of several key components — such as Foundation, Prism Central, AHV, and other system utilities.
The Foundation version bundled with a specific AOS release is always listed in the AOS Release Notes, which provide detailed information about:
The exact Foundation version packaged with the AOS build
Compatibility details for AHV, Prism Central, and other components
Upgrade prerequisites and supported upgrade paths
Bug fixes and new features
This allows consultants and administrators to confirm which version of Foundation is embedded or recommended before deploying or upgrading a cluster.
Why the Other Options Are Incorrect:
B. Upgrade Paths menu – ❌
The Upgrade Paths tool or menu only shows valid AOS upgrade compatibility (from one version to another).
It does not list the Foundation version included in a specific AOS bundle.
C. Compatibility and Interoperability Matrix – ❌
The Compatibility Matrix lists supported version pairings (AOS, AHV, Foundation, etc.), but it does not specify which Foundation version is packaged with a particular AOS bundle.
It’s used mainly for validation, not for bundle content details.
D. Field Installation Guide – ❌
The Field Installation Guide provides deployment steps and procedures, not specific version details.
It references Foundation usage but does not indicate the version tied to an AOS release.
Reference:
Nutanix AOS Release Notes
“The AOS Release Notes include the Foundation version packaged with this AOS release, as well as compatibility information for Prism Central and AHV.”
🔗 Nutanix AOS Release Notes – portal.nutanix.com
Nutanix Foundation Guide
“To determine the Foundation version included in an AOS bundle, refer to the AOS Release Notes for that version.”
🔗 Nutanix Foundation Guide – portal.nutanix.com
Summary:
The Foundation version that comes bundled with an AOS release is documented in the AOS Release Notes, not in upgrade tools or compatibility matrices.
A customer with a four-node RF2 cluster is adding application VMs to their system. After adding these VMs, the Prism dashboard shows 81% storage utilization. What is the consequence of running the cluster at 81% storage utilization?
A. The customer has the ability to add more VMs up to the 100% storage utilization.
B. There is available capacity in the storage fabric and the cluster is resilient.
C. Node failure is imminent due to storage utilization.
D. The cluster is not resilient in the storage fabric
Explanation:
At 81% storage utilization in a four-node RF2 Nutanix cluster, the system enters a non-resilient state. Nutanix recommends maintaining storage usage below 75% to ensure the cluster can tolerate node failures and re-replicate data. Once usage exceeds this threshold, the cluster may not have enough free space to re-protect data if a node fails, compromising fault tolerance.
🔹 Why D is correct
Nutanix RF2 (Replication Factor 2) stores two copies of data across different nodes.
If one node fails, the system must re-replicate the lost data to maintain redundancy.
At 81% utilization, there may not be enough space to perform this re-replication, making the cluster not resilient.
Prism will show a “Not Resilient” warning in the dashboard when this threshold is breached.
“When storage utilization exceeds 75%, the cluster may not have enough capacity to re-protect data in the event of a node failure.”
Reference
— Nutanix Support KB: Understanding Resilience and Storage Utilization
❌ Why A is incorrect
While technically possible to add more VMs up to 100% utilization, doing so is operationally unsafe.
Beyond 75%, the cluster loses resilience. At 100%, it risks I/O failure, data unavailability, and inability to perform writes.
Nutanix does not recommend running clusters near full capacity.
“Running a cluster near full capacity can lead to degraded performance and loss of resilience.”
Reference
— Nutanix Best Practices Guide
❌ Why B is incorrect
The statement implies the cluster is resilient, which is false at 81% utilization.
Nutanix Prism will flag the cluster as “Not Resilient”, and the system cannot guarantee data protection in case of node failure.
Available capacity alone does not imply resilience — re-protection capability is the key metric.
“Resilience is not just about available capacity; it’s about the ability to re-protect data.”
Reference
— Nutanix Community Thread on RF2 Resilience
❌ Why C is incorrect
High storage utilization does not directly cause node failure.
However, it prevents recovery from node failure, which is the real risk.
The system remains operational, but data loss or unavailability can occur if a node fails and re-replication is impossible.
“Node failure is not caused by high utilization, but high utilization can prevent recovery from failure.”
Reference
— Nutanix Resilience and Capacity Planning Guide
🔹 Operational Impact Summary
Prism dashboard will show “Not Resilient” warning.
Cluster cannot tolerate node failure without risking data loss.
Administrator must free up space or expand the cluster to restore resilience.
Nutanix recommends proactive monitoring and alerting when utilization crosses 70%.
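The ~75% guidance above can be motivated with simple arithmetic: in an N-node RF2 cluster, losing one node means the surviving N-1 nodes must hold all data and its rebuilt replicas. The snippet below is an illustrative back-of-the-envelope calculation, not an official Nutanix sizing formula:

```python
def rebuild_headroom_fraction(nodes: int) -> float:
    """Fraction of total cluster capacity that can be used while still
    leaving room to re-protect all data after losing one node (RF2)."""
    return (nodes - 1) / nodes

# Four-node cluster: the surviving 3 nodes must absorb everything,
# so utilization above 3/4 = 75% leaves no room to rebuild redundancy.
print(rebuild_headroom_fraction(4))   # 0.75
```

At 81% utilization the cluster is already past this 75% bound, which is why Prism flags it as not resilient.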
An organization is running a Nutanix Cluster based on AOS 5.10.x and VMware vSphere 6.7. Currently, the CVM network is segmented and Storage only nodes not present. A new security project based on NSX is coming. VMware Distributed Virtual Switches are required. The administrator needs to prepare the environment for the new project. Which step should the administrator use to initiate the project?
A. Enable Nutanix Flow at the Prism Central Level
B. Manually disable CVM network Segmentation
C. Convert storage only nodes into vSphere nodes
D. Enable Jumbo Frames to accommodate network frames
Explanation:
The core requirement for the new project is the implementation of VMware NSX, which necessitates the use of VMware Distributed Virtual Switches (vDS). A critical prerequisite for integrating a Nutanix cluster with a vSphere vDS is that the CVM Network Segmentation feature must be disabled.
Here’s why this is the necessary first step:
1.Incompatibility of Features:
CVM Network Segmentation is a Nutanix feature that creates and manages a dedicated, standard vSwitch (vSS) on each ESXi host specifically for the CVM traffic. This is done to ensure isolation and stability for the critical storage traffic.
2.Conflict with vDS:
A VMware Distributed vSwitch is a centralized, cluster-wide networking construct that replaces the standard vSwitches on individual hosts. You cannot have a vSS (managed by CVM Segmentation) and a vDS controlling the same physical NICs (pNICs) simultaneously for the CVM network.
3.Prerequisite for vDS Migration:
Before you can create a vDS and migrate the CVM and VM traffic to it, you must first dismantle the existing dedicated vSwitch structure. This is achieved by manually disabling the CVM Network Segmentation feature. This process will re-associate the CVM and VM networks with the initial, default vSwitch, preparing the hosts for the subsequent migration to the vDS.
Therefore, disabling CVM Network Segmentation is the foundational and mandatory step to prepare the environment for NSX and vDS.
Why the Other Options Are Incorrect
A. Enable Nutanix Flow at the Prism Central Level:
Reason for Incorrectness: Nutanix Flow is the native Nutanix micro-segmentation solution. It is a direct alternative to VMware NSX, not a prerequisite for it. Enabling Flow would be counterproductive and would create a conflict if the project's goal is to implement NSX.
C. Convert storage only nodes into vSphere nodes:
Reason for Incorrectness: The problem statement explicitly says "Storage only nodes not present." This means all nodes in the cluster are running the vSphere hypervisor and the CVM. Therefore, this step is irrelevant and not possible as there are no storage-only nodes to convert.
D. Enable Jumbo Frames to accommodate network frames:
Reason for Incorrectness: While enabling Jumbo Frames can be a performance tuning step for storage or vMotion traffic, it is not a prerequisite for deploying NSX or vDS. The project's initiation is blocked by a fundamental configuration conflict (Segmentation vs. vDS), not a performance optimization.
Reference:
Nutanix Knowledge Base (KB) 7544 - "How to disable CVM Network Segmentation to migrate to a vSphere Distributed Switch (VDS)": This official KB article directly addresses this scenario. It outlines the precise steps an administrator must take to safely disable CVM Network Segmentation as a prerequisite for migrating to a vSphere Distributed Virtual Switch. This is a well-documented procedure required for any NSX-on-Nutanix deployment.
A consultant successfully creates a Nutanix cluster. The consultant needs to configure containers for the customer's business-critical applications and general server workloads. The customer's requirements are to achieve maximum storage space savings and optimize I/O performance. What setting(s) should the consultant enable on the storage container(s)?
A. Compression and deduplication
B. Deduplication only
C. Compression only
For several days, an administrator notices the following alerts:
• CVM NIC Speed Low Warning Alerts
• Warning Alerts of CVM NIC not performing at optimal speed
• CVM is disconnected from the network Critical Alert
• Network Visualization page shows excessive dropped packets on CVM/Host
Which steps should be taken to determine which problem should be addressed first?
A. • Access the Hardware Page to verify resources are available
• Analyze all CVM Speed Alerts in the Alerts/Events page
• Analyze output for the network and interface properties and connectivity issues
B. • Verify Host/CVM connectivity on the Network Visualization page
• Use to verify the bridge and bond configuration
• Review alerts/events page for the CVM disconnected error
C. • Review Alerts page for NIC speed alerts and alert timing
• Analyze the genesis.out log file for process failures
• Assess the NIC properties in the Network Visualization > Host Properties page
D. • Restart networking services on the CVM
• Determine the current configuration of the affected CVM via output
• Access the Alerts/Events page for the CVM network connection failures
Explanation:
The alerts indicate a clear progression from a performance degradation to a complete service outage:
1.Performance Warnings:
"CVM NIC Speed Low" and "not performing at optimal speed" suggest a link negotiation or physical layer issue.
2.Service Impact:
"Excessive dropped packets" confirms data is being lost, affecting storage operations.
3.Critical Failure:
"CVM is disconnected from the network" means the storage controller is offline, impacting all VMs on that node.
When troubleshooting, you must always prioritize the most severe, service-impacting issue first. The "CVM disconnected" critical alert represents a complete failure of a core cluster component and is the top priority.
The steps in Option B correctly follow a logical troubleshooting path for this highest-priority issue:
1.Verify Host/CVM connectivity on the Network Visualization page:
This provides a real-time, visual overview of the network health and immediately confirms the disconnection and its scope.
2.Use to verify the bridge and bond configuration:
This is the most likely root cause. Misconfigured bonds (e.g., an incorrect LACP policy) or a failed NIC within a bond can cause the link to flap, negotiate at a lower speed, drop packets, and ultimately lead to a complete disconnection. Checking this configuration is a direct investigation into the core problem.
3.Review alerts/events page for the CVM disconnected error:
Correlating the timing and details of the critical alert with the other warnings helps build a complete timeline of the failure, confirming that the performance warnings were precursors to the ultimate disconnection.
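The triage ordering described above, handle the most severe alert first, can be sketched as a simple severity sort. The severity labels below mirror the alerts listed in the question; the ranking scheme itself is illustrative:

```python
# Illustrative triage: sort the scenario's alerts by severity so the
# critical "CVM disconnected" alert is investigated first.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

alerts = [
    ("CVM NIC Speed Low", "warning"),
    ("CVM NIC not performing at optimal speed", "warning"),
    ("CVM is disconnected from the network", "critical"),
    ("Excessive dropped packets on CVM/Host", "warning"),
]

triage = sorted(alerts, key=lambda a: SEVERITY_RANK[a[1]])
print(triage[0][0])   # CVM is disconnected from the network
```

The sort puts the service-impacting failure at the head of the queue, which is exactly the prioritization Option B applies.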
Why the Other Options Are Incorrect
A. • Access the Hardware Page to verify resources are available
• Analyze all CVM Speed Alerts in the Alerts/Events page
• Analyze output for the network and interface properties and connectivity issues
Reason for Incorrectness: This option focuses on the less severe performance alerts first ("NIC Speed Low") and suggests checking for hardware resource availability, which is unlikely to be the cause of a specific NIC disconnection. It delays addressing the critical "CVM disconnected" alert.
C. • Review Alerts page for NIC speed alerts and alert timing
• Analyze the genesis.out log file for process failures
• Assess the NIC properties in the Network Visualization > Host Properties page
Reason for Incorrectness: This also prioritizes the performance warnings over the critical failure. While checking genesis.out is a valid troubleshooting step, it is more relevant for process crashes, not a clear network disconnection. This approach is less direct for a confirmed network issue.
D. • Restart networking services on the CVM
• Determine the current configuration of the affected CVM via output
• Access the Alerts/Events page for the CVM network connection failures
Reason for Incorrectness:
Starting with a service restart is a reactive "shot in the dark" that does not follow a systematic troubleshooting methodology. It risks causing additional instability without first understanding the root cause (e.g., a bond misconfiguration). You should always diagnose before attempting a fix.
Reference:
Nutanix Troubleshooting Guide - Network Issues:
The recommended methodology is to start with Prism's Network Visualization page to get a holistic view of the problem, then drill down into host and CVM-specific configurations like bond settings. The criticality of a CVM disconnection mandates it be investigated first, as it directly impacts data availability and is the ultimate consequence of the underlying network problem.
From a hypervisor, which IP address should the consultant use to connect to the locally hosted Foundation VM?
A. 192.168.5.254
B. 172.16.19.2
C. 10.100.5.5
D. 192.168.1.254
Explanation:
When connecting from a hypervisor host (such as AHV, ESXi, or Hyper-V) to the locally hosted Foundation VM during a Nutanix deployment or imaging process, the standard default IP address used by the Foundation VM for local connectivity is:
192.168.5.254
This IP address is assigned automatically to the Foundation virtual machine’s default interface when it runs in local mode (for example, when imaging nodes over a directly connected network or via the hypervisor’s virtual switch).
Administrators or consultants use this IP to:
Access the Foundation UI via a web browser (http://192.168.5.254).
Start imaging and cluster creation workflows.
Validate network connectivity between the Foundation VM and the target nodes.
Why the Other Options Are Incorrect:
B. 172.16.19.2 – ❌
Not a Nutanix Foundation default IP. May be part of a customer’s internal network, but not used by Foundation for local connectivity.
C. 10.100.5.5 – ❌
Private address, but not associated with Foundation’s default configuration.
D. 192.168.1.254 – ❌
Also a common private address, but the official Foundation default IP for local access is 192.168.5.254, not 192.168.1.254.
Reference:
Nutanix Foundation Guide – Default Configuration
“When Foundation is deployed in local mode, it uses 192.168.5.254 as the default IP address for accessing the web console and managing imaging operations.”
🔗 Nutanix Foundation Guide – portal.nutanix.com
Nutanix Field Installation Guide
“Access the local Foundation VM through your browser using the IP address 192.168.5.254 to begin imaging.”
Summary:
The Foundation VM assigns the default IP 192.168.5.254 for local imaging and setup operations.
Consultants use this IP from the hypervisor host to open the Foundation web UI and perform installations or cluster creation.