NCP-MCI-6.10 Practice Test Questions

75 Questions


An administrator using a dark site deployment for LCM is attempting to upgrade to the latest BIOS. After completing an inventory scan, the administrator does not see the expected BIOS version available for upgrade.
What is the most likely reason the latest BIOS is not shown?


A. AOS needs to be upgraded first.


B. The latest compatibility bundle has not been uploaded.


C. The BMC version needs to be upgraded first.


D. The dark site webserver is not accessible.





B.
  The latest compatibility bundle has not been uploaded.

Explanation:
In a dark site (air-gapped) environment, Life Cycle Manager (LCM) cannot connect to the Nutanix update servers to automatically download the latest firmware, drivers, and compatibility information.

The process for updating firmware like BIOS in a dark site is as follows:
Download from Portal: An administrator must manually download the latest Compatibility & Software Bundle from the Nutanix Support Portal on a machine that has internet access.

Upload to LCM: This downloaded bundle file is then uploaded to the Prism Central instance inside the dark site network.

Perform Inventory Scan: After the upload, an inventory scan is run. This scan processes the newly uploaded bundle, making the latest versions (including the expected BIOS) available for upgrade in the LCM interface.
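Before the upload step, it is worth confirming the bundle downloaded intact. A minimal Python sketch, assuming the portal publishes a SHA-256 checksum for the download (the file name and digest below are placeholders):

```python
import hashlib

# Placeholders -- substitute the real bundle name and the checksum
# published on the Nutanix Support Portal for that download.
BUNDLE_PATH = "lcm_dark_site_bundle.tar.gz"
EXPECTED_SHA256 = "replace-with-portal-published-digest"

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large bundles do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(BUNDLE_PATH) != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: re-download the bundle before uploading.")
print("Bundle verified; safe to upload to Prism Central.")
```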

In this scenario, the administrator completed an inventory scan but the latest BIOS is not shown. The most logical reason is that the scan was performed using an out-of-date local bundle. Without uploading the newest bundle from the portal, LCM has no information about the existence of the newer BIOS version and therefore cannot display it as an available update.

Why the Other Options Are Incorrect
A. AOS needs to be upgraded first.
This is incorrect. While some firmware may have dependencies, LCM is designed to show all available updates regardless of order. It will then present any logical sequence or prerequisites during the pre-check phase before the upgrade is executed. The problem states the version isn't shown at all, which is a discovery issue, not a dependency failure.

C. The BMC version needs to be upgraded first.
This is incorrect for the same reason as option A. A BMC dependency would be flagged as a pre-check warning or failure after selecting the BIOS update for remediation. It would not prevent the newer BIOS version from being discovered and listed in the available updates.

D. The dark site webserver is not accessible.
This is incorrect because, by definition, a dark site has no access to external webservers. The entire LCM workflow in a dark site is designed to function without such access, relying on manually uploaded bundles. If the system were trying and failing to reach an external server, it would likely result in a connection error during the scan, not just a missing update.

Reference:
The Nutanix Life Cycle Manager Guide for dark site deployments explicitly outlines this process. It states that to get the latest updates, you must "Download the Nutanix Software & Compatibility Bundle from the Nutanix Portal" and then "Upload the bundle to Prism Central." Performing an inventory scan without first completing this upload step will only refresh the inventory against the old, previously uploaded bundle, explaining why the latest BIOS is not discovered.

If an administrator creates a report with no retention policy configured, how many instances of the report are retained by default?


A. 5


B. 10


C. 15


D. 20





B.
  10

Explanation:
When an administrator creates a report in Nutanix Prism Central without configuring a retention policy, the system retains 10 instances by default. This behavior ensures that recent report history is available for review while maintaining storage efficiency.

✅ Why Option B is Correct
Nutanix Prism Central automatically retains the last 10 report instances if no custom retention policy is set.
This default retention limit applies to scheduled reports and manual report generation.
Once the 11th report is created, the oldest report is purged to maintain the 10-instance cap.
This behavior is consistent across Prism Central versions unless explicitly overridden by a retention policy.
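
The rolling-window behavior described above can be pictured with a short Python sketch (a toy model of the 10-instance cap, not Prism Central's actual implementation):

```python
from collections import deque

DEFAULT_RETENTION = 10  # the default when no retention policy is configured

# maxlen evicts the oldest entry automatically once the cap is reached.
report_instances = deque(maxlen=DEFAULT_RETENTION)

for run in range(1, 12):          # generate 11 report instances
    report_instances.append(f"report_run_{run}")

print(len(report_instances))      # 10 -- the cap holds
print(report_instances[0])        # "report_run_2" -- run 1 was purged
```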

📚 Reference:
Nutanix Documentation: Prism Central Reports

❌ Why Other Options Are Incorrect
A. 5
Nutanix does not default to retaining only 5 reports.
This number is too low and would not support adequate historical analysis.
No official documentation supports a 5-instance default.

C. 15
Retaining 15 reports requires a custom retention policy.
The default behavior does not exceed 10 instances unless explicitly configured.
Prism Central allows administrators to increase retention, but it is not automatic.

D. 20
Like option C, retaining 20 reports is only possible through manual configuration.
The system does not default to this value.
Higher retention values may impact storage and performance, which is why Nutanix sets a conservative default.

Summary:
The default retention behavior in Nutanix Prism Central is designed to balance usability and resource efficiency. Retaining 10 report instances without a policy ensures administrators have access to recent data while avoiding unnecessary storage consumption. Options A, C, and D suggest values that either undercut or exceed the default and are not supported unless explicitly configured.

An administrator is trying to delete a protected snapshot but is unable to do so.
What is the most likely cause?


A. There is an active recovery occurring at that time.


B. Ransomware has encrypted the snapshot.


C. There is an approval policy that was denied.


D. The snapshot has been corrupted.





A.
  There is an active recovery occurring at that time.

Explanation:
In the Nutanix ecosystem, a snapshot that is part of an active data protection operation is locked to maintain data integrity. The most common scenario that prevents deletion is an active recovery or replication job that is currently using that snapshot.

During a Recovery: If a VM is being restored from a specific snapshot, that snapshot is actively in use. Deleting it mid-recovery would corrupt the restore process.

During Replication: If a snapshot is part of a Protection Domain (PD) and is currently being replicated to a remote site, it is locked until the transfer is complete. Deleting it would break the replication chain.

The system enforces this lock to prevent data loss and ensure the consistency of data protection workflows. The administrator would need to wait for the ongoing operation to complete before the snapshot can be deleted.
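
Conceptually, the lock works like the reference count in this Python sketch (an illustrative toy model; the class and method names are invented, not Nutanix code):

```python
class ProtectedSnapshot:
    """Toy model of a snapshot that dependent operations can pin."""

    def __init__(self, name: str):
        self.name = name
        self.active_operations = 0  # recoveries/replications using this snapshot

    def begin_operation(self):
        self.active_operations += 1

    def end_operation(self):
        self.active_operations -= 1

    def delete(self):
        if self.active_operations > 0:
            raise RuntimeError(f"Cannot delete {self.name}: snapshot is in use.")
        print(f"{self.name} deleted.")

snap = ProtectedSnapshot("daily-0400")
snap.begin_operation()        # a restore from this snapshot starts
try:
    snap.delete()             # refused while the recovery is running
except RuntimeError as err:
    print(err)
snap.end_operation()          # recovery completes
snap.delete()                 # now succeeds
```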

Why the Other Options Are Incorrect
B. Ransomware has encrypted the snapshot.
This is highly unlikely. Nutanix snapshots are stored in a proprietary, immutable format at the hypervisor and CVM level, isolated from the guest VM's operating system. Ransomware running inside a VM cannot access or encrypt the underlying storage snapshots.

C. There is an approval policy that was denied.
Approval policies in Nutanix (e.g., via Calm) typically govern the initiation of an action, such as creating a resource or launching a blueprint. They do not generally function as a lock preventing the deletion of an existing, protected object like a snapshot. A denial would prevent the delete job from starting, but the likely cause is a system-enforced lock, not a policy decision.

D. The snapshot has been corrupted.
While possible, this is not the "most likely" cause. A corrupted snapshot would more often result in an error message during a recovery attempt rather than actively preventing a deletion. The system lock due to an active process is a far more frequent and logical reason for the failure.

Reference:
This behavior is consistent with data protection principles in the Nutanix platform. The Nutanix documentation on Data Protection and Snapshot Management describes how snapshots are managed as part of a protection chain. Operations that rely on a specific snapshot will place a temporary lock on it, preventing its deletion until the dependent operation is finished.

An administrator is preparing for a firmware upgrade on a host and wants to manually migrate VMs before executing the LCM upgrade. However, one VM is unable to migrate while others migrate successfully.
Which action would fix the issue?


A. Enable Acropolis Dynamic Scheduling (ADS) at the cluster level.


B. Update Link Layer Discovery Protocol (LLDP).


C. Disable Agent VM within the VM configuration options.


D. Configure backplane port groups that are assigned to the CVM.





C.
  Disable Agent VM within the VM configuration options.

Explanation:
The "Agent VM" setting is a configuration option applied to a specific VM. When this setting is enabled, it pins the VM to its current host and explicitly prevents it from being live-migrated. This is a common cause for a single VM failing to migrate while others succeed.

This feature is often used for:
Licensing: To comply with software licensing that is tied to a specific physical host.
Performance: For applications that are extremely sensitive to any microsecond-level latency introduced during a migration.
Security/Policy: To ensure a VM never moves from a designated, secured host.

Disabling this setting for the problematic VM removes the pinning restriction and allows the manual migration (and subsequent LCM operations) to proceed.

Why the Other Options Are Incorrect
A. Enable Acropolis Dynamic Scheduling (ADS) at the cluster level.
This is incorrect because ADS is an automated load-balancing feature. While it can initiate migrations, it does not override a hard restriction like the "Agent VM" setting. Furthermore, the administrator is attempting a manual migration, which should work regardless of the ADS cluster setting.

B. Update Link Layer Discovery Protocol (LLDP).
This is incorrect. LLDP is a network protocol used for discovering physical network topology. It is unrelated to the VM migration capability within the Nutanix AHV hypervisor. A misconfiguration here might cause network issues for a migrated VM, but it would not singularly prevent the migration operation itself from starting.

D. Configure backplane port groups that are assigned to the CVM.
This is incorrect. The backplane network is used for internal communication between CVMs and for storage data traffic. It is not involved in the live migration process of user VMs. Reconfiguring it would not resolve a VM-specific migration block.

Reference:
This behavior is documented in the Nutanix AHV configuration guides. The "Agent VM" setting is a well-known attribute that controls VM mobility. It can be viewed and modified in Prism Element under the VM's Settings > Configure > Options. The description for this option typically states that enabling it will prevent the VM from being migrated.

An administrator wants to live-migrate a vGPU-enabled VM from one host to another within the same cluster.
What requirements must be met before initiating the migration?


A. The target host has sufficient resources to support the VM.


B. The vGPU profile needs to be changed.


C. The VM must be configured as an agent VM.


D. The host affinity for the VM must be set to a specific host.





A.
  The target host has sufficient resources to support the VM.

Explanation:
Live migration of a vGPU-enabled VM has specific and stringent requirements. The most fundamental requirement is that the destination host must have the necessary physical resources to accommodate the VM. This includes:

Identical vGPU Profile: The target host must have an available GPU with the exact same vGPU profile type and version (e.g., NVIDIA A100-1B, vGPU software version 15.x). The vGPU profile cannot be changed during a live migration.

Available vGPU Capacity: There must be a free vGPU instance of that specific profile on the target host's physical GPU.

Standard Resources: The target host must also have sufficient standard resources like CPU, memory, and network connectivity.

If any of these resource requirements are not met on the target host, the live migration pre-check will fail, and the operation will not start. Ensuring the target host has sufficient and compatible resources is the primary prerequisite.
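
The pre-check can be pictured with a small Python sketch (a simplified model over assumed dictionaries, not the actual AHV pre-check logic):

```python
def can_live_migrate(vm: dict, target: dict) -> tuple[bool, str]:
    """Toy pre-check mirroring the resource conditions listed above."""
    profile = vm["vgpu_profile"]
    if target["free_vgpu_instances"].get(profile, 0) < 1:
        return False, f"No free instance of identical vGPU profile {profile} on target."
    if vm["memory_gib"] > target["free_memory_gib"]:
        return False, "Insufficient free memory on the target host."
    return True, "Pre-check passed."

# Hypothetical inventory data -- the real pre-check reads this from the cluster.
vm = {"vgpu_profile": "A100-1B", "memory_gib": 16}
target = {"free_vgpu_instances": {"A100-1B": 2}, "free_memory_gib": 64}
print(can_live_migrate(vm, target))  # (True, 'Pre-check passed.')
```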

Why the Other Options Are Incorrect
B. The vGPU profile needs to be changed.
This is incorrect and would actually prevent the migration. A live migration for a vGPU-enabled VM requires the vGPU profile to remain identical on both the source and destination hosts. Changing the profile is not supported during a live migration and would require a power-off operation.

C. The VM must be configured as an agent VM.
This is incorrect. Configuring a VM as an "Agent VM" pins it to its current host and prevents any migration, live or otherwise. This setting would be the cause of a migration failure, not a requirement for it to succeed.

D. The host affinity for the VM must be set to a specific host.
This is incorrect. Host affinity rules can suggest or require a VM to run on a specific host or group of hosts. For a migration to a specific target host to succeed, the affinity rule must allow it to run there, but setting an affinity rule is not a general requirement for the migration functionality itself. In fact, a restrictive "should run on" or "must run on" rule for the source host could prevent the migration.

Reference:
This process is covered in the Nutanix documentation for managing vGPUs on AHV. The guides explicitly state that for vGPU VM live migration, the destination host must have a GPU with the same type and an available instance of the identical vGPU profile. The live migration workflow includes a pre-check that validates these conditions before allowing the operation to proceed.

An administrator is working with a network engineer to design the network architecture for a DR failover.
Because DNS is well-designed, the DR site will use a different subnet but retain the same last octet in the IP address.
What is the best way to achieve this?


A. Use a custom script to update the IP address after instantiation in DR.


B. Set up IPAM so the address is dynamically assigned during DR.


C. Manually log into VMs after the DR event and update the last octet.


D. Utilize Recovery Plan Offset-based IP mapping.





D.
  Utilize Recovery Plan Offset-based IP mapping.

Explanation:
This scenario is a perfect use case for the Offset-based IP mapping feature within Nutanix Leap Recovery Plans. Here's how it works:
The Goal: The VM's IP address at the DR site should be in a different subnet but retain the same last octet (host portion). For example:
Production IP: 192.168.1.50
DR IP: 10.10.1.50

How Offset Mapping Works: You configure the recovery plan to apply a network map using an "offset." The offset is applied to the first three octets (the network portion) of the IP address, while the last octet remains unchanged. In the example above, you would define an offset that transforms 192.168.1.x to 10.10.1.x.
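
The mapping arithmetic is easy to demonstrate with Python's standard ipaddress module (a minimal sketch of the offset idea, not the Leap implementation):

```python
import ipaddress

def offset_map(ip: str, src_net: str, dst_net: str) -> str:
    """Re-home an address: swap the network portion, keep the host portion."""
    addr = ipaddress.ip_address(ip)
    src = ipaddress.ip_network(src_net)
    dst = ipaddress.ip_network(dst_net)
    host_bits = int(addr) - int(src.network_address)  # host portion (last octet here)
    return str(ipaddress.ip_address(int(dst.network_address) + host_bits))

print(offset_map("192.168.1.50", "192.168.1.0/24", "10.10.1.0/24"))  # 10.10.1.50
```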

Why it's the Best Method: This method is automated, reliable, and integrated directly into the Nutanix Leap disaster recovery workflow. It requires no manual intervention, custom scripting, or post-failover login, ensuring a swift and consistent recovery process.

Why the Other Options Are Incorrect
A. Use a custom script to update the IP address after instantiation in DR.
This is less efficient and less reliable. While possible, it introduces a potential point of failure. The script would need to be maintained, tested, and executed successfully during the high-stress event of a DR failover. The native offset mapping feature is a more robust and supported solution.

B. Set up IPAM so the address is dynamically assigned during DR.
This is incorrect for this specific requirement. Dynamic assignment (e.g., via DHCP) does not guarantee that a VM will receive an IP address with the same last octet. The IP would be assigned from the DHCP pool's available leases, which is unpredictable and does not meet the design goal.

C. Manually log into VMs after the DR event and update the last octet.
This is the worst option. It is slow, error-prone, and not scalable. In a real DR scenario, time is critical, and manually reconfiguring dozens or hundreds of VMs is impractical and defeats the purpose of an automated recovery plan.

Reference:
This functionality is a core feature of Nutanix Leap Recovery Plans. The Nutanix Leap Administration Guide details how to configure network mapping, specifically describing the "Offset" option as the method to change the network portion of an IP address while preserving the host portion, which is exactly what the administrator needs to achieve.

An administrator wants to ensure that user VMs on AHV hosts can take advantage of bandwidth beyond a single adapter in a bond.
Which uplink Bond Type should the administrator configure to accomplish this?


A. No Uplink Bond


B. Active-Active


C. Active-Active with MAC pinning


D. Active-Backup





B.
  Active-Active

Explanation:
The goal is to allow a user VM's network traffic to utilize the combined bandwidth of multiple physical network adapters (NICs) in a bond.

How Active-Active Works: In an Active-Active bond, all physical uplinks are operational and can pass traffic simultaneously. The AHV hypervisor load-balances traffic from all VMs across the available active links. This allows the aggregate network throughput for the host to be the sum of the bandwidth of its active uplinks.

Achieving the Goal: Because traffic from the collective VMs is distributed across all links in the bond, user VMs, as a group, can indeed utilize bandwidth beyond what a single adapter provides. A single VM's traffic flow may be limited to one link, but the overall host and its VMs benefit from the total bonded bandwidth.
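
A toy Python scheduler illustrates this per-flow distribution (purely conceptual; OVS balance-tcp hashes real packet headers, and the link names here are placeholders):

```python
import hashlib

UPLINKS = ["eth0", "eth1"]  # placeholder names for two adapters in the bond

def pick_uplink(flow: tuple) -> str:
    """Hash a (src_ip, dst_ip, dst_port) tuple onto one physical link."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

flows = [("10.0.0.5", "10.0.0.9", port) for port in (80, 443, 8080, 5432)]
for flow in flows:
    print(flow, "->", pick_uplink(flow))
# Each flow stays on one link, but different flows spread across both links,
# so aggregate throughput can exceed a single adapter's capacity.
```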

Why the Other Options Are Incorrect
A. No Uplink Bond:
This configuration does not combine adapters. Each NIC operates independently. A VM is tied to a specific NIC, and its bandwidth is strictly limited to the capacity of that single adapter, thus failing the requirement.

C. Active-Active with MAC pinning:
This is a specific type of Active-Active bond. While it provides redundancy and increased aggregate bandwidth, it uses a load-balancing algorithm that pins all traffic from a specific source MAC address (like a vNIC) to a single physical uplink. This means that a single user VM's traffic is limited to the bandwidth of one adapter, even though other VMs can use other adapters. It does not allow a single VM to burst beyond a single adapter's limit.

D. Active-Backup:
In this mode, only one uplink is active at a time. The other uplinks are standby links that only become active if the primary fails. This configuration provides high availability but does not increase bandwidth, as only a single adapter is ever in use.

Reference:
The Nutanix AHV Networking Guide details the different bond modes. It explains that the Active-Active bond mode (specifically using LACP, 802.3ad) is designed for both high availability and increased bandwidth by aggregating the throughput of multiple physical links. The guide also clarifies that "Active-Active with MAC pinning" balances load per VM, which can limit the per-VM throughput to a single link's capacity. For maximizing aggregate host bandwidth for all VMs, the standard Active-Active (LACP) bond is the correct choice.

A Disaster Recovery administrator has set up a Protection Policy for 50 workloads, all configured similarly.
The RPO is 60 minutes with a specified retention of 10 local copies, 5 remote copies, and crash consistency.
After activation, recovery points are not appearing at the DR site, even though they are visible on the production side.
What is the most likely issue?


A. Nutanix Guest Tools (NGT) is not installed on the source VMs.


B. Windows updates need to be applied to all affected VMs.


C. The storage container name on the DR cluster does not match the production cluster.


D. The storage container RF factor does not match in both clusters.





C.
  The storage container name on the DR cluster does not match the production cluster.

Explanation:
This is a classic and common configuration error when setting up Nutanix Metro Availability or Async DR protection. The key symptom is that snapshots are successfully created locally (visible on production) but are not replicated to the remote site.
For replication to occur, the Protection Domain must map the source storage container to a specific target storage container on the remote cluster. This mapping is based on the storage container name.

The Issue: If the storage container on the DR cluster has a different name than the one on the production cluster, the Protection Domain cannot find a valid target for replication. The local snapshots are created successfully according to the policy (explaining their local presence), but the replication task fails silently or is skipped because the destination is invalid.
The Fix: The administrator must ensure that a storage container with the exact same name exists on the DR cluster.
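
A few lines of Python capture the failure mode (the container names are hypothetical):

```python
# Hypothetical container inventories on each cluster.
production_containers = ["default-container", "vm-prod"]
dr_containers = ["default-container", "vm-dr-renamed"]  # renamed on the DR side

for name in production_containers:
    if name in dr_containers:
        print(f"{name}: target found, replication can proceed")
    else:
        print(f"{name}: no matching container on DR -- recovery points never arrive")
```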

Why the Other Options Are Incorrect
A. Nutanix Guest Tools (NGT) is not installed on the source VMs.
This is incorrect. NGT is required for application-consistent snapshots, which allow for transactionally consistent backups of applications like SQL or Exchange. The scenario explicitly states the policy is set for crash consistency. Crash-consistent snapshots are performed at the hypervisor level and do not require NGT. Since local snapshots are being created, the replication failure is unrelated to VM guest tools.

B. Windows updates need to be applied to all affected VMs.
This is irrelevant to the replication mechanism. Operating system updates have no bearing on the ability of the AHV and Data Protection services to replicate snapshots to a remote cluster.

D. The storage container RF factor does not match in both clusters.
This is incorrect. The Resilience Factor (RF) is a local data redundancy setting (e.g., RF2 stores two copies of data within the same cluster). The RF setting on the production cluster and the DR cluster are independent and do not need to match for replication to function. A mismatch would not prevent recovery points from appearing at the DR site.

Reference:
This prerequisite is explicitly stated in the Nutanix Data Protection and Disaster Recovery documentation. When configuring a Protection Domain for remote replication, the setup wizard requires you to map source containers to target containers. The official guidance is that the target container must exist and, for a straightforward setup, must have the same name as the source container to ensure successful replication.

An administrator configured a remote site for Protection Domain replication, but network performance and stability are impacted.
How can the remote site configuration be adjusted to fix the issue?


A. Configure Network Address Translation (NAT) between the two Nutanix clusters.


B. Configure the Protection Domain with many-to-many replication.


C. Configure a Bandwidth Throttling Policy.


D. Configure the remote Cluster VIP as a proxy.





C.
  Configure a Bandwidth Throttling Policy.

Explanation:
Protection Domain (PD) replication can consume significant bandwidth, which can compete with other production applications and lead to network congestion, poor performance, and instability.

A Bandwidth Throttling Policy is the direct and intended tool to manage this exact issue. It allows an administrator to control the amount of network bandwidth that PD replication is allowed to use. This can be configured in several ways:

Schedule-Based Throttling: Limit bandwidth during peak business hours and allow full replication during off-peak periods.
Absolute Throttling: Cap the replication traffic to a specific maximum bandwidth (e.g., 100 Mbps) at all times.
By implementing a throttling policy, the administrator can ensure that replication does not saturate the network link, thereby restoring stability and performance for other critical services.
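
The effect of an absolute cap resembles a token-bucket limiter, sketched here in Python (a generic rate-limiting pattern, not Nutanix's internal mechanism):

```python
import time

class TokenBucket:
    """Cap sustained throughput at `rate_mbps` while allowing small bursts."""

    def __init__(self, rate_mbps: float, burst_mb: float):
        self.rate = rate_mbps / 8          # convert to MB per second
        self.capacity = burst_mb
        self.tokens = burst_mb
        self.last = time.monotonic()

    def send(self, size_mb: float):
        """Block until enough tokens accumulate to send `size_mb`."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_mb:
                self.tokens -= size_mb
                return
            time.sleep((size_mb - self.tokens) / self.rate)

bucket = TokenBucket(rate_mbps=100, burst_mb=8)  # 100 Mbps cap
for chunk in range(4):
    bucket.send(4)                               # 4 MB replication chunks
    print("chunk", chunk, "sent under the cap")
```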

Why the Other Options Are Incorrect
A. Configure Network Address Translation (NAT) between the two Nutanix clusters.
This is incorrect. NAT is used to translate IP addresses between different network domains and is not a tool for performance or stability management. It adds complexity and is generally not required or recommended for native PD replication, which expects direct IP connectivity or routing.

B. Configure the Protection Domain with many-to-many replication.
This is incorrect and would likely make the problem worse. Many-to-many replication increases the complexity and potential volume of data being synchronized between multiple sites. It does not address performance bottlenecks; it adds more potential traffic flows that need to be managed.

D. Configure the remote Cluster VIP as a proxy.
This is incorrect. The Cluster Virtual IP (VIP) is a single, stable access point for managing the cluster. It is not a replication proxy or a traffic-shaping tool. Configuring it as a "proxy" is not a standard or supported method for resolving network performance issues caused by replication traffic.

Reference:
The Nutanix Data Protection Guide specifically covers Bandwidth Throttling Policies as a feature to control the network impact of replication. Administrators can create and assign these policies to Protection Domains within Prism Element to precisely manage the trade-off between replication speed and network availability for other workloads. This is the prescribed best practice for mitigating the described performance impact.

An administrator needs to enable Windows Defender Credential Guard to comply with company policy.
The new VM configurations include:

Legacy BIOS
4 vCPUs
8 GB RAM
Windows Server 2019

What must be changed in order to properly enable Windows Defender Credential Guard?


A. Update vCPU to 8.


B. Enable UEFI with Secure Boot.


C. Use Windows Server 2022.


D. Update Memory to 16GB.





B.
  Enable UEFI with Secure Boot.

Explanation:
Windows Defender Credential Guard is a security feature that uses virtualization-based security (VBS) to isolate secrets. It has specific hardware and firmware prerequisites that the current VM configuration does not meet.
The critical missing component is the firmware type. Credential Guard requires:

UEFI Boot: The VM must be configured to use the Unified Extensible Firmware Interface (UEFI) instead of the legacy BIOS.
Secure Boot: Secure Boot, a feature of UEFI, must be enabled. This ensures that only signed, trusted operating system loaders can start, which is a foundational security requirement for VBS and Credential Guard.
The provided configuration uses Legacy BIOS, which is incompatible and must be changed to UEFI with Secure Boot before Credential Guard can be enabled.
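
A small Python validator makes the gap obvious (a toy check over an assumed VM-config dictionary, not an actual Prism API call):

```python
def credential_guard_blockers(vm: dict) -> list[str]:
    """Return the configuration changes still required for Credential Guard."""
    blockers = []
    if vm.get("firmware") != "uefi":
        blockers.append("Switch firmware from Legacy BIOS to UEFI.")
    if not vm.get("secure_boot", False):
        blockers.append("Enable Secure Boot.")
    return blockers

# Hypothetical representation of the VM in the scenario.
vm = {"firmware": "legacy_bios", "vcpus": 4, "memory_gib": 8,
      "guest_os": "Windows Server 2019", "secure_boot": False}
for blocker in credential_guard_blockers(vm):
    print(blocker)
# 4 vCPUs, 8 GiB RAM, and Server 2019 raise no blockers; only the firmware does.
```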

Why the Other Options Are Incorrect
A. Update vCPU to 8. This is incorrect.
While Credential Guard and VBS consume some CPU resources, there is no specific vCPU count prerequisite. The existing 4 vCPUs are sufficient to enable and run the feature. Adding more vCPUs may improve performance but is not a requirement for enabling the feature itself.

C. Use Windows Server 2022. This is incorrect.
Windows Defender Credential Guard is available and fully supported on Windows Server 2016 and later, including Windows Server 2019. Upgrading the OS is not necessary to meet the enabling criteria.

D. Update Memory to 16GB. This is incorrect.
While VBS does reserve a small portion of RAM (typically around 1 GB) for the secure "Virtual Trust Level" environment, the total system memory requirement is not strictly enforced at 16 GB. The existing 8 GB of RAM is generally sufficient to enable Credential Guard on a VM, especially if it is not under heavy load. The primary and absolute blocker is the firmware configuration, not the memory size.

Reference:
The official Microsoft documentation for "Enable Windows Defender Credential Guard" explicitly lists the prerequisites. It states that the system must have UEFI firmware version 2.3.1 or higher and that Secure Boot must be enabled. The requirement for UEFI and Secure Boot is non-negotiable and is the most common reason for failure when attempting to enable the feature on a VM that was created with the default Legacy BIOS setting.

An administrator needs to create a storage container for VM disks. The container must meet the following conditions:

10 GiB of the total allocated space must not be used by other containers.
The container must have a maximum storage capacity of 500 GiB.

What settings should the administrator configure while creating the storage container?


A. Set Advertised Capacity to 10 GiB and Reserved Capacity to 500 GiB.


B. Set Advertised Capacity to 10 GiB.


C. Set Reserved Capacity to 500 GiB.


D. Set Reserved Capacity to 10 GiB and Advertised Capacity to 500 GiB.





D.
  Set Reserved Capacity to 10 GiB and Advertised Capacity to 500 GiB.

Explanation:
The two key Nutanix storage container settings involved are:

Reserved Capacity: This is a guarantee of physical storage space for the container. The cluster will set aside this amount of space, and it will not be available for use by any other container. This satisfies the first requirement: "10 GiB of the total allocated space must not be used by other containers."

Advertised Capacity: This is the maximum logical size that the storage container can report to the hypervisor and VMs. It acts as a hard quota, preventing the container from growing beyond this limit. This satisfies the second requirement: "The container must have a maximum storage capacity of 500 GiB."

Therefore, configuring Reserved Capacity = 10 GiB and Advertised Capacity = 500 GiB meets both conditions precisely.
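
A toy Python model of the two settings shows why this combination works (illustrative arithmetic only, not how AOS enforces quotas internally):

```python
class Container:
    """Toy model: `reserved` guarantees space; `advertised` caps logical growth."""

    def __init__(self, name: str, reserved_gib: int, advertised_gib: int):
        self.name = name
        self.reserved = reserved_gib      # held back from every other container
        self.advertised = advertised_gib  # hard quota visible to VMs
        self.used = 0

    def write(self, size_gib: int):
        if self.used + size_gib > self.advertised:
            raise ValueError(f"{self.name}: would exceed {self.advertised} GiB cap")
        self.used += size_gib

vm_disks = Container("vm-disks", reserved_gib=10, advertised_gib=500)
vm_disks.write(490)      # fine: under the 500 GiB advertised cap
try:
    vm_disks.write(20)   # 510 GiB total -- rejected by the quota
except ValueError as err:
    print(err)
```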

Why the Other Options Are Incorrect
A. Set Advertised Capacity to 10 GiB and Reserved Capacity to 500 GiB.
This is backwards and incorrect. It would guarantee 500 GiB of physical space for this container (which is overkill and wasteful) while only allowing VMs to see a maximum of 10 GiB of storage, making the container useless.

B. Set Advertised Capacity to 10 GiB.
This only enforces the maximum capacity but provides no space guarantee. The 10 GiB of space would not be reserved and could be used by other containers, violating the first requirement.

C. Set Reserved Capacity to 500 GiB.
This only guarantees 500 GiB of physical space but imposes no upper limit on the container's logical size. The container could grow beyond 500 GiB, violating the second requirement.

Reference:
This functionality is defined in the Nutanix Acropolis Storage documentation. The concepts of Reserved Capacity (the guaranteed physical allocation) and Advertised Capacity (the logical quota) are fundamental to storage container configuration and quality-of-service (QoS) management on the Nutanix platform.

An administrator is configuring Protection Policies to replicate VMs to a Nutanix Cloud Cluster (NC2) over the internet.
To comply with security policies, how should data be protected during transmission?


A. Configure Data on a self-encrypting drive.


B. Configure VMs to use UEFI Secure Boot.


C. Enable Data-at-Rest Encryption.


D. Enable Data-in-Transit Encryption.





D.
  Enable Data-in-Transit Encryption.

Explanation:
When an administrator configures Protection Policies to replicate virtual machines (VMs) to a Nutanix Cloud Cluster (NC2) over the internet, the main security concern is ensuring that data traveling between the on-premises cluster and NC2 is protected from interception or tampering.

The correct and recommended solution for this scenario is to enable Data-in-Transit Encryption (DTI). This feature ensures that all data sent between Nutanix clusters is encrypted while moving over the network, whether it's replication, backup, or disaster recovery traffic.

🔒 Why Option D is Correct
Data-in-Transit Encryption (DTI) uses industry-standard Transport Layer Security (TLS 1.2/1.3) to encrypt communication between:
Nutanix clusters (on-premises to NC2, or site-to-site),
Controller VMs (CVMs),
Replication and protection domains.

When DTI is enabled:
All traffic for replication, backups, and snapshots is encrypted end-to-end.
It prevents man-in-the-middle attacks or data leaks while data moves across public or untrusted networks like the internet.
It works automatically with Nutanix Protection Policies and Async DR or Metro Availability setups.
This is critical for compliance with security frameworks such as ISO 27001, NIST, and GDPR, which mandate encryption of data in transit.

Enabling DTI in Prism Central or Prism Element ensures:
Data confidentiality: Information remains private while in motion.
Data integrity: No modification of data packets during transmission.
Compliance: Meets enterprise and regulatory encryption standards.

βš™οΈ How It Works
The administrator enables Data-in-Transit Encryption in Prism Element → Settings → Security Configuration → Data-at-Rest and Data-in-Transit Encryption.
TLS certificates are automatically managed by Nutanix or can be integrated with enterprise certificate authorities (CAs).
Once enabled, the system encrypts replication and API traffic using secure channels.
No changes are required on the VM side; the encryption is handled by the Nutanix storage and replication subsystems.
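
As a sanity check from an admin workstation, a short Python probe using the standard ssl module can report the TLS version negotiated by a management endpoint (a generic TLS check, not a Nutanix tool; the host name is a placeholder, and 9440 is the usual Prism port):

```python
import socket
import ssl

HOST, PORT = "prism-central.example.local", 9440  # placeholder endpoint

context = ssl.create_default_context()
# Lab clusters with self-signed certificates may need a custom CA loaded here;
# do not disable verification in production.
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated:", tls.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher()[0])
```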

❌ Why the Other Options Are Incorrect
A. Configure Data on a self-encrypting drive (SED):
Self-encrypting drives protect data at rest (stored on physical SSDs/HDDs).
They do not protect data while it travels over the internet between clusters.
This method addresses disk theft or physical compromise, not transmission security.


B. Configure VMs to use UEFI Secure Boot:
UEFI Secure Boot validates the boot loader and OS to prevent malware or tampering during system startup.
It enhances VM-level integrity, not data encryption or network security.
This feature does not encrypt or secure replication traffic.



C. Enable Data-at-Rest Encryption:
Data-at-Rest Encryption (DARE) encrypts stored data on disks within a Nutanix cluster.
It protects data if a drive or node is stolen but offers no protection for data leaving the cluster.
DARE and DTI are complementary: DARE protects stored data; DTI protects transmitted data.

🧭 Nutanix Best Practice
When replicating VMs to Nutanix Cloud Clusters (NC2):
Always enable Data-in-Transit Encryption to protect data across WAN or internet links.
Combine DTI with Data-at-Rest Encryption for complete end-to-end protection.
Use Prism Central → Security Configuration → Encryption to verify both are active.
Regularly validate certificates and encryption compliance through Nutanix Security Central.

🧩 Example Scenario
An organization replicates VMs between an on-premises cluster in Lahore and an NC2 cluster in AWS Virginia using Protection Policies.
Without DTI, replication traffic travels over the internet unencrypted, exposing it to potential interception.
By enabling Data-in-Transit Encryption, all communication between clusters is encrypted using TLS, ensuring full compliance and security of DR data flow.

📘 References:
Nutanix Security Guide – Data-in-Transit Encryption

✅ Summary:
When replicating workloads to Nutanix Cloud Clusters (NC2) over the internet, enabling Data-in-Transit Encryption is essential to secure communication channels and comply with enterprise security policies. It ensures all replication traffic is encrypted using TLS, protecting data from interception or manipulation during transmission.
