NCP-CN Practice Test Questions

95 Questions


Prepare the Environment for an NKP Deployment

A company uses an Artifactory private registry for development. The NKP deployment must use this private registry because the Security Administrator has configured the firewall to reject connections to public container registries. The first task is to push the NKP bundle to this private registry.
What options should be used to push the NKP bundle to this private registry?


A. --registry-mirror-url, --registry-mirror-username and --registry-mirror-password


B. --mirror-url, --mirror-username and --mirror-password


C. --registry-url, --registry-username and --registry-password


D. --to-registry, --to-registry-username and --to-registry-password





D.
  --to-registry, --to-registry-username and --to-registry-password

Explanation
The task is to push the NKP air-gapped bundle to a private registry (Artifactory) so that the NKP deployment can pull all necessary container images from it. The command for this operation is nkp push bundle.

Why Option D is Correct:
The standard and documented flags for the nkp push bundle command to specify the destination private registry are:

--to-registry: The URL of the target private registry (e.g., artifactory.company.com).

--to-registry-username: The username with push/write access to the registry.

--to-registry-password: The password or token for the specified user.
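Put together, a push invocation might look like the sketch below. The registry URL, credentials, and bundle path are placeholder assumptions, not values given in the question:

```shell
# Placeholder values -- substitute your own registry details.
REGISTRY_URL="https://artifactory.company.com"
REGISTRY_USERNAME="nkp-push"
REGISTRY_PASSWORD="changeme"                      # use a token in practice
BUNDLE="./container-images/nkp-image-bundle.tar"  # hypothetical bundle path

# 'echo' keeps this a dry sketch that only prints the command;
# remove 'echo' to actually push the bundle.
echo nkp push bundle \
  --bundle "$BUNDLE" \
  --to-registry "$REGISTRY_URL" \
  --to-registry-username "$REGISTRY_USERNAME" \
  --to-registry-password "$REGISTRY_PASSWORD"
```

Passing the password on the command line is shown for clarity only; in practice, prefer an environment variable or credential helper so the secret does not land in shell history.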

Why the Other Options are Incorrect:

A. --registry-mirror-url, --registry-mirror-username and --registry-mirror-password:
These flags are typically used in different contexts, such as configuring a pull-through cache or mirror for Docker or containerd during node setup. They are not the primary flags for the nkp push bundle command.

B. --mirror-url, --mirror-username and --mirror-password:
This is a simplified or incorrect version of the mirror flags and is not the valid set of parameters for the nkp push bundle command.

C. --registry-url, --registry-username and --registry-password:
These flags are ambiguous. They lack the critical --to- prefix that explicitly defines the destination of the push operation. The nkp CLI uses the --to-registry* flags to avoid this ambiguity.

Reference / Key Concept
This question tests knowledge of the specific syntax for the nkp push bundle command, which is a critical step in preparing an air-gapped or restricted NKP environment.

Command Purpose:
nkp push bundle takes the contents of the extracted NKP air-gapped bundle and uploads all the container images to a designated private registry.

Flag Semantics:
The use of --to-* is a common CLI convention to clearly indicate the target of an operation. In this case, --to-registry unambiguously means "push the bundle TO this registry."

Air-Gapped Workflow:

The standard sequence is:

Extract the bundle tar.gz file.

Load the bootstrap image locally (docker load).

Push the bundle to the private registry using nkp push bundle --bundle --to-registry ....

Create the cluster, which will be configured to pull from this registry.
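As a shell sketch, the sequence above could look like the following; the artifact file names are placeholder assumptions and vary by NKP release:

```shell
# Hypothetical artifact names for an air-gapped NKP bundle.
AIRGAP_BUNDLE="nkp-air-gapped-bundle_linux_amd64.tar.gz"
BOOTSTRAP_IMAGE="konvoy-bootstrap-image.tar"
REGISTRY_URL="https://artifactory.company.com"

# 'echo' keeps the sequence side-effect free; remove it to run the steps.
echo tar -xzvf "$AIRGAP_BUNDLE"            # 1. extract the bundle
echo docker load -i "$BOOTSTRAP_IMAGE"     # 2. load the bootstrap image locally
echo nkp push bundle \
  --to-registry "$REGISTRY_URL"            # 3. populate the private registry
echo nkp create cluster nutanix            # 4. create the registry-backed cluster
```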

Using the correct --to-registry, --to-registry-username, and --to-registry-password flags is essential for successfully populating the private registry and enabling the subsequent NKP deployment.

A Platform Engineer is attempting to delete an attached cluster from the NKP UI, but it is stuck in a 'deleting' state and is not removed. How can the engineer detach the cluster so that it is removed from the UI and no longer managed by NKP?


A. Run the kubectl delete cluster command in the context of the NKP management cluster.


B. Run the nkp delete kommandercluster command in the context of the NKP attached cluster.


C. Run the kubectl delete kommandercluster command in the context of the NKP management cluster


D. Run the nkp delete cluster command in the context of the NKP attached cluster.





C.
  Run the kubectl delete kommandercluster command in the context of the NKP management cluster

Explanation
When a cluster is attached to NKP, its lifecycle is managed by a Custom Resource (CR) on the NKP management cluster. The specific CR type for an attached cluster is KommanderCluster. The NKP UI interacts with this resource object. If a deletion gets stuck, it is often because the finalizers on this resource are blocked, and a manual command is needed to force its removal.

Why Option C is Correct:

Context:
The operation must be performed on the NKP management cluster because that is where the controlling resource (KommanderCluster) exists.

Command:
The kubectl delete kommandercluster command directly targets the custom resource that represents the attached cluster. Removing this resource clears the stuck 'deleting' state and removes the cluster from the UI. If the deletion still hangs on blocked finalizers, the finalizers can be removed manually with a kubectl patch.
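A hedged sketch of this fix, with the cluster and workspace namespace names as placeholders (run with kubectl pointed at the management cluster):

```shell
# Placeholder names -- the KommanderCluster resource lives in the
# attached cluster's workspace namespace on the management cluster.
CLUSTER="attached-cluster-1"
NS="workspace-namespace"

# 'echo' keeps this a dry sketch; remove it to execute.
echo kubectl -n "$NS" delete kommandercluster "$CLUSTER"

# If the delete still hangs on finalizers, clear them manually:
echo kubectl -n "$NS" patch kommandercluster "$CLUSTER" \
  --type merge -p '{"metadata":{"finalizers":[]}}'
```

Clearing finalizers skips whatever cleanup they were guarding, so it should be a last resort after the normal delete has stalled.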

Why the Other Options are Incorrect:

A. Run the kubectl delete cluster command in the context of the NKP management cluster.
The Cluster CR is used for clusters provisioned by NKP via Cluster API (CAPI), not for clusters that are attached. An attached cluster is represented by a KommanderCluster resource, not a Cluster resource.

B. Run the nkp delete kommandercluster command in the context of the NKP attached cluster.
There is no nkp delete kommandercluster command. The nkp CLI is used for lifecycle operations on NKP-provisioned clusters, not for managing the CRs of attached clusters. Furthermore, the command must be run on the management cluster, not the attached cluster.

D. Run the nkp delete cluster command in the context of the NKP attached cluster.
This is incorrect for several reasons. First, the nkp CLI is not intended to be run from an attached cluster for management tasks. Second, this command is used to delete a cluster that was provisioned by NKP, not one that was merely attached.

Reference / Key Concept
This question tests the understanding of how NKP manages attached clusters and how to troubleshoot a stuck Kubernetes resource deletion.

Attached Cluster Representation:
An attached cluster is represented in the NKP management cluster by a KommanderCluster custom resource. Deleting the cluster from the UI triggers the deletion of this resource.

Finalizers:
Kubernetes resources often have finalizers that must complete before the resource can be deleted. If a finalizer is blocked (e.g., due to a network issue communicating with the target cluster), the resource will be stuck in a 'deleting' state.

Troubleshooting Stuck Deletions: The standard procedure is to:

Use kubectl on the management cluster.

Identify the correct Custom Resource Definition (CRD) for the stuck resource (in this case, kommanderclusters.kommander.d2iq.com).

Manually delete the resource instance using kubectl delete kommandercluster <cluster-name>.

By directly deleting the KommanderCluster resource on the management cluster, the engineer removes the source of truth for that attached cluster, resolving the stuck state and removing it from the UI.

Using an NKP Ultimate license, a Platform Engineer has created a new workspace and needs to create a new Kubernetes cluster within this workspace. However, the engineer discovers that the Create Cluster option is grayed out, as shown in the exhibit. How should the engineer resolve this issue?


A. Create the cluster only using YAML and not the GUI.


B. Attach existing clusters instead of creating a new cluster.


C. Create an Infrastructure provider for the workspace


D. Ensure NKP is upgraded to a minimum version of 2.12.





C.
  Create an Infrastructure provider for the workspace

Explanation
In NKP, a workspace is a logical boundary for managing clusters and resources. To create a new cluster within a workspace (as opposed to attaching an existing one), the workspace must be configured with an Infrastructure Provider. This provider supplies the credentials and configuration (e.g., for Nutanix AHV, vSphere, AWS, etc.) that NKP needs to provision virtual machines and build the cluster.

Why Option C is Correct:
The "Create Cluster" button will be disabled (grayed out) in a workspace until an infrastructure provider is successfully added to that specific workspace. This is a fundamental prerequisite. The engineer needs to navigate to the workspace's settings, add a provider (by selecting the cloud/infrastructure type and providing the necessary credentials like API endpoints, username, and password), and then the option to create a new cluster will become available.

Why the Other Options are Incorrect:

A. Create the cluster only using YAML and not the GUI.
While it might be technically possible to apply a Cluster API YAML manifest directly via kubectl, this bypasses the intended NKP management workflow and does not address the root cause. The GUI is disabled for a specific, configurable reason—the lack of an infrastructure provider. The correct action is to fulfill that prerequisite within the platform.

B. Attach existing clusters instead of creating a new cluster.
The "Attach Cluster" function is separate from the "Create Cluster" function. Attaching a cluster imports an already-running Kubernetes cluster into the workspace for management. The question states the engineer's goal is to create a new cluster, and this option does not resolve why the "Create" button is disabled.

D. Ensure NKP is upgraded to a minimum version of 2.12.
The issue is not related to the NKP version. The behavior described—the "Create Cluster" button being disabled until an infrastructure provider is configured—is standard functionality across multiple versions of NKP. An upgrade would not resolve this specific configuration issue.

Reference / Key Concept
This question tests the understanding of NKP workspace configuration and the prerequisites for cluster provisioning.

Workspace Configuration:
Workspaces in NKP are not just visual groupings; they are secure, multi-tenant boundaries that require explicit configuration for the actions they are permitted to perform.

Infrastructure Provider:
This is a set of credentials and configuration stored within a workspace (or shared from a global context) that grants NKP the permission to create and manage resources (VMs, disks, networks) on a specific cloud or virtualization platform.

UI/UX Logic:
The NKP UI dynamically enables or disables features based on the configured capabilities of the workspace. The "Create Cluster" option remains disabled as a clear indicator that an essential piece of configuration, the infrastructure provider, is missing.

In summary, the engineer must configure the underlying infrastructure (Nutanix, vSphere, etc.) that the new cluster will be built upon before the NKP UI will allow the cluster creation process to begin. This is done by adding an Infrastructure Provider to the workspace.

A Kubernetes administrator has been tasked with deploying a new cluster to AWS. The administrator has received the following requirements for this deployment:
Region us-east-1
AMI rhel8.6
What is a requirement for deploying a new cluster in AWS?


A. Use --dry-run parameter


B. Use --ami-format parameter


C. Set an export AWS_REGION


D. Set an export KUBECONFIG





C.
  Set an export AWS_REGION

Explanation
When deploying an NKP cluster to a cloud provider like AWS, the NKP CLI and underlying tools need to know which specific region to operate in, as this determines the location of the compute, storage, and networking resources.

Why Option C is Correct:
The AWS CLI and SDKs (which NKP uses to interact with AWS) rely on environment variables for fundamental configuration. The AWS_REGION environment variable is a primary way to specify the default AWS region (e.g., us-east-1) for API requests. Without this being set, the deployment process would fail because it wouldn't know where to create EC2 instances, VPCs, and other resources. This is a foundational prerequisite for any AWS automation.

Why the Other Options are Incorrect:

A. Use --dry-run parameter:
The --dry-run flag is used to simulate a command without actually making any changes. It is a useful validation step but is not a requirement for a successful deployment.

B. Use --ami-format parameter:
This is not a standard or required parameter for an NKP on AWS deployment. While you can specify an AMI, the format or the flag used is not "ami-format". The requirement for a RHEL 8.6 AMI would be handled by a different, specific flag in the cluster configuration.

D. Set an export KUBECONFIG:
The KUBECONFIG environment variable tells the kubectl command where to find the configuration for an existing Kubernetes cluster. It is used to interact with a cluster after it has been deployed. It is not a requirement for the deployment process itself. In fact, the deployment process creates the kubeconfig file for the new cluster.

Reference / Key Concept:
This question tests the understanding of the prerequisites for deploying Kubernetes on a public cloud using infrastructure-as-code principles.

Cloud Provider Authentication & Configuration:
Before you can create resources in a cloud like AWS, you must configure access. This typically involves:

Credentials:
Setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Region:
Setting AWS_REGION or AWS_DEFAULT_REGION to define the target geographical location for the deployment, as specified in the requirements (us-east-1).

The requirement for a specific AMI (rhel8.6) would be fulfilled within the NKP cluster configuration file (e.g., cluster.yaml), not by a generic environment variable or a --dry-run flag. The most fundamental and universal requirement from the list is correctly setting the AWS_REGION.
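A minimal pre-deployment environment setup might look like the sketch below; the credential values are placeholders and must never be committed:

```shell
# Placeholder credentials -- substitute real values from your AWS IAM user.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret"

# The region requirement from the question:
export AWS_REGION="us-east-1"

echo "Deploying to region: $AWS_REGION"
```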

An ecommerce company decides to apply an autoscaling configuration in its NKP cluster because, on holidays, it experiences service drops caused by a huge increase in simultaneous traffic.
Which statement best describes the configuration shown in the exhibit?


A. The autoscaler could have 15 or 3 nodes.


B. The autoscaler could increase the number of nodes as needed, but never reduce it below 3.


C. The autoscaler could increase the number of nodes up to 15, but never reduce the number below 3.


D. The autoscaler could increase the number of nodes up to 3, but never reduce it below 15.





C.
  The autoscaler could increase the number of nodes up to 15, but never reduce the number below 3.

Explanation
The question describes an autoscaling configuration for a Node Pool. In Kubernetes cluster autoscalers (like the one used in NKP), the key parameters that define the scaling behavior are the minimum, maximum, and initial node counts.

Why Option C is Correct:
This statement accurately describes the standard function of an autoscaler based on the parameters mentioned in the question. The "huge increase of simultaneous traffic" necessitates scaling up, and the need to maintain service implies a need to scale down when the traffic subsides, but not below a certain baseline.

"Increase the number of nodes up to 15" refers to the maximum node count. This is the hard limit the autoscaler will not exceed, preventing uncontrolled cost growth.

"Never reduce the number below 3" refers to the minimum node count. This ensures there is always a baseline cluster capacity to handle steady-state traffic and run critical system pods, preventing service drops from having too few nodes.

Why the Other Options are Incorrect:

A. The autoscaler could have 15 or 3 nodes.
This is incorrect and far too simplistic. The autoscaler can operate at any number of nodes between the minimum and maximum (e.g., 3, 4, 5, ... , 14, 15), not just at the two extremes. It dynamically adjusts based on the load.

B. The autoscaler could increase the number of nodes as needed, but never reduce it below 3.
The first part of this statement is dangerously vague. Without a defined maximum (implied to be 15 in the correct answer), the autoscaler could theoretically scale infinitely, leading to exorbitant costs. A maximum limit is a critical safety control.

D. The autoscaler could increase the number of nodes up to 3, but never reduce it below 15.
This is logically impossible and backwards. You cannot have a minimum that is higher than the maximum. This configuration would be invalid.

Reference / Key Concept
This question tests the understanding of Kubernetes Cluster Autoscaler configuration parameters.

Minimum Size:
The smallest number of nodes the node pool is allowed to have. This ensures resource availability for baseline workloads and cluster services.

Maximum Size:
The largest number of nodes the node pool is allowed to have. This is a critical cost-control mechanism.

Scaling Logic:
The autoscaler continuously monitors unschedulable pods (pods that cannot run due to insufficient resources). If there are unschedulable pods, it scales up (adds nodes) until the pods can be scheduled or it hits the maximum. Conversely, if nodes are underutilized, it scales down (removes nodes) after ensuring their pods can be rescheduled elsewhere, but it will never go below the minimum.

For the ecommerce company, the configuration described in option C allows the cluster to handle holiday traffic spikes by scaling up to 15 nodes while maintaining a cost-effective baseline of 3 nodes during off-peak times.
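In a CAPI-managed cluster such as one provisioned by NKP, min/max bounds like these are commonly expressed as cluster-autoscaler annotations on the worker MachineDeployment. A hedged sketch, with the resource name and namespace as placeholders:

```shell
# Placeholder names for the worker MachineDeployment and its namespace.
MD="dev-cluster-md-0"
NS="default"

# 'echo' keeps this a dry sketch; remove it to apply the 3..15 bounds.
echo kubectl -n "$NS" annotate machinedeployment "$MD" \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=3 \
  cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=15
```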

A Platform Engineer is looking to backup and restore persistent volumes and other cluster resources. Which two NKP licenses include backup and restore functionality? (Choose two.)


A. NKP Starter


B. NKP Essential


C. NKP Ultimate


D. NKP Pro





C.
  NKP Ultimate

D.
  NKP Pro

Explanation
Nutanix Kubernetes Platform (NKP) offers its advanced data management features, including backup and restore for persistent volumes and cluster resources, in its higher-tier licenses. These features are powered by the integrated Velero backup technology.

Why Options C and D are Correct:

NKP Pro:
This mid-tier license includes core platform services and, importantly, adds data services like backup and restore. It is the first tier where this functionality becomes available.

NKP Ultimate:
This is the top-tier license that includes all features of NKP Pro and adds advanced capabilities for multi-cluster and global management (via Kommander), service mesh, and enhanced security. Since it includes all Pro features, backup and restore is also included.

Why the Other Options are Incorrect:

A. NKP Starter:
This is the entry-level, free tier of NKP. It provides basic Kubernetes cluster lifecycle management but does not include advanced data services like backup and restore.

B. NKP Essential:
This is not a standard NKP licensing tier. The common tiers are Starter, Pro, and Ultimate. Therefore, it is not a correct answer.

Reference / Key Concept
This question tests knowledge of the NKP licensing tiers and their feature sets.

Feature Tiers:
Nutanix structures its NKP offerings to enable customers to start with basic functionality and scale up to more advanced enterprise features.

Starter:
Basic cluster management (create, scale, upgrade).

Pro:
Adds platform applications (monitoring, logging) and data services (backup/restore).

Ultimate:
Adds multi-cluster management, GitOps, service mesh, and security features, building upon all Pro features.

Therefore, to have access to backup and restore functionality for persistent volumes, a company must be licensed for either NKP Pro or NKP Ultimate.

An administrator has experienced issues with an NKP-managed workload cluster and has been tasked with deploying NKP Insights in order to:
Resolve common anomalies
Check security issues
Verify whether workloads follow best practices
Upon trying to enable NKP Insights, the cluster that needs to be chosen is grayed out.
Which missing prerequisite should be enabled?


A. Velero


B. Cert-manager


C. Nutanix Objects


D. Rook Ceph





B.
  Cert-manager

Explanation
NKP Insights is a service that collects diagnostic data from your clusters and sends it to Nutanix for analysis to provide recommendations on security, best practices, and anomalies. For this service to function, it requires a secure, encrypted communication channel.

Why Option B is Correct:
cert-manager is a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources. NKP Insights relies on cert-manager to automatically create and renew the TLS certificates its components need to communicate securely with the Nutanix backend services. If cert-manager is not installed and running on the management cluster, NKP Insights cannot establish this required secure connection, and the option to enable it is disabled (grayed out).

Why the Other Options are Incorrect:

A. Velero:
Velero is used for backup and restore operations of Kubernetes cluster resources and persistent volumes. While it is included in higher NKP tiers (Pro/Ultimate), it is not a direct prerequisite for the NKP Insights service to be enabled.

C. Nutanix Objects:
This is Nutanix's object storage solution (S3-compatible). NKP Insights may use object storage for data, but it is not the direct prerequisite that enables the feature. The initial secure handshake and API communication are more critical and are handled by certificates from cert-manager.

D. Rook Ceph:
This is a storage orchestrator for Ceph, providing file, block, and object storage. It is not a prerequisite for NKP Insights. NKP Insights is an analytics and reporting tool, not a storage-intensive application that would require a specific storage class provider like Rook Ceph to be enabled.

Reference / Key Concept:
This question tests the understanding of the prerequisites for enabling specific NKP platform applications, particularly those that require external communication.

TLS Certificates for Secure Communication:
In modern cloud-native applications, especially those that transmit diagnostic data, all communication must be encrypted using TLS. Cert-manager is the de facto standard tool in Kubernetes for managing this complexity automatically.

NKP Insights Dependency:
The official Nutanix documentation for enabling NKP Insights explicitly lists cert-manager as a required component. Without it, the service cannot guarantee secure data transmission, and therefore the enablement option is logically disabled in the UI.

In summary, when an NKP feature that requires external API communication is grayed out, one of the first things to check is whether cert-manager is installed and functional on the management cluster.
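A quick health check for that prerequisite might look like this sketch (it assumes your kubectl context points at the management cluster and that cert-manager is in its default namespace):

```shell
# Default install namespace for cert-manager.
NAMESPACE="cert-manager"

# 'echo' keeps this a dry sketch; remove it to run the checks.
echo kubectl get pods -n "$NAMESPACE"                  # controller pods Running?
echo kubectl get crd certificates.cert-manager.io      # CRDs registered?
```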

A Cloud Engineer is deploying an NKP Cluster in AWS. The environment is for testing purposes only, so the AWS team has requested it be deployed to use a minimal set of system resources to reduce cloud subscription fees. Which two parameters should be specified when initializing a Kommander installation, using the nkp install kommander command set? (Choose two.)


A. --request-timeout


B. --wait-timeout


C. --yaml


D. --init





B.
  --wait-timeout

D.
  --init

Explanation
The goal is to deploy a minimal NKP Kommander cluster in AWS to reduce resource consumption and cost. The nkp install kommander command has specific flags to control the installation profile and behavior.

Why Option B and D are Correct:

D. --init:
This is the most critical flag for this scenario. The --init flag instructs the installer to deploy Kommander in its minimal configuration. This means it will install only the essential components required for the management cluster to function, skipping additional platform services and applications that would consume more CPU, memory, and storage. This directly fulfills the requirement for a "minimal set of system resources."

B. --wait-timeout:
This flag is used to specify the maximum amount of time the installer will wait for the cluster to become ready. In a minimal test environment, provisioning might be slower due to less powerful underlying instances. Setting a --wait-timeout (e.g., --wait-timeout=1h) ensures the installer does not give up prematurely if the deployment takes longer than the default timeout on constrained resources.

Why the Other Options are Incorrect:

A. --request-timeout:
This flag typically sets the timeout for individual API requests made by the installer. It is a more granular, lower-level setting than --wait-timeout and is not the primary flag for controlling the overall installation profile or ensuring completion in a slow environment.

C. --yaml:
This flag is used to specify a custom YAML configuration file for the installation. While a custom YAML could be used to define a minimal resource footprint, the question asks for parameters to specify with the command set. The --init flag is the standard, built-in way to achieve a minimal installation without needing to create and manage a separate YAML file.

Reference / Key Concept:
This question tests the knowledge of the nkp install kommander command-line flags used to control the deployment profile and behavior.

Minimal Installation (--init):
Used for proof-of-concept, development, or test environments where resource consumption must be minimized. It installs the core Kommander components without the full suite of platform applications.

Wait Timeout (--wait-timeout):
Crucial for ensuring successful installation in environments where resource constraints might lead to longer deployment times. It prevents the CLI from exiting before the cluster is fully ready.

Therefore, to meet the requirement of a minimal, cost-effective test deployment, the engineer should use the combination of --init and an appropriate --wait-timeout.
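Combining both flags, the invocation might look like the sketch below; the timeout value is a placeholder to be tuned for the constrained test environment:

```shell
# Placeholder timeout -- generous to accommodate slow, minimal instances.
WAIT_TIMEOUT="1h"

# 'echo' keeps this a dry sketch; remove it to run the minimal install.
echo nkp install kommander --init --wait-timeout "$WAIT_TIMEOUT"
```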

When deploying an NKP cluster onto air-gapped, pre-provisioned servers, Konvoy Image Builder is utilized to prepare the servers to become NKP cluster nodes.
What does the konvoy-image upload command do as a part of this preparation process?


A. The command is used to create a konvoy userid on the servers, as well as upload artifacts to them such as the container runtime, the OS bundle, and Kubernetes components.


B. The command is used to upload OS hardening scripts to the server (must be client supplied).


C. The command uploads artifacts to the servers such as the container runtime, the OS bundle, and Kubernetes components.


D. The command uploads artifacts to the servers such as the container runtime, the OS bundle, and Kubernetes components, including optional OS hardening scripts (must be client supplied).





C.
  The command uploads artifacts to the servers such as the container runtime, the OS bundle, and Kubernetes components.

Explanation
The konvoy-image upload command is a specific step in the Konvoy Image Builder (KIB) workflow for provisioning bare metal or pre-provisioned servers in an air-gapped environment.

Why Option C is Correct:
This option accurately and concisely describes the core function of the konvoy-image upload command. After KIB has built the OS image bundle offline, this command is responsible for transferring (uploading) the necessary installation artifacts from the build host to the target servers over the network. These artifacts include:

The container runtime (e.g., containerd)

The OS bundle (including packages and dependencies)

Kubernetes components (kubeadm, kubelet, kubectl)

Why the Other Options are Incorrect:

A. The command is used to create a konvoy userid on the servers, as well as upload artifacts...
While creating a konvoy user is part of the overall node provisioning process, it is not the specific task of the upload command. This step is typically handled by the Ansible playbooks during a later phase, not by the upload command itself.

B. The command is used to upload OS hardening scripts to the server (must be client supplied).
This is too narrow and partially incorrect. While KIB can incorporate custom hardening scripts if they are provided and configured in the Ansible variables, this is not the primary or default purpose of the upload command. Its main job is to upload the core artifacts listed in option C.

D. The command uploads artifacts to the servers, including optional OS hardening scripts (must be client supplied):
This option is very close but adds an unnecessary and potentially misleading qualifier. The primary, guaranteed function of the command is to upload the core artifacts. The inclusion of "optional OS hardening scripts" is a specific use case, not the defining characteristic of the command. Option C provides the cleanest and most accurate description of its standard operation.

Reference / Key Concept
This question tests the understanding of the Konvoy Image Builder workflow for air-gapped, pre-provisioned infrastructure.

The typical sequence is:

konvoy-image build:
Creates the OS image bundle on a machine with internet access.

konvoy-image upload:
Transfers the built bundle and other artifacts to the target, air-gapped servers. (This is the command in question)

konvoy-image provision:
Uses Ansible to install the uploaded artifacts onto the servers, configuring them as Kubernetes nodes.

The upload command's role is purely to transfer the pre-built files to the target nodes, making them available for the subsequent provision step to execute.
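The three-step sequence could be sketched as follows; the image-definition path, inventory file, and exact flag names are assumptions that vary by Konvoy Image Builder release, so check your version's help output:

```shell
# Hypothetical inputs -- adjust to your KIB release and environment.
IMAGE_DEF="images/generic/rhel-86.yaml"   # placeholder image definition
INVENTORY="inventory.yaml"                # placeholder Ansible inventory of target servers

# 'echo' keeps the sequence a dry sketch; remove it to run.
echo konvoy-image build "$IMAGE_DEF"                   # 1. build bundle (internet-connected host)
echo konvoy-image upload --inventory-file "$INVENTORY" # 2. transfer artifacts to air-gapped servers
echo konvoy-image provision --inventory-file "$INVENTORY" # 3. install artifacts onto the nodes
```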

In an effort to control cloud cost consumption, auto-scale is configured to meet demands as needed.
What is the behavior when nodes are scaled down?


A. Node is changed to a status of Hibernate.


B. Node is CAPI deleted from its infrastructure provider, effectively removing it from its hypervisor.


C. Node is changed to a status of Power-Off for stand-by


D. Node is paused in Kubernetes and the infrastructure continues to consume the resources at the current level.





B.
  Node is CAPI deleted from its infrastructure provider, effectively removing it from its hypervisor.

Explanation
The Kubernetes Cluster Autoscaler, which is integrated with NKP, is designed to reduce costs by removing unnecessary infrastructure when it is no longer needed. Its scaling-down behavior is direct and focused on eliminating cloud resource consumption.

Why Option B is Correct:
When the Cluster Autoscaler determines a node is underutilized and its pods can be safely rescheduled elsewhere, it does not put the node in a standby or hibernation state. Instead, it:

Cordons and drains the node, evicting all pods.

Triggers the deletion of the node's underlying virtual machine instance via the Cluster API (CAPI) provider.

This results in the VM being permanently deleted from the infrastructure provider (e.g., AWS EC2, Nutanix AHV, vSphere).

This action directly stops the accrual of costs for that compute instance, which is the primary goal of cost control.

Why the Other Options are Incorrect:

A. Node is changed to a status of Hibernate.
Kubernetes and the standard Cluster Autoscaler do not have a native "Hibernate" state for nodes. Hibernation is a specific feature of some cloud services for instances, but it is not the default scaling-down behavior of the autoscaler.

C. Node is changed to a status of Power-Off for stand-by.
A powered-off VM still incurs costs in most cloud environments (e.g., storage costs for the disk, and often a reduced compute cost). The autoscaler's purpose is to minimize costs, so it deletes the resource entirely rather than leaving it in a state that continues to incur charges.

D. Node is paused in Kubernetes and the infrastructure continues to consume the resources...
This is the opposite of cost control. If the infrastructure continues to run and consume resources, the company continues to pay for it. The autoscaler is specifically designed to avoid this scenario.

Reference / Key Concept:
This question tests the understanding of the Kubernetes Cluster Autoscaler's scaling-down mechanism and its direct impact on cloud costs.

Cost-Optimization Focus:
The autoscaler is not just about performance; it's a key cost-control tool. Its most effective way to save money is to completely remove unneeded compute resources.

CAPI Integration:
In NKP, cluster operations are managed by the Cluster API (CAPI). The autoscaler interacts with the CAPI custom resources, which in turn instruct the infrastructure provider (via the CAPI provider) to delete the VM.

Stateless Nodes:
The autoscaler operates under the assumption that worker nodes are stateless and replaceable. Any persistent data must be on network-attached storage (PersistentVolumes), not local to the node, to allow for safe node deletion.

A development Kubernetes cluster deployed with NKP is having performance issues. The Cloud Engineer reported that the worker VMs are consuming a lot of CPU and RAM. The Platform Engineer reviewed the CPU and RAM statistics in Grafana and confirmed that the worker VMs are running out of CPU and memory. The Kubernetes cluster has 4 workers, each with 8 vCPUs and 32 GB RAM. What could the Platform Engineer do?


A. Call tech support to take a look at the infrastructure and investigate.


B. Ask developers to lower the number of application replicas.


C. Add more CPU and memory to workers with nkp scale --cpu 16 --memory 64 --cluster-name ${CLUSTER_NAME}


D. Add one more worker with nkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --clustername=${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}





D.
  Add one more worker with nkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --clustername=${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}

Explanation
The problem is a cluster-wide resource shortage. The worker VMs themselves are running out of CPU and memory, indicating that the total allocatable resources across the entire cluster are insufficient for the current workload.

Why Option D is Correct:
This command horizontally scales the node pool by increasing the number of worker nodes from 4 to 5. This adds the total capacity of an entire new node (8 vCPUs and 32 GB RAM) to the cluster. This provides immediate relief by giving the Kubernetes scheduler more physical resources to place pods, which is the standard and most direct solution for this type of resource exhaustion.
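The capacity gain can be sanity-checked with quick arithmetic, using the figures from the question (4 workers of 8 vCPUs / 32 GB each, scaled out to 5):

```shell
# Cluster capacity before and after adding one worker node.
vcpus_per_node=8
ram_per_node=32   # GB
echo "Before: $((4 * vcpus_per_node)) vCPUs, $((4 * ram_per_node)) GB RAM"
echo "After:  $((5 * vcpus_per_node)) vCPUs, $((5 * ram_per_node)) GB RAM"
```

Scaling out adds 8 vCPUs and 32 GB of allocatable capacity (minus the kubelet/system reservation) in one non-disruptive step.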

Why the Other Options are Incorrect:

A. Call tech support to take a look at the infrastructure and investigate.
While investigation is part of the process, the problem has already been diagnosed using Grafana: the cluster is simply out of resources. The engineer has the tools and authority to resolve this by scaling the cluster. Escalating to support is an unnecessary step when the solution is a standard operational procedure.

B. Ask developers to lower the number of application replicas.
This is a temporary and non-scalable solution that impacts application performance and availability. The goal of the platform is to provide resources for the applications to run, not to throttle the applications to fit an undersized platform. This approach does not align with cloud-native scalability principles.

C. Add more CPU and memory to workers with nkp scale --cpu 16 --memory 64 --cluster-name ${CLUSTER_NAME}.
This command would vertically scale the existing worker nodes. This is not a recommended or typically supported operation for NKP worker nodes. Vertical scaling often requires restarting or recreating the VMs, which is disruptive. Furthermore, the syntax is incorrect; the nkp scale command is used for scaling node pools horizontally (changing the replica count), not for resizing individual VMs.

Reference / Key Concept:
This question tests the understanding of scaling strategies in Kubernetes and the correct use of the NKP CLI.

Horizontal vs. Vertical Scaling:

Horizontal Scaling (Correct):
Adding more nodes to a pool. This is the preferred, non-disruptive method for increasing cluster capacity in Kubernetes.

Vertical Scaling (Incorrect):
Resizing existing nodes. This is often disruptive and not a standard nkp command for worker nodes.

NKP CLI for Scaling:
The correct command to scale a node pool is nkp scale nodepools, which adjusts the --replicas count. This triggers the Cluster API (CAPI) to provision a new node with the same specifications as the others in the pool.
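A concrete invocation might look like the following sketch, using the flags exactly as printed in option D; the cluster, workspace, and node pool names are hypothetical placeholders, and running this requires the nkp CLI with access to the management cluster:

```shell
# Hypothetical names for illustration only.
export CLUSTER_NAME=dev-cluster
export CLUSTER_WORKSPACE=dev-workspace
export NODEPOOL_NAME=md-0

# Grow the pool from 4 to 5 workers; CAPI provisions the new node
# with the same specification as the existing ones.
nkp scale nodepools ${NODEPOOL_NAME} --replicas=5 \
  --clustername=${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
```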

The most efficient and platform-native way to resolve the performance issue is to horizontally scale the cluster by adding another worker node.

A Platform Engineer is running a Kubernetes cluster version 1.28.1 on AWS that needs to be upgraded to version 1.29.9. This cluster was deployed with Nutanix NKP. Which two actions should the engineer take to complete this requirement? (Choose two.)


A. Upgrade Workers with nkp update nodepool aws ${NODEPOOL_NAME} --clustername=${CLUSTER_NAME} --kubernetes-version=v1.29.9


B. Upgrade Control Planes with nkp update controlplane aws --cluster-name=${CLUSTER_NAME} --ami AMI_ID --kubernetes-version=v1.29.9


C. Upgrade Workers with nkp upgrade nodepool aws ${NODEPOOL_NAME} --clustername=${CLUSTER_NAME} --kubernetes-version=v1.29.9


D. Upgrade the Cluster with nkp update cluster aws --cluster-name=${CLUSTER_NAME} --ami AMI_ID --kubernetes-version=v1.29.9





B.
  Upgrade Control Planes with nkp update controlplane aws --cluster-name=${CLUSTER_NAME} --ami AMI_ID --kubernetes-version=v1.29.9

C.
  Upgrade Workers with nkp upgrade nodepool aws ${NODEPOOL_NAME} --clustername=${CLUSTER_NAME} --kubernetes-version=v1.29.9

Explanation
In NKP, upgrading a cluster involves a two-step process: first upgrading the control plane nodes, and then upgrading the worker node pools. The commands for these two steps are different.

Why Option B is Correct:
The command to upgrade the control plane nodes in an NKP-provisioned cluster on AWS is nkp update controlplane aws. This command will perform a rolling update of the control plane nodes, replacing them with new instances that use the specified Kubernetes version (v1.29.9) and the compatible AMI.

Why Option C is Correct:
The command to upgrade a worker node pool in an NKP-provisioned cluster is nkp upgrade nodepool aws. This command will perform a rolling update of the worker nodes in the specified node pool, cordoning and draining each node before replacing it with a new node running the target Kubernetes version.

Why the Other Options are Incorrect:

A. Upgrade Workers with nkp update nodepool aws...:
This command uses the wrong verb. The correct command for worker node pools is upgrade nodepool, not update nodepool. The update command is reserved for the control plane.

D. Upgrade the Cluster with nkp update cluster aws...:
This is a distractor. There is no single nkp update cluster command that handles the entire upgrade process in one step. The upgrade must be performed in the two distinct phases (control plane first, then workers) using the specific commands for each.

Reference / Key Concept
This question tests the knowledge of the specific NKP CLI commands and the proper sequence for performing a Kubernetes version upgrade on a provisioned cluster.

Sequential Upgrade:
The standard and safe procedure is to always upgrade the control plane first, followed by the worker nodes. This ensures the control plane's API server and controllers are compatible with the kubelets on the workers.

Command Specificity:

Control Plane:
nkp update controlplane

Worker Node Pools:
nkp upgrade nodepool

AMI Requirement:
For IaaS providers like AWS, upgrading the Kubernetes version often requires specifying a new AMI that contains the necessary components (kubelet, container runtime) for the target version. This is why the --ami flag is needed for the control plane upgrade.
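Putting the two commands together in order, a hedged sketch (all names and the AMI ID are hypothetical placeholders; the flags follow options B and C as printed, and running this requires the nkp CLI, AWS credentials, and a v1.29.9-compatible AMI):

```shell
# Hypothetical values for illustration only.
export CLUSTER_NAME=dev-cluster
export NODEPOOL_NAME=md-0
export AMI_ID=ami-0123456789abcdef0   # placeholder 1.29.9-compatible AMI

# Step 1: upgrade the control plane first (option B).
nkp update controlplane aws --cluster-name=${CLUSTER_NAME} \
  --ami ${AMI_ID} --kubernetes-version=v1.29.9

# Step 2: then upgrade each worker node pool (option C).
nkp upgrade nodepool aws ${NODEPOOL_NAME} --clustername=${CLUSTER_NAME} \
  --kubernetes-version=v1.29.9
```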

By using the two correct commands (B and C) in sequence, the engineer can safely upgrade the entire cluster from v1.28.1 to v1.29.9.


Page 2 out of 8 Pages