NCP-CN Practice Test Questions

95 Questions


Perform Day 2 Operations

A technology company has decided to migrate its infrastructure to NKP to improve the scalability and management of its applications. After a successful initial implementation, the operations team faces a new challenge of validating the HelmReleases to ensure that all applications are running correctly and avoid problems in production. Which command should the company execute to know the right status of their HelmReleases?


A. kubectl get namespaces


B. kubectl get helmreleases -n ${PROJECT_NAMESPACE}


C. kubectl edit helmreleases -n ${PROJECT_NAMESPACE}


D. kubectl apply -f fluent-bit-overrides.yaml





B.
  kubectl get helmreleases -n ${PROJECT_NAMESPACE}

Explanation
The question centers on the need to validate and check the status of HelmReleases after deployment. The key requirement is to obtain a read-out of the current state without making any changes.

Why Option B is Correct:
The kubectl get command is the primary command for listing and retrieving the status of Kubernetes resources. When applied to the helmreleases resource (which is a custom resource from the Flux CD Helm Controller), it will display a list of all HelmReleases in the specified namespace. The output typically includes columns like NAME, READY, STATUS, and MESSAGE, which directly tell the operations team if the deployments were successful, are in progress, or have failed. This is the most direct and appropriate command for validation.
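
As an illustration, the check and a representative output might look like the following; the namespace value and release names are hypothetical, and the exact columns can vary by Flux version:

  export PROJECT_NAMESPACE=my-workspace    # hypothetical project namespace
  kubectl get helmreleases -n ${PROJECT_NAMESPACE}

  NAME                    AGE   READY   STATUS
  kube-prometheus-stack   10d   True    Release reconciliation succeeded
  fluent-bit              10d   True    Release reconciliation succeeded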

Why the Other Options are Incorrect:

A. kubectl get namespaces:
This command lists all available namespaces but provides no information about the HelmReleases or application status within any specific namespace. It is too broad and does not address the core challenge.

C. kubectl edit helmreleases -n ${PROJECT_NAMESPACE}:
This command opens the live configuration of a HelmRelease for editing in a text editor. It is a mutating command and is used for making changes, not for validation. Using this to check status is inefficient and risks accidental modification.

D. kubectl apply -f fluent-bit-overrides.yaml:
This command is used to create or update resources based on a configuration file. It deploys or reconfigures a specific application (like Fluent Bit) but does not provide a status report on existing HelmReleases. It is an action command, not an inspection command.

Reference / Key Concept:
This question tests your understanding of fundamental kubectl commands and the GitOps workflow on the Nutanix Kubernetes Platform (NKP).

kubectl get:
The standard command for querying and retrieving the status of any Kubernetes resource.

HelmRelease:
A Custom Resource Definition (CRD) used by the Flux CD Helm Controller to manage the lifecycle of Helm chart deployments. Checking its status is the standard way to see if a Helm chart installed or upgraded successfully.

Namespace Scoping:
The use of -n ${PROJECT_NAMESPACE} is crucial, as it ensures the command only looks for HelmReleases within the specific project's namespace, which is a core tenet of multi-tenancy in NKP.

In summary, to validate the status of deployments, you use a get command, not a command that lists, edits, or applies configurations.

A Platform Engineer is deploying a new NKP cluster that has internet connectivity. Now, a Cloud Administrator and Security Administrator are discussing the security of communications between the NKP Kubernetes cluster and the container registry. The engineer proposes to have an on-prem private registry. What is the most significant reason that the engineer should create a private registry instead of configuring a secure connection between the NKP cluster and Github (SaaS)?


A. Private registry license is included with NKP.


B. NKP requires specific registry versions.


C. NKP cannot connect to public clouds.


D. Private registry provides security and privacy.





D.
  Private registry provides security and privacy.

Explanation
The core of this question revolves around the fundamental security and operational advantages of using a private, on-premises container registry versus relying on an external SaaS (Software-as-a-Service) registry, even with a secure connection.

Why Option D is Correct: A private, on-premises registry gives the organization full control over the security, availability, and privacy of its container images.

Security:
It allows for strict access control policies (e.g., integrated with on-prem LDAP/AD), vulnerability scanning as part of the internal CI/CD pipeline, and protection from supply chain attacks targeting public repositories.

Privacy:
Proprietary application code and container images never leave the organization's internal network. This prevents any potential exposure through misconfiguration of the SaaS platform and ensures compliance with data sovereignty regulations that may require data to reside on-premises.

While a secure connection (TLS) to GitHub's container registry protects data in transit, it does not address who else has access to the registry (multi-tenancy), the legal jurisdiction of the data at rest, or the registry's availability during an internet outage.

Why the Other Options are Incorrect:

A. Private registry license is included with NKP.
This is not a standard offering. NKP facilitates connecting to a registry but does not typically bundle a license for a private registry product like Harbor or JFrog Artifactory. The decision should be based on technical and security merits, not a presumed cost saving.

B. NKP requires specific registry versions.
This is false. NKP is designed to be agnostic and can work with any standard-compliant container registry (e.g., Docker Hub, Harbor, Google Container Registry, Amazon ECR, GitHub Container Registry). It does not mandate a specific version.

C. NKP cannot connect to public clouds.
This is factually incorrect. The question explicitly states the NKP cluster has internet connectivity. NKP clusters can absolutely be configured to pull images from public registries like GitHub Container Registry (ghcr.io), Docker Hub, and others. This is a common and supported practice.

Reference / Key Concept:
This question tests the understanding of software supply chain security and the role of a private registry in a secure enterprise Kubernetes deployment.

Control and Sovereignty:
The most significant reason for choosing an on-prem private registry is the complete control it offers over the entire lifecycle of the container images—who can push, who can pull, when they are scanned, and where the data is physically stored.

Defense-in-Depth:
Using a private registry is a key defense-in-depth strategy. It reduces the attack surface by eliminating dependence on an external service for a critical component (the application binaries themselves) and protects against outages or rate-limiting on public services.

Nutanix Best Practices:
While Nutanix provides the platform (NKP), it is a best practice in enterprise environments to manage your own private registry to ensure security, compliance, and operational reliability, aligning with the principle of "you build it, you run it," including your software artifacts.

A Platform Engineer manages an NKP v2.12.x environment and is using NKP Image Builder (NIB) to create a custom image. Which two distributions are available for use by the engineer for this task? (Choose two.)


A. Ubuntu


B. Fedora


C. Rocky Linux


D. CentOS





A.
  Ubuntu

C.
  Rocky Linux

Explanation
The Nutanix Kubernetes Platform (NKP) Image Builder is a tool used to create custom machine images for NKP cluster nodes. These images are pre-configured with the necessary components to join an NKP cluster. The available operating system distributions are limited to those that Nutanix has tested, certified, and provides pre-configured templates for within the NIB service.

Why Options A and C are Correct:

A. Ubuntu:
Ubuntu is a primary, first-class citizen and one of the most widely supported Linux distributions for Kubernetes and cloud-native workloads. It is a standard and consistently available option in NKP Image Builder.

C. Rocky Linux:
Rocky Linux (along with other RHEL-compatible derivatives like AlmaLinux) is a direct, binary-compatible successor to CentOS. As the CentOS project shifted its focus to CentOS Stream, Nutanix and the broader enterprise community adopted Rocky Linux as the recommended replacement for CentOS in the NKP ecosystem. It is the standard RHEL-compatible option.

Why the Other Options are Incorrect:

B. Fedora:
Fedora is a cutting-edge, community-driven distribution that serves as an upstream for Red Hat Enterprise Linux. Due to its rapid release cycle and shorter support lifespan, it is not typically supported or offered in enterprise platform tools like NKP Image Builder, which prioritize stability and long-term support.

D. CentOS:
While CentOS was historically a core supported distribution, its status changed with the shift to CentOS Stream. For NKP v2.12.x, the focus for RHEL-compatible images has moved to its stable alternatives, primarily Rocky Linux. CentOS (the classic stable version) is generally no longer available or recommended as a new choice in NIB.

Reference / Key Concept
This question tests knowledge of the supported operating systems for NKP worker nodes via the Image Builder service.

Enterprise-Grade Support:
NKP is an enterprise platform, and as such, it only integrates with Linux distributions that offer long-term stability and support (LTS). Both Ubuntu LTS and Rocky Linux fit this requirement.

Nutanix Documentation:
The official Nutanix documentation for NKP Image Builder explicitly lists the available base OS templates. For versions around NKP 2.12.x, this list prominently includes Ubuntu and Rocky Linux as the standard options, reflecting the industry-wide transition away from CentOS Linux.

In summary, the engineer should select Ubuntu for a Debian-based environment or Rocky Linux for a RHEL-based environment, as these are the current, stable, and supported distributions provided by NIB.

A Platform Engineer needs to do an air-gapped installation of NKP. This environment needs to run without Internet access and be fully operational, including updates. Docker has been installed, and the NKP bundle exists on a bastion host. What is the first command that the engineer must run to begin the process?


A. nkp push bundle --bundle


B. docker load -i konvoy-bootstrap-image-v2.12.0.tar


C. tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz


D. nkp create cluster nutanix





C.
  tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Explanation
An air-gapped installation requires all necessary files to be physically transported into the isolated environment. The NKP air-gapped bundle is a single, compressed tarball file that contains all the container images, binaries, and configuration files needed for the installation. Before any of these components can be used, they must be extracted from this archive.

Why Option C is Correct:
The tar -xzvf command is used to extract the contents of a .tar.gz file. This is the absolute first step. You must extract the bundle's contents to access the:

NKP CLI binary (nkp)

Bootstrap container image tarball (e.g., konvoy-bootstrap-image-*.tar)

Other necessary installation files and charts.
Without performing this step, none of the subsequent commands can be run because the required files do not exist on the filesystem.

Why the Other Options are Incorrect:

A. nkp push bundle --bundle:
This command is used to push the extracted container images to a private registry. However, you cannot run the nkp command until after the bundle has been extracted, as the nkp binary itself is inside the tarball.

B. docker load -i konvoy-bootstrap-image-v2.12.0.tar:
This command loads the bootstrap container image into the local Docker daemon. This is a critical step, but it can only be performed after the main bundle tarball has been extracted, as this specific .tar file is one of the contents inside the main bundle.

D. nkp create cluster nutanix:
This command is used to initiate the cluster creation process. This is one of the final steps in the procedure and requires that all previous steps (extracting the bundle, loading the bootstrap image, pushing images to a registry) have been completed successfully.

Reference / Key Concept
This question tests the understanding of the sequential procedure for an air-gapped NKP installation. The logical, unskippable order of operations is crucial.

Extract the Bundle: Unpack the delivered tarball to make its contents available. (This is the first command)

Load the Bootstrap Image: Use docker load to make the NKP bootstrap container available locally.

Initialize the NKP CLI: The nkp binary, now extracted, can be used.

Push Images to Registry: Use nkp push bundle to upload all container images from the extracted bundle to your private, air-gapped container registry.

Create Cluster: Finally, use nkp create cluster to deploy the cluster, which will now pull its images from the internal registry.

You cannot run a tool before you have extracted it from its packaging. Therefore, extracting the main bundle tarball is the mandatory first step.
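
Putting the sequence together, a minimal command sketch might look like the following; the filenames come from the answer options, while the registry address and the image-bundle path are placeholders:

  # 1. Extract the air-gapped bundle (the mandatory first step)
  tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz
  # 2. Load the bootstrap image into the local Docker daemon
  docker load -i konvoy-bootstrap-image-v2.12.0.tar
  # 3. Push the bundled container images to the private registry
  nkp push bundle --bundle <path-to-image-bundle> --to-registry=registry.example.internal
  # 4. Create the cluster, which now pulls images from the internal registry
  nkp create cluster nutanix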

A company has been modernizing on cloud-native platforms for the past few years and has been running some small consumer support utilities on their production NKP cluster. After a thorough testing and QA cycle with simulated workloads on a development cluster, the company is ready to bring their online retail application into the fold. While they have sufficient system resources to scale the NKP cluster properly from a performance standpoint, they also want to ensure they properly scale their monitoring stack’s resource settings to retain a sufficient amount of data to see how overall system resource utilization trends for the NKP cluster over several months’ time with the added workloads. Which NKP Platform Application component should the company be most concerned with adjusting, and how should their Platform Engineer adjust it?


A. Adjust the number of replicas for the Fluent Bit deployment, as well as increase the amount of storage available for use by the NKP cluster.


B. Adjust the number of replicas for the Prometheus deployment, as well as increase the amount of storage available for use by the NKP cluster.


C. Adjust the resource settings for Fluent Bit by increasing its container resource limits and memory settings, as well as its storage.


D. Adjust the resource settings for Prometheus by increasing its container resource limits and memory settings, as well as its storage.





D.
  Adjust the resource settings for Prometheus by increasing its container resource limits and memory settings, as well as its storage.

Explanation
The question focuses on scaling the monitoring stack to handle a larger workload and, most critically, to retain a sufficient amount of data... over several months' time. This directly points to the components responsible for metrics collection, storage, and long-term retention.

Why Option D is Correct:

Role of Prometheus:
Prometheus is the core time-series database and monitoring component in the NKP (and standard Kubernetes) stack. It is directly responsible for:

Scraping and Storing Metrics:
It pulls metrics from the Kubernetes API, nodes, pods, and applications.

Data Retention:
It stores these metrics on a persistent volume for querying and analysis.

Scaling Needs:
Adding a major new application like an online retail system will drastically increase the number of metrics collected (more pods, more services, more nodes). To handle this and retain data for "several months," you must:

Increase CPU/Memory Limits:
More metrics require more computational power for ingestion, compression, and querying.

Increase Storage:
The primary constraint for long-term data retention is disk space. Retaining months of high-resolution metrics from a larger cluster demands a significant increase in Prometheus' persistent storage capacity.

Why the Other Options are Incorrect:

A. Adjust the number of replicas for the Fluent Bit deployment, as well as increase the amount of storage available for use by the NKP cluster.
Fluent Bit is for log collection and forwarding, not metrics. Increasing its replicas or cluster-wide storage does not address the specific need to store and query historical metric data. The storage increase is also too vague; it must be targeted at the Prometheus component.

B. Adjust the number of replicas for the Prometheus deployment, as well as increase the amount of storage available for use by the NKP cluster.
While increasing Prometheus storage is correct, simply increasing the number of replicas is not the standard approach for vertical scaling. Prometheus is typically deployed as a single, stateful instance in this context. Horizontal scaling involves a more complex federation or Thanos architecture, which is not implied here. The correct approach is to scale the existing instance vertically (more CPU, RAM, and storage).

C. Adjust the resource settings for Fluent Bit by increasing its container resource limits and memory settings, as well as its storage.
This is incorrect for the same reason as option A. Fluent Bit handles logs. The question is explicitly about "system resource utilization trends," which are metrics, not logs. Fluent Bit does not store long-term data; it processes and forwards it.

Reference / Key Concept
This question tests the understanding of the NKP Platform Applications and their distinct roles:

Prometheus:
The metrics monitoring and time-series database system. Scaling for long-term data retention and increased load directly involves adjusting Prometheus' resource limits, requests, and most importantly, its storage size in its PersistentVolumeClaim (PVC).

Fluent Bit:
The log processing and forwarding agent. It is not used for storing historical data and does not track system resource utilization metrics.

When the requirement is to "retain a sufficient amount of [metrics] data," the component to adjust is always Prometheus, and the key adjustment is to its storage capacity and compute resources.
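
For context, the scaling is typically done through a values override for the Prometheus platform application; the keys below follow the kube-prometheus-stack chart layout commonly used for it, and the retention, resource, and storage figures are illustrative only:

  prometheus:
    prometheusSpec:
      retention: 90d              # keep roughly three months of metrics
      resources:
        limits:
          cpu: "2"
          memory: 16Gi
      storageSpec:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 200Gi    # larger PVC for long-term retention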

Which NKP-supported infrastructure will not receive CAPI components when an NKP cluster is deployed to it?


A. vSphere


B. GCP


C. Nutanix


D. AKS





D.
  AKS

Explanation
This question tests the understanding of the Cluster API (CAPI) model and how NKP uses it to manage Kubernetes clusters across different infrastructures.

Why Option D is Correct:
AKS (Azure Kubernetes Service) is a managed Kubernetes service. When you deploy to AKS, you are not provisioning the underlying virtual machines, networks, or Kubernetes control plane nodes yourself. Instead, you are asking Azure to provide a fully managed, pre-configured Kubernetes cluster.

NKP uses CAPI to declaratively manage the entire lifecycle of a cluster, including creating the VMs, installing Kubernetes, and managing the control plane. Since AKS abstracts away the underlying infrastructure and control plane, there is no need for NKP to install CAPI components (such as the core cluster-api controllers or provider-specific controllers like cluster-api-provider-azure). The management is done through the Azure API, not through CAPI custom resources acting on IaaS.

Why the Other Options are Incorrect:

A. vSphere, B. GCP, and C. Nutanix are all infrastructure platforms where NKP uses CAPI to provision and manage the cluster.
For vSphere, NKP uses the cluster-api-provider-vsphere, and for Nutanix (AHV) it uses the cluster-api-provider-nutanix, to create VMs and deploy Kubernetes onto them.

For GCP, NKP uses the cluster-api-provider-gcp to create Google Compute Engine instances and deploy Kubernetes.

In all these cases, the NKP management cluster installs and uses the respective CAPI provider components to manage the workload clusters. The key differentiator is that these are IaaS platforms where you have control over the VMs, unlike a managed service like AKS.

Reference / Key Concept

Cluster API (CAPI):
A Kubernetes sub-project that provides declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. It uses a management cluster to manage the lifecycle of workload clusters.

NKP as a Management Cluster:
NKP itself acts as a CAPI-based management cluster. When you deploy a new workload cluster to an IaaS provider (vSphere, Nutanix, GCP, AWS), it installs the corresponding CAPI infrastructure provider into the management cluster to handle that deployment.

Managed Services vs. IaaS:
Managed Kubernetes services like AKS, EKS, and GKE are a higher level of abstraction. The cloud provider manages the control plane and often the worker nodes. You cannot and do not need to install CAPI components to manage them; you interact with the cloud provider's own API and control plane.

In summary, CAPI is for managing clusters on infrastructure you control (IaaS). It is not used for clusters that are already managed by a cloud provider.
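
A rough way to confirm this from an NKP management cluster is to look for the CAPI controller pods; the namespace names below are the usual defaults for the core and provider controllers and may differ by installation:

  kubectl get pods -n capi-system    # core Cluster API controllers
  kubectl get pods -n capv-system    # vSphere infrastructure provider, if deployed
  kubectl get pods -n capx-system    # Nutanix infrastructure provider, if deployed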

After loading the NKP bundles to a private registry in an air-gapped environment, a Platform Engineer now needs the Konvoy bootstrap image to create the bootstrap cluster. The Konvoy image has not been loaded into the registry. Which is the most viable command to load the Konvoy bootstrap image on the bastion host?


A. docker load -i konvoy-bootstrap-image-.tar


B. docker image tag konvoy-bootstrap-image-.tar version docker.io/konvoy-bootstrap version


C. nkp push bundle --bundle konvoy-bootstrap-image-.tar --to-registry=


D. nkp load image -f konvoy-bootstrap-image-.tar --to-registry=





A.
  docker load -i konvoy-bootstrap-image-.tar

Explanation
The question specifies a critical detail: the engineer needs the Konvoy bootstrap image to create the bootstrap cluster on the bastion host, and the image has not been loaded into the registry. The bootstrap cluster is a temporary, local Docker-based cluster used to orchestrate the installation of the final, permanent NKP cluster.

Why Option A is Correct:
The docker load command is used to load a container image from a tarball directly into the local Docker daemon's image store. Since the next step is to create a local bootstrap cluster on this same bastion host, the image must be available to the local Docker engine. This command accomplishes that exact task efficiently and is the standard, required step before running nkp create cluster.
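
A minimal sketch of this step on the bastion host (the versioned filename matches the one shipped in the air-gapped bundle and is shown for illustration):

  docker load -i konvoy-bootstrap-image-v2.12.0.tar
  docker images | grep konvoy-bootstrap    # confirm the image is now in the local daemon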

Why the Other Options are Incorrect:

B. docker image tag konvoy-bootstrap-image-.tar version docker.io/konvoy-bootstrap version:
This command is syntactically incorrect and conceptually flawed. The docker image tag command operates on an image that is already loaded into Docker, not on a tarball file. You must use docker load first.

C. nkp push bundle --bundle konvoy-bootstrap-image-.tar --to-registry=:
This command is used to push images to a private registry. While this is a necessary step for the container images needed by the final NKP cluster, the bootstrap image is specifically required by the local Docker daemon on the bastion host to initiate the process. Pushing it to a remote registry does not make it available locally for the bootstrap cluster.

D. nkp load image -f konvoy-bootstrap-image-.tar --to-registry=:
This command is not a standard nkp command. The nkp push bundle command is used for pushing a collection of images, and there is no separate nkp load image command for loading a single image into a local Docker daemon. This is a distractor.

Reference / Key Concept
This question tests the understanding of the two-stage process for an air-gapped NKP installation and the distinct purposes of the local Docker daemon versus the private container registry.

Bootstrap Cluster (Local):
A temporary, single-node cluster run locally on the bastion host using a kind (Kubernetes in Docker) or k3d cluster. This requires the konvoy-bootstrap image to be present in the local Docker daemon. The command to achieve this is docker load.

Permanent NKP Cluster (Remote):
The final, production-ready cluster that runs on your infrastructure (e.g., Nutanix AHV, vSphere). This cluster pulls all its application images (like Prometheus, Fluent Bit, etc.) from the private registry. The command to populate that registry is nkp push bundle.

The engineer is currently on step 1, preparing the local bastion host. Therefore, the only viable command is docker load.

A Platform Engineer is deploying an NKP workload cluster using the nkp create cluster vsphere command. The cluster will be utilized by the company’s code-green team and the engineer has already created a code-green NKP workspace on the NKP management cluster.
After issuing the deploy command, the engineer monitored the build using the nkp describe cluster command and confirmed it completed successfully. However, a few hours later, after logging into the NKP UI, the engineer checked the code-green NKP workspace and saw that the NKP workload cluster was not there.
What is the likely reason the NKP workload cluster is not in the code-green NKP workspace?


A. The vSphere cluster cannot be displayed in the NKP UI unless its Kubernetes version is within ‘N - 1’ versions of the NKP management cluster’s Kubernetes version.


B. The vSphere service account credentials had expired prior to the engineer’s attempt to view the cluster in the NKP UI. Once the credentials are refreshed, the vSphere cluster will reappear in the NKP workspace.


C. The engineer did not supply the --namespace code-green parameter as part of the nkp create cluster vsphere command, therefore it was created in the default workspace and needs to be manually attached.


D. NKP vSphere clusters cannot be assigned NKP workspaces and instead are assigned the default NKP workspace. The cluster can be viewed from this workspace instead





C.
  The engineer did not supply the --namespace code-green parameter as part of the nkp create cluster vsphere command, therefore it was created in the default workspace and needs to be manually attached.

Explanation
In NKP, a workspace is a Kubernetes namespace on the management cluster that provides a logical boundary and access control for a set of resources, including workload clusters. When you create a workload cluster, you must explicitly specify which workspace (namespace) it belongs to.

Why Option C is Correct:
The scenario states that a code-green workspace was created, but the engineer used the generic nkp create cluster vsphere command. If the --namespace or -n flag is omitted, the NKP CLI will default to deploying the cluster in the default workspace. This explains why the cluster deployed successfully (as verified by nkp describe cluster) but is not visible in the intended code-green workspace in the UI. The cluster exists, but it's in the wrong logical container.
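
For illustration, running the following against the management cluster shows where the Cluster object actually landed, and the corrected create command simply adds the namespace flag (all other required provider flags are omitted here as placeholders):

  kubectl get clusters -n default       # the new workload cluster appears here
  kubectl get clusters -n code-green    # ...but not in the intended workspace

  nkp create cluster vsphere --namespace code-green <other required flags>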

Why the Other Options are Incorrect:

A. The vSphere cluster cannot be displayed in the NKP UI unless its Kubernetes version is within ‘N - 1’ versions...
This is not a standard restriction for UI visibility. NKP can manage workload clusters with various supported Kubernetes versions, and the version disparity would more likely cause functional issues rather than simply hiding the cluster from the UI.

B. The vSphere service account credentials had expired...
Expired credentials would have prevented the cluster from being built successfully in the first place. The nkp describe cluster command confirmed the build was successful, meaning the credentials were valid at the time of deployment. Furthermore, credential expiration after the fact would typically not cause a cluster to vanish from the UI; it would more likely show it in an error state.

D. NKP vSphere clusters cannot be assigned NKP workspaces...
This is factually false. NKP workload clusters, regardless of the underlying infrastructure provider (vSphere, Nutanix, AWS, etc.), can and should be assigned to specific workspaces for proper multi-tenancy and organization. They are not forced into the default workspace.

Reference / Key Concept
This question tests the understanding of NKP Workspaces and Namespaces and the imperative use of the NKP CLI.

Workspace = Namespace:
In the context of the NKP management cluster, a "workspace" is implemented as a Kubernetes namespace. All NKP custom resources (like Clusters) for a given project must be created within that namespace.

Imperative CLI Requires Explicit Targeting:
When using imperative commands like nkp create cluster, you must explicitly tell the system where to place the resource using the --namespace flag. There is no automatic assignment based on the user's context in the UI.

Solution:
The engineer has two main options to resolve this:

Recreate the Cluster:
The most straightforward method is to delete the cluster from the default workspace and redeploy it with the correct --namespace code-green flag.

Move the Cluster (if supported):
In some cases, it might be possible to move the cluster's resources to the correct namespace, but this is a more advanced and less common operation. Recreation is typically the recommended path.

In summary, the most common reason for a successfully created resource not appearing in the expected UI location is that it was created in the wrong namespace due to a missing --namespace flag in the CLI command.

What is a prerequisite for upgrading an NKP license to Ultimate?


A. Size the Sidecar containers appropriately to support the installation of default platform services.


B. Size the ETCD nodes appropriately to support the installation of default platform services.


C. Size the Control Plane nodes appropriately to support the installation of default platform services.


D. Size the Worker nodes appropriately to support the installation of default platform services.





C.
  Size the Control Plane nodes appropriately to support the installation of default platform services.

Explanation
Upgrading an NKP license to the Ultimate tier unlocks advanced platform services, such as service mesh, security scanning, and more. These services are deployed as part of the management stack and run on the control plane nodes of the NKP management cluster.

Why Option C is Correct:
The NKP Ultimate license enables a suite of additional, resource-intensive platform services. These services (e.g., Istio, Harbor, Grafana) are deployed as pods on the control plane nodes of the management cluster. If these nodes are not sized with sufficient CPU and memory to host these new workloads, the installation will fail or the cluster will become unstable. Therefore, ensuring the control plane nodes are appropriately sized is a critical prerequisite.

Why the Other Options are Incorrect:

A. Size the Sidecar containers appropriately...
"Sidecar containers" is a generic Kubernetes term for auxiliary containers in a pod. It is not a node-level resource category that needs pre-sizing for a license upgrade. This is a distractor.

B. Size the ETCD nodes appropriately...
In NKP and most Kubernetes distributions, the etcd service runs as a static pod on the control plane nodes. There are no dedicated "ETCD nodes." Sizing for etcd is encompassed within the overall sizing of the control plane nodes.

D. Size the Worker nodes appropriately...
While worker nodes run application workloads, the platform services enabled by the Ultimate license are specifically installed on the management cluster's control plane. The worker nodes in the management cluster are not the primary target for this prerequisite. The focus is correctly on the control plane, which hosts the core management services.

Reference / Key Concept
This question tests the understanding of the NKP licensing tiers and the architectural impact of enabling advanced features.

NKP Licensing Tiers:
NKP offers different tiers (e.g., Pro, Ultimate) that unlock different sets of features. The Ultimate tier includes a comprehensive set of platform services.

Control Plane Responsibility:
The control plane nodes in the NKP management cluster are responsible for hosting the core Kubernetes components (API Server, Scheduler, Controller Manager) as well as the NKP management stack and all enabled platform applications. Adding significant new services to this stack requires the underlying VMs (the control plane nodes) to have the necessary resources.

Prerequisite Checking:
The NKP documentation and pre-upgrade checks will explicitly require that the control plane nodes meet minimum CPU and memory requirements to support the Ultimate package deployment. Failure to meet these prerequisites will block the upgrade.

In summary, the key infrastructure prerequisite for a license upgrade that adds management services is ensuring the control plane nodes have the capacity to run them.
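
Before upgrading, a quick way to review the current control plane capacity on the management cluster is shown below; the label selector is the standard Kubernetes control-plane role label, and older clusters may use node-role.kubernetes.io/master instead:

  kubectl get nodes -l node-role.kubernetes.io/control-plane \
    -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory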

In a telecom company, two teams were working on the development of two different applications: ApplicationA and ApplicationB. ApplicationA’s development team was excited about the release of their new functionality. However, upon deploying their application, they noticed that performance was slow. After investigating, they discovered that the ApplicationB team was consuming the majority of the cluster’s resources, affecting all other teams. How can this problem be mitigated?


A. Implementing Quotas and Limit Ranges


B. Setting up Network Policies


C. Configuring RBAC


D. Implementing Continuous Deployment (CD)





A.
  Implementing Quotas and Limit Ranges

Explanation
The core problem described is a "noisy neighbor" issue within a shared Kubernetes cluster. One team (ApplicationB) is consuming excessive resources (CPU and Memory), which starves other applications (like ApplicationA) of the resources they need to perform correctly.

Why Option A is Correct:
Kubernetes provides two primary mechanisms to control resource consumption at the namespace level, which is the standard way to isolate teams or projects:

ResourceQuotas:
A ResourceQuota defines aggregate resource limits for a whole namespace. It can limit the total amount of CPU, memory, and storage that all pods within a namespace (e.g., the applicationb team's namespace) can consume. This would prevent one team from monopolizing the cluster's resources.

LimitRanges:
A LimitRange sets default resource limits and default requests for CPU and memory for containers within a namespace. If the ApplicationB team deploys pods without specifying resource requests and limits, a LimitRange can automatically apply them, ensuring no pod can run without defined constraints.

Together, Quotas and Limit Ranges are the direct and standard Kubernetes method to mitigate resource-based interference between tenants in a shared cluster.
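
As a hedged sketch, a combined manifest for the ApplicationB team's namespace might look like the following; the namespace name and the numbers are purely illustrative, and the manifest would be saved to a file (for example team-b-quota.yaml) and applied with kubectl apply -f team-b-quota.yaml:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: team-b-quota
    namespace: application-b      # hypothetical team namespace
  spec:
    hard:
      requests.cpu: "20"          # total CPU requests allowed in the namespace
      requests.memory: 64Gi
      limits.cpu: "40"
      limits.memory: 128Gi
  ---
  apiVersion: v1
  kind: LimitRange
  metadata:
    name: team-b-defaults
    namespace: application-b
  spec:
    limits:
    - type: Container
      default:                    # limits applied when a container declares none
        cpu: 500m
        memory: 512Mi
      defaultRequest:             # requests applied when a container declares none
        cpu: 250m
        memory: 256Mi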

Why the Other Options are Incorrect:

B. Setting up Network Policies:
Network Policies control the flow of network traffic between pods (e.g., which services can talk to each other). They are a security tool for network isolation and have no impact on managing CPU or memory consumption.

C. Configuring RBAC:
Role-Based Access Control (RBAC) governs permissions—what users and service accounts are allowed to do (e.g., create pods, view secrets). It is crucial for security but does not limit the amount of computational resources a pod can use.

D. Implementing Continuous Deployment (CD):
A CD pipeline automates the process of deploying software. While best practices like resource limits can be enforced through a CD pipeline, the CD process itself is not the mechanism that mitigates resource contention. The mitigation is achieved by the Kubernetes resource management objects (Quotas, Limits) that the CD pipeline would apply.

Reference / Key Concept
This question tests the understanding of Kubernetes multi-tenancy and resource management.

Multi-tenancy:
The practice of running multiple applications or teams (tenants) on a single shared cluster. The primary challenge is providing isolation, especially for resources.

Resource Management Tools:

ResourceQuotas:
Protect the cluster from a single tenant. (Answer to "How do we stop one team from using everything?")

LimitRanges:
Protect a tenant from itself and enforce good hygiene. (Answer to "How do we ensure every pod has limits?")

Requests and Limits (on individual Pods):
The fundamental building blocks that tell the Kubernetes scheduler and kubelet how to manage a pod's resources.

In a telecom company or any organization with multiple teams sharing an NKP cluster, implementing ResourceQuotas per namespace/workspace is the essential step to ensure fair resource allocation and prevent the scenario described.

A Platform Engineer has been tasked with building a custom image for the deployment of NKP management and worker nodes. The engineer needs to ensure that the proper package versions are used when creating these images. The security team has only authorized version 1.30.5 of Kubernetes and version 1.7.22 of containerd. Where should the engineer go to verify that this is the version being used when building the custom image?


A. config/pulumi/vars/pulumi.kib.config


B. terraform/vars/default/terraform.tfvars


C. ansible/group_vars/all/defaults.yaml


D. The custom image's .env file





C.
  ansible/group_vars/all/defaults.yaml

Explanation
The Nutanix Kubernetes Platform (NKP) Image Builder uses Ansible as the underlying configuration management tool to define the state of the operating system image, including which software packages are installed.

Why Option C is Correct:
Within the NKP Image Builder project structure, the Ansible variables that control the versions of core components like Kubernetes and container runtimes are defined in the ansible/group_vars/all/defaults.yaml file. This is the central location where you would set variables like:

kubernetes_version: 1.30.5

containerd_version: 1.7.22

By checking and modifying this file, the engineer can explicitly lock in the versions mandated by the security team, ensuring the custom image is built with the correct, authorized software.
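
A quick way to verify the pinned versions before building, using the variable names referenced above (the exact contents of the defaults file can vary between NIB releases):

  grep -E 'kubernetes_version|containerd_version' ansible/group_vars/all/defaults.yaml
  # expected (illustrative) output:
  # kubernetes_version: "1.30.5"
  # containerd_version: "1.7.22"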

Why the Other Options are Incorrect:

A. config/pulumi/vars/pulumi.kib.config:
Pulumi is used in some Nutanix automation contexts for infrastructure provisioning, but it is not the primary tool for defining package versions within the OS image itself in NKP Image Builder. The core image configuration is handled by Ansible.

B. terraform/vars/default/terraform.tfvars:
Terraform is an infrastructure-as-code tool used to provision cloud resources (VMs, networks, etc.). It is not used to define the software packages installed on a machine image. Its variables file would not contain Kubernetes or containerd version numbers.

D. The custom image's .env file:
While some applications use .env files for configuration, the NKP Image Builder project does not use a root-level .env file to define its core build parameters. The canonical and documented location for these variables is within the Ansible group_vars.

Reference / Key Concept
This question tests the understanding of the NKP Image Builder (NIB) architecture and its use of Ansible.

Ansible for Image State Management:
NKP Image Builder leverages Ansible playbooks and roles to install and configure all necessary software on a base OS image. The versions of this software are controlled by Ansible variables.

Centralized Variable Location:
The ansible/group_vars/all/ directory is where variables that apply to all hosts (in this case, all nodes in the image) are defined. The defaults.yaml file within this directory is the standard place to set these key-value pairs.

Compliance and Governance:
For organizations with strict security policies, the ability to pin software versions in this file is critical for maintaining a consistent, secure, and compliant software bill of materials (SBOM) for their Kubernetes nodes.

Therefore, to verify and control the versions of Kubernetes and containerd, the engineer must inspect and configure the ansible/group_vars/all/defaults.yaml file.

A Platform Engineer is attaching existing Kubernetes clusters to NKP, but a particular Kubernetes Amazon EKS cluster is getting errors with application deployments. These errors are related to persistent volumes. What could be the issue, and what can the engineer do?


A. The storage appliance is having issues. The storage engineer should be contacted to take a look.


B. There is no compatible storage to be attached to the EKS cluster. Ask for compatible storage.


C. There is no default StorageClass. Storage classes should be reviewed, and only one should be set as default.


D. There could be a misconfiguration in the ConfigMap. It should be adjusted to NKP requirements.





C.
  There is no default StorageClass. Storage classes should be reviewed, and only one should be set as default.

Explanation
The problem is specifically with persistent volume (PV) provisioning in an Amazon EKS cluster that is being attached to NKP. In Kubernetes, when a PersistentVolumeClaim (PVC) is created without specifying a storageClassName, the cluster's default StorageClass is used to dynamically provision the volume.

Why Option C is Correct:
A common issue, especially with attached clusters like EKS, is the absence of a defined default StorageClass. If no StorageClass is marked as default, any PVC that does not explicitly name a StorageClass will fail to be provisioned, leading to the described errors. The solution is for the engineer to:

Review StorageClasses:
Run kubectl get storageclass to see the available classes.

Check for a Default:
Look for a StorageClass with the (default) annotation.

Set a Default:
If none exists, set one using kubectl patch storageclass <storage-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'. EKS typically provides a gp2 or gp3 StorageClass that can be set as the default.
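
Put together, the check and the fix might look like this on the attached EKS cluster; gp2 is used as an example class name, so substitute whichever class actually exists:

  kubectl get storageclass
  kubectl patch storageclass gp2 \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  kubectl get storageclass        # gp2 should now be annotated as (default)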

Why the Other Options are Incorrect:

A. The storage appliance is having issues...
This is too vague and unlikely for an Amazon EKS cluster. The "storage appliance" in EKS is the managed AWS cloud storage service (like EBS), which is highly reliable. The problem is almost certainly a configuration issue within the Kubernetes cluster itself, not a widespread outage of the underlying AWS service.

B. There is no compatible storage to be attached...
This is incorrect. EKS clusters have native access to AWS storage solutions like Elastic Block Store (EBS) through the EBS CSI driver. The compatible storage exists; the issue is the Kubernetes mechanism for automatically provisioning it (the default StorageClass) is not configured.

D. There could be a misconfiguration in the ConfigMap...
While a misconfiguration in a critical component (like the CSI driver's ConfigMap) could cause issues, the most common and fundamental reason for generic PV deployment failures on a newly attached cluster, especially EKS, is the absence of a default StorageClass. This is the first thing an engineer should check.

Reference / Key Concept
This question tests the understanding of dynamic persistent volume provisioning in Kubernetes and a common configuration pitfall when attaching external clusters.

StorageClass:
Defines a "class" of storage (e.g., "fast SSD," "slow HDD") and the provisioner to use.

Default StorageClass:
A crucial convenience feature. When a PVC is created without a storageClassName, the cluster's default StorageClass is used. If it doesn't exist, the PVC will remain unbound indefinitely.

EKS Specifics:
When creating an EKS cluster, the Amazon EBS CSI driver is often installed as an add-on, which provides the gp2 or gp3 StorageClass. However, it may not be set as the default, or the default might have been removed. This is a standard troubleshooting step for PV issues in EKS.

Therefore, the most likely cause and first action for the engineer is to verify and configure a default StorageClass.

