Topic 4: Mixed Question Set
Your company is concerned that when developers introduce open source libraries, it creates licensing compliance issues. You need to add an automated process to the build pipeline to detect when common open source libraries are added to the code base. What should you use?
A. Microsoft Visual SourceSafe
B. PDM
C. WhiteSource
D. OWASP ZAP
Explanation:
This question addresses software composition analysis (SCA), a security and compliance practice for identifying open-source components and their licenses in a codebase. The requirement is to automate this detection within a CI/CD pipeline. Among the options, only one is a specialized, modern SCA tool designed to integrate into a build pipeline and scan for open-source libraries and licenses.
Correct Option:
C. WhiteSource
WhiteSource (now Mend) is a dedicated Software Composition Analysis (SCA) tool.
It automatically scans project dependencies, identifies all open-source components, and checks them against vulnerability databases and license compliance policies.
It is designed to integrate seamlessly into build pipelines (like Azure Pipelines) to provide fast feedback and block builds if problematic licenses or vulnerabilities are found.
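WhiteSource itself is enabled through a marketplace extension and a build task rather than hand-written scripts, but as a toy sketch of the first step such a tool automates - inventorying every open-source package the code base references - consider the following PowerShell (SDK-style .NET projects assumed; a real SCA tool adds license and vulnerability lookups on top of this):

```powershell
# Toy illustration only: list every NuGet PackageReference in the repo.
Get-ChildItem -Recurse -Filter *.csproj | ForEach-Object {
    Select-Xml -Path $_.FullName -XPath '//PackageReference' | ForEach-Object {
        [pscustomobject]@{
            Project = Split-Path $_.Path -Leaf
            Package = $_.Node.Include
            Version = $_.Node.Version
        }
    }
} | Sort-Object Package, Version -Unique | Format-Table -AutoSize
```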
Incorrect Option:
A. Microsoft Visual SourceSafe:
This is an obsolete, centralized source code control system. It has no capability for scanning or analyzing library licenses and is irrelevant in a modern DevOps pipeline context.
B. PDM (Product Data Management):
PDM systems are used in manufacturing and engineering to manage technical data, bills of materials, and product lifecycle data. They are not used for scanning source code for open-source licenses.
D. OWASP ZAP (Zed Attack Proxy):
This is a dynamic application security testing (DAST) tool used to find runtime security vulnerabilities in a running web application. It does not perform software composition analysis or license detection.
Reference:
Microsoft Learn - "Manage application configuration and secrets" (SCA tools are mentioned in the security context). The key concept is integrating Software Composition Analysis into the CI/CD pipeline.
Your company has a hybrid cloud between Azure and Azure Stack. The company uses Azure DevOps for its CI/CD pipelines. Some applications are built by using Erlang and Hack. You need to ensure that Erlang and Hack are supported as part of the build strategy across the hybrid cloud. The solution must minimize management overhead. What should you use to execute the build pipeline?
A. Azure DevOps self-hosted agents on Azure DevTest Labs virtual machines
B. Azure DevOps self-hosted agents on virtual machines that run on Azure Stack
C. Azure DevOps self-hosted agents on Hyper-V virtual machines
D. a Microsoft-hosted agent
Explanation:
This scenario involves a hybrid Azure/Azure Stack environment and the need to build applications in less common languages (Erlang, Hack). Microsoft-hosted agents do not have these runtimes pre-installed and cannot reach private Azure Stack resources, so self-hosted agents are required. Hosting those agents on VMs inside Azure Stack provides full control over the custom toolchain while keeping the agents next to the hybrid workloads, which minimizes the additional infrastructure to manage.
Correct Option:
B. Azure DevOps self-hosted agents on virtual machines that run on Azure Stack
This is the most targeted and efficient solution. You need agents accessible to resources on Azure Stack. Installing custom self-hosted agents on Azure Stack VMs allows you to install and manage Erlang, Hack, and any other required tooling directly.
It minimizes overhead because Azure Stack VMs can be managed with standard IaC/automation tools, and the agents are dedicated to your hybrid environment's specific needs, avoiding the complexity of managing on-premises Hyper-V infrastructure.
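As a rough sketch of the setup (the organization URL, pool name, agent version, and PAT below are assumptions), registering a self-hosted agent on an Azure Stack VM takes only a few commands:

```powershell
# Download and extract the agent package (version shown is illustrative).
Invoke-WebRequest -Uri 'https://vstsagentpackage.azureedge.net/agent/3.236.1/vsts-agent-win-x64-3.236.1.zip' `
    -OutFile agent.zip
Expand-Archive agent.zip -DestinationPath C:\agent

# Register against a hypothetical organization and pool, then run as a service.
Set-Location C:\agent
.\config.cmd --unattended --url https://dev.azure.com/contoso --auth pat --token $env:AZDO_PAT `
    --pool AzureStackPool --agent $env:COMPUTERNAME --runAsService
```

The Erlang and Hack toolchains are then installed on the VM like any other software (for example, via a package manager or an IaC extension) and become available to every build that targets that pool.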
Incorrect Option:
A. Azure DevOps self-hosted agents on Azure DevTest Labs virtual machines:
While DevTest Labs can manage VMs, these would reside in Azure public cloud, not on Azure Stack. This may introduce network latency or policy issues when building/deploying to Azure Stack workloads and doesn't align as closely with the hybrid topology.
C. Azure DevOps self-hosted agents on Hyper-V virtual machines:
This implies managing traditional on-premises infrastructure, which increases management overhead for hardware, Hyper-V, and OS maintenance compared to using the Azure Stack managed fabric.
D. a Microsoft-hosted agent:
Microsoft-hosted agents offer minimal management overhead but do not allow installation of custom software like Erlang or Hack runtimes. They also cannot directly access resources within a private Azure Stack deployment, making them unsuitable for this hybrid build requirement.
Reference:
Microsoft Learn - "Deploy an Azure DevOps agent on Azure Stack Hub". The documentation explicitly covers using self-hosted agents with Azure Stack to build and deploy applications in a hybrid environment, providing the necessary control for custom toolchains.
You have an Azure DevOps project that contains a build pipeline. The build pipeline uses approximately 50 open source libraries. You need to ensure that all the open source libraries comply with your company’s licensing standards. Which service should you use?
A. Ansible
B. Maven
C. WhiteSource Bolt
D. Helm
Explanation:
This question is about enforcing open-source license compliance as part of a CI/CD pipeline. It specifically requires a service that can automatically scan and validate the licenses of numerous dependencies. The solution must integrate with Azure DevOps to provide gatekeeping and reporting. Among the options, only one is a dedicated service for Software Composition Analysis (SCA) and license compliance.
Correct Option:
C. WhiteSource Bolt
WhiteSource Bolt is a free Software Composition Analysis (SCA) service directly integrated into Azure DevOps.
It automatically scans builds for all open-source components, detects their licenses, and compares them against customizable policies.
It provides immediate feedback within the pipeline, failing builds if non-compliant licenses are found, thus ensuring the company's licensing standards are met.
Incorrect Option:
A. Ansible:
This is an infrastructure automation and configuration management tool. It is used for provisioning and managing servers, not for analyzing software dependencies or their licenses.
B. Maven:
This is a build automation and dependency management tool primarily for Java projects. While it manages library downloads, it does not inherently validate or enforce licensing compliance standards across all open-source components.
D. Helm:
This is a package manager for Kubernetes, used to define, install, and upgrade applications. It manages containerized application deployments, not the license compliance of application dependencies or libraries.
Reference:
Microsoft Learn - "WhiteSource Bolt" extension documentation in the Azure DevOps Marketplace. It describes the service as providing "continuous open source software security and compliance management" by scanning builds for vulnerabilities and license risks.
Your company creates a new Azure DevOps team.
You plan to use Azure DevOps for sprint planning.
You need to visualize the flow of your work by using an agile methodology.
Which Azure DevOps component should you use?
A. Kanban boards
B. sprint planning
C. delivery plans
D. portfolio backlogs
Explanation:
The requirement is to visualize the flow of work using an agile methodology. In Azure DevOps, the primary tool for this purpose is the Kanban board. While other options like sprint planning are part of the process, they are not the core visualization tool for tracking work items as they move through defined states (e.g., New, Active, Done). Kanban boards provide a real-time, column-based view of this workflow.
Correct Option:
A. Kanban boards
Kanban boards are the fundamental Azure DevOps component designed to visualize workflow in an agile context.
They display work items (User Stories, Tasks, Bugs) as cards moving across columns that represent workflow states (e.g., To Do, Doing, Done).
This provides at-a-glance insight into progress, bottlenecks, and the continuous flow of work, aligning directly with Kanban agile practices.
Incorrect Option:
B. Sprint planning:
This is an activity or meeting, not a visualization component. While performed using Azure Boards, it involves selecting backlog items for a sprint, not continuously visualizing their flow.
C. Delivery plans:
This is a component used to visualize a calendar view of scheduled work across multiple teams and sprints. It is used for longer-term roadmap planning, not for tracking the day-to-day flow of individual work items.
D. Portfolio backlogs:
These are hierarchical backlogs used to group and manage large initiatives (Epics, Features) that roll up into User Stories. They help organize and prioritize work but do not visualize the flow of items through development states.
Reference:
Microsoft Learn - "About Boards and Kanban". It states, "Use your Kanban board to update status, reassign work, and adjust the flow of work... It provides a visual interactive space for you to review and update your work."
You have a project in Azure DevOps named Project1 that contains two environments named environment1 and environment2. When a new version of Project1 is released, the latest version is deployed to environment2, and the previous version is redeployed to environment1. You need to distribute users across the environments. The solution must meet the following requirements:
- A percentage of users must be directed to the new version, with the remainder staying on the previous version.
- The percentage must be adjustable so that the rollout can proceed gradually.
What should you use?
A. web app deployment slots
B. Azure Traffic Manager
C. VIP swapping
D. Azure Load Balancer
Explanation:
This question describes a controlled feature rollout or canary release scenario. You have two environments (representing different versions) and need to split user traffic between them, gradually shifting more users to the new version. This requires a solution that can perform routing based on percentages and is tightly integrated with application deployment. While some options manage traffic, only one natively integrates with app deployment for this purpose.
Correct Option:
A. Web app deployment slots
Azure Web App deployment slots (like Staging and Production) are designed for this exact scenario.
You can deploy a new version to a staging slot (environment2 in the question). Using the "Traffic Routing" feature, you can specify a percentage of users to be routed to the new slot.
You can then gradually increase this percentage, performing a controlled rollout, before finally swapping the slots if required.
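For example, assuming an app named contoso-web with a slot named environment2 (both names hypothetical), percentage-based routing is a single Azure CLI call:

```powershell
# Send 10% of production traffic to the 'environment2' slot.
az webapp traffic-routing set --name contoso-web --resource-group app-rg `
    --distribution environment2=10

# Later, raise the percentage, or clear the rule to return 100% to production.
az webapp traffic-routing set --name contoso-web --resource-group app-rg `
    --distribution environment2=50
az webapp traffic-routing clear --name contoso-web --resource-group app-rg
```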
Incorrect Option:
B. Azure Traffic Manager:
This is a DNS-level traffic load balancer. It can distribute users across endpoints but is not designed for gradual, percentage-based rollouts of application versions from within the same app service. It works at the DNS/region level, not at the application deployment slot level.
C. VIP swapping:
This typically refers to swapping Virtual IP addresses, often in the context of swapping deployment slots. However, it is the action (a swap), not the mechanism for gradual user distribution. A full VIP swap instantly moves all traffic, not a subset that increases gradually.
D. Azure Load Balancer:
This is a Layer-4 (transport layer) load balancer that distributes traffic based on rules and health probes. It does not have built-in capabilities for percentage-based traffic splitting between different application versions for canary releases.
Reference:
Microsoft Learn - "Set up staging environments in Azure App Service". It details how to use deployment slots for testing in production and route traffic to different slots, enabling canary release patterns and A/B testing.
You have a GitHub repository that is integrated with Azure Boards. Azure Boards has a work item numbered 715. You need to ensure that when you commit source code in GitHub, the work item is updated automatically. What should you include in the commit comments?
A. @714
B. =715
C. the URL of the work item
D. AB#715
Explanation:
This question is about integrating GitHub commits with Azure Boards work items. For a link to be created automatically, the Azure Boards app for GitHub parses commit and pull request messages for a specific token that identifies a work item. Including the correct prefix and work item ID in the commit comment triggers the integration, which links the commit to the work item's Development section.
Correct Option:
D. AB#715
The syntax for linking a GitHub commit to an Azure Boards work item is AB# followed by the work item ID.
Adding AB#715 to the commit message automatically creates a link on work item 715 in Azure Boards under its Development section.
When combined with a transition keyword (for example, "Fixes AB#715"), the integration can also update the work item's state, providing traceability from code to work items.
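For example (the commit message text is illustrative):

```powershell
git commit -m "Correct rounding error in invoice totals, AB#715"
# Or transition the work item as part of the commit:
git commit -m "Fixes AB#715 - correct rounding error in invoice totals"
```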
Incorrect Option:
A. @714:
The @ symbol is used in GitHub to mention users or teams, not to reference Azure Boards work items (and 714 is not the work item in question). This syntax will not create a link in Azure Boards.
B. =715:
This is not a recognized linking token in either GitHub or Azure Boards. It creates no link and does not update the work item.
C. the URL of the work item:
Pasting the full URL may produce a clickable hyperlink in the commit text, but it does not trigger the integration that links and updates the work item in Azure Boards. The integration looks specifically for the AB#ID token.
Reference:
Microsoft Learn - "Link GitHub commits, pull requests, branches, and issues to work items in Azure Boards". The documentation describes linking by entering AB#{ID} within the text of a commit message, pull request, or branch name, for example, "Fixed a typo, AB#715".
You have an Azure Resource Manager template that deploys a multi-tier application. You need to prevent the user who performs the deployment from viewing the account credentials and connection strings used by the application. What should you use?
A. an Azure Resource Manager parameter file
B. an Azure Storage table
C. an Appsettings.json file
D. Azure Key Vault
E. a Web.config file
Explanation:
This question focuses on securely managing sensitive configuration data, such as credentials and connection strings, during an infrastructure deployment. The core requirement is to prevent the user performing the deployment from viewing the secrets. This requires a solution where secrets are stored in a centralized, access-controlled service that can be referenced, but not retrieved, by the deployment process.
Correct Option:
D. Azure Key Vault
Azure Key Vault is the dedicated Azure service for managing secrets, keys, and certificates securely.
The ARM template can reference a secret stored in Key Vault (e.g., "reference": {"keyVault": {...}}). During deployment, the resource provider retrieves the secret directly from Key Vault, and the secret value is never exposed in the template, parameters, or deployment logs.
Access is controlled via Azure RBAC and Key Vault access policies, ensuring the deploying user cannot view the secret if not explicitly granted permission.
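A minimal sketch of the pattern, assuming a vault named contoso-kv holding a secret named SqlConnectionString (and with the vault's enabledForTemplateDeployment property set): the parameter file carries only a pointer to the secret, never its value.

```powershell
# azuredeploy.parameters.json - the secret VALUE never appears here,
# only a reference to it; vault and secret names are hypothetical.
@'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sqlConnectionString": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<sub-id>/resourceGroups/secrets-rg/providers/Microsoft.KeyVault/vaults/contoso-kv"
        },
        "secretName": "SqlConnectionString"
      }
    }
  }
}
'@ | Set-Content azuredeploy.parameters.json

# Deploy; Resource Manager pulls the secret directly from Key Vault.
New-AzResourceGroupDeployment -ResourceGroupName app-rg `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
```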
Incorrect Option:
A. an Azure Resource Manager parameter file:
Parameter files (.json) are commonly used to pass values to a template. If secrets are stored directly in a parameter file, they are visible in plaintext, which does not meet the security requirement.
B. an Azure Storage table:
While storage tables can store data, they are not designed for secret management. Sensitive data would be stored in plaintext or require custom encryption, and access control is less granular than Key Vault.
C. an Appsettings.json file / E. a Web.config file:
These are application configuration files. Storing secrets in these files, especially within source control or a deployment artifact, is an insecure practice as they are typically in plaintext and would be visible to anyone with access to the deployment package or logs.
Reference:
Microsoft Learn - "Use Azure Key Vault to pass secure parameter value during deployment". It explicitly details how to reference secrets from Key Vault in ARM templates to keep sensitive values secure and out of deployment files and history.
You have a project in Azure DevOps named Project1. You implement a Continuous Integration/Continuous Deployment (CI/CD) pipeline that uses PowerShell Desired State Configuration (DSC) to configure the application infrastructure. You need to perform a unit test and an integration test of the configuration before Project1 is deployed. What should you use?
A. the PS Script Analyzer tool
B. the Pester test framework
C. the PS Code Health module
D. the Test-DscConfiguration cmdlet
Explanation:
This question is about testing infrastructure configurations defined as code, specifically PowerShell Desired State Configuration (DSC). The requirement is to perform unit tests (testing logic in isolation) and integration tests (testing interactions between components) on the DSC configuration before deployment. This calls for a dedicated testing framework that can validate both the structure of the configuration and its execution behavior.
Correct Option:
B. the Pester test framework
Pester is the standard and widely adopted testing and mocking framework for PowerShell.
It is specifically designed to write and run unit and integration tests for PowerShell code, including DSC configurations.
You can write Pester tests to validate the logic of your DSC configuration (unit) and to test if the configuration applies correctly to a node (integration), making it the correct tool for a CI/CD pipeline validation step.
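A minimal Pester v5 sketch, assuming a DSC script WebServer.ps1 that defines a configuration named WebServer (both hypothetical names):

```powershell
Describe 'WebServer DSC configuration' {
    BeforeAll {
        . $PSScriptRoot\WebServer.ps1                # dot-source the configuration
        WebServer -OutputPath $TestDrive | Out-Null  # compile it to a MOF
    }
    It 'compiles a MOF for localhost (unit)' {
        Join-Path $TestDrive 'localhost.mof' | Should -Exist
    }
    It 'declares the IIS role in the compiled MOF (unit)' {
        Get-Content (Join-Path $TestDrive 'localhost.mof') -Raw |
            Should -Match 'Web-Server'
    }
}
# In the pipeline, run: Invoke-Pester -CI   (fails the build on test failure)
```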
Incorrect Option:
A. the PS Script Analyzer tool:
This is a static code analysis tool (PSScriptAnalyzer). It checks PowerShell scripts for best practices, style violations, and potential problems but does not execute the code to perform functional unit or integration tests.
C. the PS Code Health module:
This is related to code metrics and analysis, similar to Script Analyzer. It assesses code quality, complexity, and maintainability but does not execute tests to verify the functional correctness or integration behavior of a DSC configuration.
D. the Test-DscConfiguration cmdlet:
This is a DSC-specific cmdlet used to check if a node is currently in the desired state defined by a previously applied configuration. It is an operational compliance check, not a pre-deployment unit or integration test of the configuration script itself.
Reference:
Microsoft Learn - "Testing DSC configurations with Pester". Official guidance recommends using the Pester framework to test DSC configurations by writing unit tests for the configuration's logic and integration tests to verify its application, which aligns with CI/CD best practices.
You are deploying a server application that will run on a Server Core installation of Windows Server 2019. You create an Azure key vault and a secret. You need to use the key vault to secure API secrets for third-party integrations.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Configure RBAC for the key vault
B. Modify the application to access the key vault.
C. Configure a Key Vault access policy.
D. Deploy an Azure Desired State Configuration (DSC) extension.
E. Deploy a virtual machine that uses a system-assigned managed identity.
Explanation:
This scenario requires securing API secrets for an application running on a Windows Server 2019 VM by using Azure Key Vault. The core challenge is enabling the server application to retrieve secrets from Key Vault securely without storing credentials in its configuration. This requires setting up a secure access method for the VM/application and configuring Key Vault permissions.
Correct Option:
The correct actions are B, C, E.
B. Modify the application to access the key vault:
The application code must be updated to use the Azure Key Vault SDK or REST API to retrieve secrets at runtime. This is the end goal—having the app fetch secrets securely from the vault instead of from a local file.
C. Configure a Key Vault access policy:
Key Vault uses access policies (the traditional model) to grant permissions (like Get for secrets) to specific Azure AD identities. You must create a policy that grants the VM's managed identity the necessary permissions to access the secrets.
E. Deploy a virtual machine that uses a system-assigned managed identity:
A system-assigned managed identity gives the VM its own Azure AD identity. This identity is the secure credential the application will use to authenticate to Azure Key Vault, eliminating the need to manage passwords or service principals in the app configuration.
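A minimal sketch of step B from inside the VM, assuming a vault named contoso-kv and a secret named ThirdPartyApiKey (both hypothetical): the application trades its managed identity for a token at the local Instance Metadata Service endpoint, then calls the Key Vault REST API.

```powershell
# 1. Get an access token for Key Vault from the Instance Metadata Service.
#    Only code running on this VM can reach 169.254.169.254.
$tokenResponse = Invoke-RestMethod -Headers @{ Metadata = 'true' } -Uri (
    'http://169.254.169.254/metadata/identity/oauth2/token' +
    '?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net')

# 2. Read the secret; vault and secret names are hypothetical.
$secret = Invoke-RestMethod `
    -Uri 'https://contoso-kv.vault.azure.net/secrets/ThirdPartyApiKey?api-version=7.4' `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }
$secret.value   # the API key, never stored on disk or in config files
```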
Incorrect Option:
A. Configure RBAC for the key vault:
Azure RBAC (role-based access control) is an alternative permission model for Key Vault, but it primarily manages control plane operations (like managing the vault itself). For granting secret access (data plane), the traditional access policy model (Option C) is typically used, especially when integrating with managed identities, unless you explicitly choose to use the RBAC permission model.
D. Deploy an Azure Desired State Configuration (DSC) extension:
Azure DSC is used for configuring and maintaining the desired state of the VM's OS and software. It is not directly involved in the process of enabling an application to retrieve runtime secrets from Key Vault. The secret retrieval is an application runtime concern, not a server configuration state concern.
Reference:
Microsoft Learn - "Use a Windows VM system-assigned managed identity to access Azure Key Vault". This tutorial outlines the exact steps: enabling a system-assigned identity for the VM (E), granting it access via a Key Vault access policy (C), and then using the identity within an application to retrieve a secret (B).
You manage build pipelines and deployment pipelines by using Azure DevOps. Your company has a team of 500 developers. New members are added continually to the team. You need to automate the management of users and licenses whenever possible. Which task must you perform manually?
A. modifying group memberships
B. procuring licenses
C. adding users
D. assigning entitlements
Explanation:
This question addresses Azure DevOps user lifecycle management and automation capabilities. While Azure DevOps supports automation for many user management tasks via Azure AD groups and REST APIs, one critical aspect is inherently a procurement and financial process outside the scope of technical automation tools. The task of obtaining the actual licenses (purchasing/subscribing) must be handled manually or through a separate procurement system.
Correct Option:
B. procuring licenses
Procuring licenses is a business and financial purchasing activity. It involves budget approval, ordering, and managing a subscription with Microsoft.
This process cannot be automated by Azure DevOps tasks or pipelines, as it requires interaction with sales, finance, and the Microsoft commerce platform.
Once licenses are procured, their assignment and user management can be automated.
Incorrect Option:
A. modifying group memberships:
This can be fully automated. By syncing Azure DevOps groups with Azure AD groups, user additions/removals in Azure AD automatically update memberships. PowerShell scripts or the Azure DevOps REST API can also manage groups programmatically.
C. adding users:
This can be automated. Users can be added in bulk via Azure AD (which syncs to Azure DevOps) using CSV imports, PowerShell, Microsoft Graph API, or automated provisioning tools.
D. assigning entitlements:
This can be automated. Entitlements (licenses) can be assigned automatically based on Azure AD group membership using group-based licensing. When a user is added to a synced Azure AD group, their Azure DevOps access level (entitlement) is assigned automatically.
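For example, adding a user with Basic access can be scripted against the User Entitlements REST API; the organization name, PAT variable, and api-version below are assumptions:

```powershell
$org = 'contoso'                                  # hypothetical organization
$pat = $env:AZDO_PAT                              # a PAT with member-entitlement scope
$headers = @{ Authorization = 'Basic ' +
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$body = @{
    accessLevel = @{ accountLicenseType = 'express' }   # 'express' = Basic access
    user        = @{ principalName = 'new.dev@contoso.com'; subjectKind = 'user' }
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post -ContentType 'application/json' -Body $body -Headers $headers `
    -Uri "https://vsaex.dev.azure.com/$org/_apis/userentitlements?api-version=7.1-preview.3"
```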
Reference:
Microsoft Learn - "About access levels" and "Large-scale user management." Documentation explains that while you can automate user addition and license assignment via group-based licensing, you must first procure the licenses (e.g., Basic, Test Plans) for your organization through the Microsoft licensing agreement or Azure subscription.
You have a Microsoft ASP.NET Core web app in Azure that is accessed worldwide. You need to run a URL ping test once every five minutes and create an alert when the web app is unavailable from specific Azure regions. The solution must minimize development time. What should you do?
A. Create an Azure Application Insights availability test and alert
B. Create an Azure Service Health alert for the specific regions.
C. Create an Azure Monitor Availability metric and alert
D. Write an Azure function and deploy the function to the specific regions.
Explanation:
This scenario requires monitoring the availability of a global web app by performing regular synthetic tests (URL pings) from specific geographic regions and alerting on failure. The key requirements are multi-geo URL ping tests and minimizing development time. The solution must be a managed service requiring no custom code.
Correct Option:
A. Create an Azure Application Insights availability test and alert
Application Insights provides a built-in availability test feature (also called URL ping test or standard test).
It allows you to create a test that requests a URL from multiple global test locations at a defined frequency (as low as every 5 minutes).
You can select specific regions as test sources. It automatically detects failures and can trigger alerts via Azure Monitor, requiring zero development effort.
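Availability tests are usually created in the portal, but they can also be deployed as a Microsoft.Insights/webtests resource. The following is a rough sketch only (resource names, test locations, and API version are assumptions) of a standard test that runs every five minutes (Frequency = 300 seconds):

```powershell
# The hidden-link tag ties the test to your App Insights resource;
# <sub-id>, <rg>, and <app-insights-name> are placeholders.
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Insights/webtests",
      "apiVersion": "2022-06-15",
      "name": "homepage-availability",
      "location": "westeurope",
      "tags": {
        "hidden-link:/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/components/<app-insights-name>": "Resource"
      },
      "properties": {
        "Name": "homepage-availability",
        "SyntheticMonitorId": "homepage-availability",
        "Kind": "standard",
        "Enabled": true,
        "Frequency": 300,
        "Locations": [ { "Id": "emea-nl-ams-azr" }, { "Id": "apac-sg-sin-azr" } ],
        "Request": { "RequestUrl": "https://contoso.azurewebsites.net", "HttpVerb": "GET" },
        "ValidationRules": { "ExpectedHttpStatusCode": 200 }
      }
    }
  ]
}
'@
Set-Content -Path webtest.json -Value $template
New-AzResourceGroupDeployment -ResourceGroupName monitoring-rg -TemplateFile .\webtest.json
```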
Incorrect Option:
B. Create an Azure Service Health alert for the specific regions:
Service Health tracks the status of Azure services and platform issues (e.g., outages). It does not monitor the availability or performance of your specific application endpoint from different geographies.
C. Create an Azure Monitor Availability metric and alert:
Azure Monitor itself does not have a native "Availability metric" for web apps that performs external multi-region ping tests. The platform metrics for App Service (like HTTP 5xx errors) measure internal server errors, not end-to-end availability from global client perspectives.
D. Write an Azure function and deploy the function to the specific regions:
While technically possible, this involves significant development time (writing, deploying, and maintaining code in multiple regions) and building a custom alerting mechanism. This violates the requirement to minimize development effort.
Reference:
Microsoft Learn - "Monitor availability and responsiveness of any website". This documentation details how to set up Application Insights availability tests to test your application from multiple points around the world and configure alerts, meeting all specified requirements.
Your company has a project in Azure DevOps for a new web application. The company identifies security as one of the highest priorities. You need to recommend a solution to minimize the likelihood that infrastructure credentials will be leaked. What should you recommend?
A. Add a Run Inline Azure PowerShell task to the pipeline
B. Add a PowerShell task to the pipeline and run Set-AzureKeyVaultSecret
C. Add an Azure Key Vault task to the pipeline
D. Add Azure Key Vault references to Azure Resource Manager templates.
Explanation:
This question focuses on preventing credential leakage in a CI/CD pipeline. The goal is to minimize the likelihood that sensitive infrastructure credentials (like service principal secrets or connection strings) are exposed in logs or code. The best practice is to avoid handling secrets directly in tasks, and instead use a secure secret store where the pipeline retrieves them as needed without exposing their values.
Correct Option:
D. Add Azure Key Vault references to Azure Resource Manager templates.
This method keeps secrets entirely out of the pipeline. The ARM template references a secret stored in Azure Key Vault using a special syntax (e.g., "reference": {"keyVault": {...}}).
During deployment, the Azure Resource Manager service retrieves the secret directly from Key Vault. The secret value is never passed through the pipeline tasks, logs, or parameter files, eliminating the primary leakage points.
Incorrect Option:
A. Add a Run Inline Azure PowerShell task to the pipeline:
Running inline PowerShell often involves writing secrets directly into the task script or as plaintext variables, which is a high-risk practice as they can be exposed in logs and are stored in the pipeline definition.
B. Add a PowerShell task to the pipeline and run Set-AzureKeyVaultSecret:
The Set-AzureKeyVaultSecret cmdlet is used to insert or update a secret in Key Vault. Using this in a pipeline would require having the secret value available to push, which defeats the purpose. It manages the secret in the vault but does not securely use it for deployment.
C. Add an Azure Key Vault task to the pipeline:
While this task can securely retrieve secrets from Key Vault and map them to pipeline variables, the secrets are still exposed as variables within the pipeline context. If accessed incorrectly (e.g., echoed in a log), they could be leaked. The ARM template reference method is more secure as the secret never enters the pipeline's variable space.
Reference:
Microsoft Learn - "Use Azure Key Vault to pass secure parameter value during deployment". This details the most secure pattern: using Azure Key Vault linked templates where secret values are never revealed in deployment logs, command outputs, or templates, which directly minimizes credential leakage.