AZ-400 Practice Test Questions

488 Questions


Topic 4: Mix Questions Set

You are developing an application. The application source has multiple branches. You make several changes to a branch used for experimentation. You need to update the main branch to capture the changes made to the experimentation branch and override the history of the Git repository. Which Git option should you use?


A. Rebase


B. Fetch


C. Merge


D. Push





A.
  Rebase

Explanation:
This question involves integrating changes from one Git branch into another. The critical requirement is to update the main branch while overriding the history. This points to a need for a linear project history where the experimental changes appear as if they were developed directly on main, discarding the original merge commits or branch divergence. This is a key characteristic of a rebase operation.

Correct Option:

A. Rebase
Rebase is the Git operation designed to rewrite history. It takes commits from the experimentation branch and replays them on top of the current tip of the main branch.

This results in a linear project history, making it appear as if the experimental work was done sequentially on main. It effectively overrides the previous history of the main branch with a new series of commits, fulfilling the requirement precisely.
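As a minimal command-line sketch (branch names are illustrative), rewriting main with the experimentation work might look like this:

```bash
# Replay the experimentation commits on top of main's current tip.
git checkout experimentation
git rebase main

# Move main forward to the rebased, linear history.
git checkout main
git merge --ff-only experimentation

# Only needed if main was already published and its history is being rewritten.
git push --force-with-lease origin main
```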

Incorrect Option:

B. Fetch:
This command only downloads commits and references from a remote repository to your local repository. It does not integrate changes or modify your branch history.

C. Merge:
This command integrates changes from one branch into another (e.g., main), but it creates a new merge commit that ties the two histories together. It preserves the history of both branches rather than overriding it.

D. Push:
This command uploads your local commits to a remote repository. It updates the remote branch but does not itself integrate changes between different branches or alter the repository's commit history.

Reference:
Microsoft Learn - "Resolve merge conflicts in Visual Studio" (covers Git concepts). The documentation explains that rebasing rewrites the commit history by moving a branch to a new base commit, creating a linear progression, which contrasts with merging, which preserves the existing branch structure.

Your company is building a new solution in Java. The company currently uses a SonarQube server to analyze the code of .NET solutions. You need to analyze and monitor the code quality of the Java solution. Which task types should you add to the build pipeline?


A. Octopus


B. Chef


C. Maven


D. Grunt





C.
  Maven

Explanation:
This question asks about integrating code quality analysis into a CI/CD pipeline for a Java project, when the company already uses a SonarQube server for .NET analysis. SonarQube is a popular tool for continuous inspection of code quality, supporting multiple languages including Java. The question specifically asks which task type to add to the build pipeline to connect it to the existing SonarQube analysis platform.

Correct Option:

C. Maven
Maven is the correct answer because it's the standard build automation tool for Java projects.

SonarQube has a Maven plugin (sonar-maven-plugin) that can be added to the pom.xml file, allowing Maven to run SonarQube analysis as part of the build process.

In Azure DevOps, you would use a Maven task in the pipeline that executes goals like mvn clean verify sonar:sonar to build the Java solution and send analysis results to the SonarQube server.
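A minimal YAML sketch of these pipeline steps, assuming the SonarQube extension is installed in the organization and a service connection named SonarQubeServer already exists (task versions and input names may differ slightly by extension version):

```yaml
steps:
# Configure the connection to the existing SonarQube server; for Maven builds
# the analysis is driven by the Maven goal, so scannerMode is 'Other'.
- task: SonarQubePrepare@5
  inputs:
    SonarQube: 'SonarQubeServer'
    scannerMode: 'Other'

# Build the Java solution and run the SonarQube analysis goal.
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'clean verify sonar:sonar'

# Publish the quality gate result back to the pipeline summary.
- task: SonarQubePublish@5
  inputs:
    pollingTimeoutSec: '300'
```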

Incorrect Option:

A. Octopus:
This is a deployment automation tool (Octopus Deploy), not a build tool or code analysis tool. It's used for deploying applications to various environments, not for analyzing code quality.

B. Chef:
This is an infrastructure as code and configuration management tool, used for automating server configuration. It's unrelated to Java code analysis or build pipelines for Java applications.

D. Grunt:
This is a JavaScript task runner used primarily for front-end web development (JavaScript, CSS). It's not used for Java project builds or integration with SonarQube analysis.

Reference:
Microsoft Learn - "Integrate SonarQube with Azure Pipelines" and SonarQube documentation for Java analysis. The standard approach for Java projects is to use Maven or Gradle with their respective SonarQube plugins to perform code analysis and send results to a SonarQube server.

You have a project in Azure DevOps that uses an Azure Boards board and stores code in a GitHub repository. The repository contains a file named README.md. You need to ensure that README.md includes the status of the work items on the board. The solution must minimize administrative effort. What should you do first?


A. Enable GitHub annotations for the board.


B. Install the Azure Boards app for GitHub.


C. Create a GitHub personal access token (PAT).


D. Select Allow anonymous users to access the status badge.





A.
  Enable GitHub annotations for the board.

Explanation:
This question asks about displaying Azure Boards work item status in a GitHub README.md file. The solution that minimizes administrative effort would be using Azure Boards' built-in status badge feature. The key is understanding the workflow: first you need to generate the badge, then configure its access. The question asks what you should do first to enable this functionality.

Correct Option:

A. Enable GitHub annotations for the board.
This is the correct first step. In Azure Boards, "GitHub annotations" (also called status badges) must be enabled from the board settings before you can generate badge URLs.

Once enabled, you can copy the markdown snippet containing the badge URL and paste it into your README.md file.

This is a built-in feature that requires minimal setup - no external apps or tokens needed initially.
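As an illustrative sketch only (the exact badge URL is generated for you in the board settings; the organization, project, and badge identifiers below are placeholders), the snippet pasted into README.md looks roughly like this:

```markdown
<!-- Azure Boards status badge: copy the real URL from Board settings > Status badge -->
![Board status](https://dev.azure.com/{organization}/{project}/_apis/work/boardbadge/{badge-id})
```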

Incorrect Option:

B. Install the Azure Boards app for GitHub:
This app helps link GitHub commits and PRs to Azure Boards work items, but doesn't create status badges for README files. It's for a different integration purpose.

C. Create a GitHub personal access token (PAT):
A PAT is not required for the basic status badge functionality. This would add unnecessary administrative overhead.

D. Select Allow anonymous users to access the status badge:
This is a configuration option that appears after you've enabled annotations and are configuring the badge settings. It's not the first step - you need to enable the feature first before you can configure its access permissions.

Reference:
Microsoft Learn - "Enable status badge (annotations) for Azure Boards". The documentation states that you first need to enable annotations from the board settings to generate badge URLs that can be embedded in markdown files like README.md.

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has a project in Azure DevOps for a new web application. You need to ensure that when code is checked in, a build runs automatically.
Solution: From the Continuous deployment trigger settings of the release pipeline, you enable the Pull request trigger setting.
Does this meet the goal?


A. Yes


B. No





B.
  No

Explanation:
This question tests understanding of CI/CD trigger types. The goal is to have a build run automatically when code is checked in. The solution proposes enabling a release pipeline trigger instead of configuring the correct build pipeline trigger. Builds and releases are different stages with separate triggers in Azure DevOps.

Correct Option:

B. No
The solution does not meet the goal because it configures the wrong pipeline type.

Release pipelines handle deployment to environments, not code compilation and testing.

The Pull request trigger in a release pipeline controls when a release is created (e.g., after a PR completes), not when a build runs on code check-in.

To automatically run a build on code check-in, you must enable the Continuous Integration (CI) trigger in the build pipeline (or YAML pipeline's build stage), not in the release pipeline.
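For example, in a YAML pipeline a minimal CI trigger might look like this (branch names are illustrative):

```yaml
# Any push (check-in) to the listed branches queues a build automatically.
trigger:
  branches:
    include:
    - main
    - develop
```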

Reference:
Microsoft Learn - "Build triggers" vs "Release triggers". The documentation clarifies that Continuous Integration triggers are configured on build pipelines to run builds on code commits, while release triggers (like pull request or continuous deployment triggers) are configured on release pipelines to control deployments.

You have a multi-tier application. The front end of the application is hosted in Azure App Service. You need to identify the average load times of the application pages. What should you use?


A. Azure Application Insights


B. the activity log of the App Service


C. the diagnostics logs of the App Service


D. Azure Advisor





A.
  Azure Application Insights

Explanation:
This question focuses on monitoring end-user experience and performance for a web application. The requirement is to measure average load times of application pages, which requires capturing timing data from actual user browsers as they interact with the application. This is a specific type of client-side performance monitoring that requires specialized application performance monitoring (APM) capabilities.

Correct Option:

A. Azure Application Insights
Application Insights is a comprehensive Application Performance Management (APM) service.

When enabled for an App Service, it automatically collects browser timing data including page load performance, AJAX call durations, and user session details.

It provides ready-made reports and analytics for average page load times, performance trends, and user experience metrics directly in the Azure portal.
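For instance, a minimal Log Analytics (KQL) sketch against the telemetry Application Insights collects might look like this (the pageViews table and duration column are standard; the time window is illustrative):

```kusto
// Average page load time per page over the last 24 hours.
pageViews
| where timestamp > ago(24h)
| summarize avg(duration) by name
| order by avg_duration desc
```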

Incorrect Option:

B. the activity log of the App Service:
The Activity Log is an administrative log that records management plane operations (like scaling the service, changing configuration). It does not contain any application performance or user experience data.

C. the diagnostics logs of the App Service:
These logs capture server-side information like web server logs (IIS), application stdout/stderr, and detailed error messages. While useful for debugging server issues, they do not measure client-side page load times from user browsers.

D. Azure Advisor:
This is an optimization service that provides recommendations for cost optimization, security, reliability, and performance of Azure resources. It analyzes resource configuration and usage patterns but does not monitor actual application page load times or end-user experience metrics.

Reference:
Microsoft Learn - "Monitor application performance with Azure Application Insights". The documentation explains that Application Insights automatically collects browser and performance data from web pages, including page view load times, AJAX calls, and browser metrics, enabling analysis of user experience and performance trends.

You use Azure DevOps processes to build and deploy code. You need to compare how much time is spent troubleshooting issues found during development and how much time is spent troubleshooting issues found in released code. Which KPI should you use?


A. defect escape rate


B. unplanned work rate


C. defect rate


D. rework rate





A.
  defect escape rate

Explanation:
This question asks about measuring the effectiveness of quality practices by comparing where issues are discovered in the development lifecycle. The goal is to track the proportion of defects caught during development versus those that escape to production. This requires a specific Key Performance Indicator (KPI) that focuses on defect discovery location and timing rather than just defect quantity or rework volume.

Correct Option:

A. defect escape rate
Defect escape rate is specifically designed to measure what percentage of defects are found after release versus during development.

It directly answers the question by comparing issues discovered post-release to the total defects found, highlighting the effectiveness of pre-release testing and quality gates.

A lower escape rate indicates more issues are caught during development, which is generally desirable as it reduces production troubleshooting time and cost.
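For example, if a team finds 40 defects during development and customers report 10 more after release, the defect escape rate is 10 / (40 + 10) = 20%.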

Incorrect Option:

B. unplanned work rate:
This measures the percentage of work that wasn't part of the original plan or sprint commitment. While defects can contribute to unplanned work, this metric doesn't specifically distinguish between development-time and post-release defect troubleshooting.

C. defect rate:
This measures the overall number of defects found over time, typically per story point or line of code. It doesn't differentiate where defects were discovered (development vs. production), so it can't compare troubleshooting time between these phases.

D. rework rate:
This measures the percentage of work that needs to be redone or corrected. While rework includes fixing defects, it also encompasses other corrections and doesn't specifically track whether the rework occurred during development or after release.

Reference:
Microsoft Learn - "DevOps KPIs" and Azure DevOps documentation on metrics. The defect escape rate (or bug escape rate) is a standard DevOps KPI that measures "the percentage of defects discovered by customers (or in production) compared to the total defects discovered."

Your company plans to use an agile approach to software development. You need to recommend an application to provide communication between members of the development team who work in locations around the world. The application must meet the following requirements:

  • Provide the ability to isolate the members of different project teams into separate communication channels and to keep a history of the chats within those channels.
  • Be available on Windows 10, Mac OS, iOS, and Android operating systems.
  • Provide the ability to add external contractors and suppliers to projects.
  • Integrate directly with Azure DevOps.
What should you recommend?


A. Octopus


B. Bamboo


C. Microsoft Project


D. Slack





D.
  Slack

Explanation:
This question requires selecting a communication platform that supports global agile teams with specific collaboration needs. The requirements include: separate team channels with chat history, multi-platform availability (Windows, Mac, iOS, Android), ability to add external users, and direct integration with Azure DevOps. This describes a modern team messaging/chat application with integration capabilities.

Correct Option:

D. Slack
Slack is a widely used team collaboration application that meets all specified requirements.

It organizes conversations into channels (public, private, shared) for different teams/projects, maintaining complete chat history.

It offers native applications for Windows, Mac, iOS, and Android.

It supports adding external guests (contractors, suppliers) to specific channels.

It has direct Azure DevOps integration through apps/marketplace connectors to receive notifications, create work items, and link commits.

Incorrect Option:

A. Octopus:
Octopus Deploy is a release automation and deployment tool, not a team communication platform. It manages application deployments but lacks chat functionality, multi-platform client apps, and external user collaboration features.

B. Bamboo:
Bamboo is Atlassian's CI/CD server (similar to Azure Pipelines). It's a build automation tool, not a communication application. It doesn't provide chat channels, multi-platform clients, or direct team communication features.

C. Microsoft Project:
This is a project management and scheduling tool focused on Gantt charts, resource allocation, and timelines. While it has collaboration features, it is not designed for real-time team chat, doesn't provide isolated communication channels, and lacks the same level of Azure DevOps integration as modern chat platforms.

Reference:
Microsoft Learn - "Integrate with other services" and Slack's Azure DevOps integration documentation. Slack is commonly recommended for agile team collaboration with Azure DevOps integration, supporting channels, multi-platform access, and external collaboration.

You are creating a dashboard in Azure Boards. You need to visualize the time from when work starts on a work item until the work item is closed. Which type of widget should you use?


A. cycle time


B. velocity


C. cumulative flow


D. lead time





A.
  cycle time

Explanation:
This question tests understanding of key agile metrics and their visualizations in Azure Boards. The requirement is to measure the time from when work starts on a work item until it is closed. This specifically describes tracking the duration a work item spends in active development states through completion. Azure Boards provides widgets to display these metrics on dashboards.

Correct Option:

A. cycle time
Cycle time measures the elapsed time from when work starts on a work item (when it enters an In Progress or Active state) until the work item is closed. This matches the requirement exactly.

Azure Boards provides a Cycle Time widget that you can add to a dashboard; it plots completed work items as a scatter chart and shows the rolling average cycle time, so the team can see how long items take from start of work to closure.

Lead time, by contrast, starts the clock when the work item is created, so it also includes any time the item waits in the backlog before anyone starts working on it.

Incorrect Option:

B. velocity:
This measures how much work (story points) a team completes per sprint, used for forecasting. It doesn't track time durations for individual work items.

C. cumulative flow:
This shows work item counts in each state over time, helping identify bottlenecks. While useful for flow analysis, it doesn't directly visualize the time duration from work start to closure for individual items.

D. lead time:
As explained, this tracks from work item creation to closure, not specifically from when work starts. If work sits in the backlog for weeks before being started, lead time would include that waiting period.

Reference:
Microsoft Learn - "Configure and monitor metrics" and "Cycle time and lead time guidance". The documentation explains that cycle time measures the time from when work starts (enters an "In Progress" or "Active" state) to when it's done, while lead time measures from work item creation to completion. The Cycle Time widget in Azure Boards displays this metric.

You have an Azure web app named webapp1 that uses the .NET Core runtime stack. You have an Azure Application Insights resource named AppInsights1. Webapp1 sends telemetry data to AppInsights1. You need to ensure that webapp1 sends the telemetry data at a fixed sampling rate. What should you do?


A. From the code repository of webapp1, modify the ApplicationInsights.config file.


B. From the code repository of webapp1, modify the Startup.cs file.


C. From AppInsights1, modify the Usage and estimated costs settings.


D. From AppInsights1, configure the Continuous export settings.





B.
  From the code repository of webapp1, modify the Startup.cs file.

Explanation:
This question involves configuring Application Insights telemetry sampling for a .NET Core web application. Sampling controls the volume of telemetry data sent to Application Insights to reduce costs and network traffic while preserving statistical accuracy. The requirement is to set a fixed sampling rate, which must be configured at the application code level in .NET Core applications, not through the portal settings.

Correct Option:

B. From the code repository of webapp1, modify the Startup.cs file.
In .NET Core applications, Application Insights configuration is done programmatically in code, not via XML configuration files.

The Startup.cs file's ConfigureServices method is where you configure services, including Application Insights telemetry with custom sampling settings.

You would add code that disables adaptive sampling in the ApplicationInsightsServiceOptions and registers fixed-rate sampling on the telemetry processor chain, as sketched below.
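A minimal sketch, assuming the Microsoft.ApplicationInsights.AspNetCore SDK and an illustrative fixed rate of 25%:

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Turn off adaptive sampling so a fixed rate can be applied instead.
    services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
    {
        EnableAdaptiveSampling = false
    });

    // Keep a fixed percentage (~25%) of telemetry items.
    services.Configure<TelemetryConfiguration>(config =>
    {
        var builder = config.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
        builder.UseSampling(25.0);
        builder.Build();
    });
}
```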

Incorrect Option:

A. From the code repository of webapp1, modify the ApplicationInsights.config file:
This XML configuration file is used for ASP.NET (Full Framework) applications, not for .NET Core applications. .NET Core uses appsettings.json and code-based configuration in Startup.cs.

C. From AppInsights1, modify the Usage and estimated costs settings:
While you can enable or disable sampling from the portal's "Usage and estimated costs" blade, this configures adaptive sampling (which adjusts rate dynamically based on traffic), not fixed sampling. Also, portal settings affect data ingestion, not how the app sends data.

D. From AppInsights1, configure the Continuous export settings:
Continuous export is used to export telemetry data from Application Insights to Azure Storage for long-term retention or external processing. It has no relationship to controlling the sampling rate of data being sent from the application.

Reference:
Microsoft Learn - "Application Insights sampling for ASP.NET Core applications". The documentation specifies that for .NET Core apps, sampling must be configured in code during service configuration, typically in Startup.cs or Program.cs, using methods like AddApplicationInsightsTelemetry with customized options.

You have an Azure virtual machine that is monitored by using Azure Monitor. The virtual machine has the Azure Log Analytics agent installed. You plan to deploy the Service Map solution from Azure Marketplace. What should you deploy to the virtual machine to support the Service Map solution?


A. the Telegraf agent


B. the Azure Monitor agent


C. the Dependency agent


D. the Windows Azure diagnostics extension (WAD)





C.
  the Dependency agent

Explanation:
This question is about enabling the Service Map feature in Azure Monitor. Service Map automatically discovers application components on Windows and Linux systems and maps communication between them. While the Log Analytics agent (now being replaced by Azure Monitor agent) collects performance and log data, Service Map requires an additional agent to collect dependency data about network connections and processes.

Correct Option:

C. the Dependency agent
The Dependency agent is specifically required for the Service Map solution.

It collects discovered data about processes running on the machine and inbound/outbound network connections, which Service Map uses to build its visual maps and dependency diagrams.

The Dependency agent must be installed alongside the Log Analytics agent (or Azure Monitor agent) on each virtual machine you want to include in Service Map.
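A minimal Azure CLI sketch for installing it as a VM extension (resource group and VM names are illustrative; use DependencyAgentLinux for Linux machines):

```bash
az vm extension set \
  --resource-group rg-monitoring \
  --vm-name vm-web01 \
  --publisher Microsoft.Azure.Monitoring.DependencyAgent \
  --name DependencyAgentWindows
```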

Incorrect Option:

A. the Telegraf agent:
Telegraf is an agent for collecting and reporting metrics, primarily used with InfluxDB in time-series database scenarios. It is not used by Azure Monitor's Service Map solution.

B. the Azure Monitor agent:
While this is the newer unified agent for collecting monitoring data in Azure, the Service Map solution still requires the separate Dependency agent to be installed alongside it. The Azure Monitor agent alone does not provide the dependency mapping data.

D. the Windows Azure diagnostics extension (WAD):
This extension is used to collect diagnostic data from Azure VMs and send it to Azure Storage, Event Hubs, or Azure Monitor metrics. It does not provide the process and network dependency data required for Service Map functionality.

Reference:
Microsoft Learn - "Service Map documentation" - "Enable Service Map" section explicitly states: "Service Map requires two agents on each machine: the Log Analytics agent (formerly called MMA) and the Dependency agent." The Dependency agent is mandatory for Service Map functionality.

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company uses Azure DevOps to manage the build and release processes for applications.
You use a Git repository for application source control.
You need to implement a pull request strategy that reduces the history volume in the master branch.
Solution: You implement a pull request strategy that uses a three-way merge.
Does this meet the goal?


A. Yes


B. No





B.
  No

Explanation:
This question tests understanding of Git pull request merge strategies and their impact on repository history. The goal is to reduce history volume in the master branch. Different merge strategies create different types of commit history, with some creating more "merge commit noise" than others. The proposed solution suggests using a three-way merge, which is actually a standard merge operation that preserves full history.

Correct Option:

B. No
A three-way merge (typically just called a "merge" in Git) creates a new merge commit that ties together the histories of both branches.

This preserves the complete history of both the feature branch and the master branch, which actually increases the history volume rather than reducing it.

Each merge adds another commit to the history, maintaining all intermediate commits from the feature branch.

Alternative that would meet the goal:
To reduce history volume, you would need to use squash merging (which combines all feature branch commits into a single commit before merging) or rebase and merge (which replays feature branch commits onto the tip of master, creating a linear history). These strategies create cleaner, more compact histories in the master branch.
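As a minimal command-line illustration of the squash alternative (branch names are illustrative; in an Azure Repos pull request you would simply choose the Squash commit merge type):

```bash
git checkout master
# Stage all changes from the feature branch without carrying over its commits.
git merge --squash feature/new-login
# A single commit records the whole pull request on master.
git commit -m "Add new login flow"
```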

Reference:
Microsoft Learn - "Configure branch policies" and Azure DevOps documentation on pull request merge options. The documentation explains that three-way merge preserves branch topology and complete history, while squash merge creates a single commit from the pull request to keep a cleaner history.

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed. You have a policy stating that approvals must occur within eight hours. You discover that deployments fail if the approvals take longer than two hours. You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Timeout setting for pre-deployment approvals.
Does this meet the goal?


A. Yes


B. No





B.
  No

Explanation:
This question addresses configuring approval timeouts in Azure DevOps release pipelines. The problem is that deployments are currently failing after 2 hours, but the policy requires them to fail only after 8 hours. The solution proposes modifying the Timeout setting for pre-deployment approvals, which seems like the correct approach. However, there's a critical nuance about where this setting is configured that determines if the solution works.

Correct Option:

B. No
While the Timeout setting for approvals controls how long an approval can be pending, the specific issue described might be related to agent job timeout rather than approval timeout.

In Azure DevOps, if the agent job itself has a timeout shorter than the approval wait time, the entire release job (including the approval wait period) will fail when the agent job times out, regardless of the approval timeout setting.

To properly meet the goal, you may need to:

  • Increase the approval timeout in the pre-deployment conditions (as suggested).

  • Ensure the agent job timeout in the release pipeline's environment settings is longer than eight hours, so the job does not time out while the release is waiting for approval.

Reference:
Microsoft Learn - "Configure approvals and gates" documentation explains that while approval timeouts can be set, the release job timeout can also affect deployment success. If an agent job times out while waiting for approval, the deployment fails regardless of approval timeout settings.

