PL-300 Practice Test Questions

290 Questions


Topic 2, Contoso Ltd, Case Study

   

Overview
This is a case study. Case studies are not timed separately. You can use as much exam
time as you would like to complete each case. However, there may be additional case
studies and sections on this exam. You must manage your time to ensure that you are able
to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the
left pane to explore the content of the case study before you answer the questions. Clicking
these buttons displays information such as business requirements, existing environment
and problem statements. If the case study has an All Information tab, note that the
information displayed is identical to the information displayed on the subsequent tabs.
When you are ready to answer a question, click the Question button to return to the
question.
Existing Environment
Contoso, Ltd. is a manufacturing company that produces outdoor equipment. Contoso has
quarterly board meetings for which financial analysts manually prepare Microsoft Excel
reports, including profit and loss statements for each of the company's four business units,
a company balance sheet, and net income projections for the next quarter.
Data and Sources
Data for the reports comes from three sources. Detailed revenue, cost, and expense data
comes from an Azure SQL database. Summary balance sheet data comes from Microsoft
Dynamics 365 Business Central. The balance sheet data is not related to the profit and
loss results, other than that both relate to dates.
Monthly revenue and expense projections for the next quarter come from a Microsoft
SharePoint Online list. Quarterly projections relate to the profit and loss results by using the
following shared dimensions: date, business unit, department, and product category.
Net Income Projection Data
Net income projection data is stored in a SharePoint Online list named Projections in the
format shown in the following table.

Which two types of visualizations can be used in the balance sheet reports to meet the reporting goals? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.


A. a line chart that shows balances by quarter filtered to account categories that are long-term liabilities.


B. a clustered column chart that shows balances by date (x-axis) and account category (legend) without filters.


C. a clustered column chart that shows balances by quarter filtered to account categories that are long-term liabilities.


D. a pie chart that shows balances by account category without filters.


E. a ribbon chart that shows balances by quarter and accounts in the legend.





A.
  a line chart that shows balances by quarter filtered to account categories that are long-term liabilities.

C.
  a clustered column chart that shows balances by quarter filtered to account categories that are long-term liabilities.

Explanation:
The reporting goal for a balance sheet analysis, especially concerning long-term liabilities, is to show trends over time. Stakeholders need to see how these specific account balances have changed from one period to the next (e.g., quarter-to-quarter) to analyze debt repayment schedules, financial health, and leverage.
Let's analyze why these two visualizations are effective and why the others are not optimal for this specific goal.

Options A and C are Correct:
Both of these visualizations share the two critical components needed to meet the reporting goal:
They are filtered to the specific account categories of interest:
"long-term liabilities." This focuses the analysis on the relevant data.
They show data by quarter:
This provides the necessary time-series analysis to observe trends, increases, decreases, and patterns over multiple periods.

A. Line Chart:
This is the ideal visualization for showing trends over time. The connecting lines make it easy for the eye to follow the progression of each account category's balance across quarters.

C. Clustered Column Chart:
This is also very effective for comparing the values of different long-term liability accounts side-by-side for each quarter and for seeing the changes from one quarter to the next.

Why the Other Options Are Incorrect:
B. a clustered column chart that shows balances by date and account category without filters.
Incorrect: Showing the entire, unfiltered balance sheet (all assets, liabilities, and equity) in a single column chart by date would be far too cluttered and complex. It would be impossible to extract meaningful insights about long-term liabilities specifically. The lack of filtering makes this visualization unsuitable for the focused reporting goal.

D. a pie chart that shows balances by account category without filters.
Incorrect: A pie chart shows the proportion of parts to a whole at a single point in time. It is completely static and cannot show trends or changes over multiple quarters. It also suffers from the same problem as option B: it shows the entire balance sheet without focus, making it difficult to analyze the specific component of long-term liabilities.

E. a ribbon chart that shows balances by quarter and accounts in the legend.
Incorrect: While a ribbon chart can show data over time, its primary strength is illustrating how the ranking of categories changes. It is excellent for seeing which category is the "top" category in each period. For balance sheet amounts, the focus is on the absolute value and trend of specific accounts (like long-term liabilities), not on their changing rank against other accounts like cash or equity. It is a less effective and more specialized choice compared to a standard line or column chart for this scenario.

Reference:
Core Concept:
This question tests the knowledge of selecting appropriate visualizations based on the analytical goal. The key concepts are: time-series analysis requires visuals that show data across a time axis (e.g., line charts, column charts), and filtering and focus are necessary to avoid clutter and present clear, actionable insights for a specific business question.

You need to calculate the last day of the month in the balance sheet data to ensure that you can relate the balance sheet data to the Date table. Which type of calculation and which formula should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.







Explanation:
The goal is to create a date key that can form a proper relationship with a Date table. This is a data preparation task that should be done as early in the process as possible.

1. Why "An M custom column" is Correct:
Data Preparation vs. Data Modeling:
This task is about preparing the source data. Power Query (which uses the M language) is the ETL tool in Power BI designed for this exact purpose—data transformation and shaping before it is loaded into the data model.
Efficiency and Best Practice:
Creating this column in Power Query (M) is a one-time operation during data refresh. The resulting date is stored in the model, making relationships and filtering very efficient.
Why not a DAX calculated column? While a DAX calculated column could achieve a similar result, it is calculated after the data is loaded into the model and consumes precious VertiPaq (in-memory) engine resources. For a static, structural column like a date key, it is a best practice to create it in Power Query. DAX is better suited for dynamic, context-aware calculations (measures) or columns that depend on other model relationships.
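For contrast, a DAX calculated column producing the same month-end date might look like the following sketch (shown only for comparison with the rejected alternative; the MonthEndDate name is illustrative):

    MonthEndDate =
    EOMONTH ( DATE ( BalanceSheet[Year], BalanceSheet[Month], 1 ), 0 )

EOMONTH with an offset of 0 returns the last day of the same month, mirroring the M approach discussed next.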

2. Why Date.EndOfMonth(#date([Year], [Month], 1)) is Correct:
This M language formula breaks down into two logical steps:
#date([Year], [Month], 1):
This function constructs a valid Date value from the existing [Year] and [Month] columns. It uses the first day (1) of the month because we need a valid starting point. For example, for Year=2023 and Month=10, it creates the date October 1, 2023.
Date.EndOfMonth(...):
This function then takes that first-day date and returns the last day of its respective month. Continuing the example, it would take October 1, 2023 and return October 31, 2023.
This is the perfect result for relating to a Date table, as it gives a single, specific date for each Year/Month combination in your BalanceSheet data.

Why the Other Formulas Are Incorrect:
Date.EndOfQuarter(#date([Year], [Month], 1)):
This is incorrect because it returns the last day of the quarter, not the month. For example, for October 2023 (Q4), it would return December 31, 2023, which is not the correct ending date for the month of October.
ENDOFQUARTER(DATE('BalanceSheet'[Year],BalanceSheet[Month],1),0):
This is a DAX function, not an M function. It would be syntactically incorrect in a Power Query custom column. Furthermore, even in DAX, it suffers from the same logical flaw as the previous option—it calculates the end of the quarter, not the end of the month.

Reference:
Core Concept:
This question tests the understanding of the correct tool for the job: using Power Query (M) for data preparation and structural changes, and understanding basic M language date functions.

How should you distribute the reports to the board? To answer, select the appropriate
options in the answer area.
NOTE: Each correct selection is worth one point.






What is the minimum number of datasets and storage modes required to support the reports?


A. two imported datasets


B. a single DirectQuery dataset


C. two DirectQuery datasets


D. a single imported dataset





D.
  a single imported dataset

📘 Explanation:
Power BI supports combining data from multiple sources into a single imported dataset, which is the most efficient and flexible approach for report development. Import mode allows:
Integration of data from multiple tables and sources
Fast performance due to in-memory storage
Full support for modeling, DAX, and visuals
You do not need multiple datasets unless there are strict isolation or access control requirements. A single imported dataset can handle all reporting needs if properly modeled.

Reference:
🔗 Microsoft Learn – Understand dataset storage modes
🔗 Microsoft Fabric Community – Minimum number of datasets and storage mode

❌ Why other options are incorrect:
A. Two imported datasets
→ Unnecessary duplication. One well-designed dataset is sufficient.

B. A single DirectQuery dataset
→ Slower performance, limited modeling capabilities, and depends on source system availability.

C. Two DirectQuery datasets
→ Adds complexity and latency. Not needed unless source systems must remain live and isolated.

📘 Summary:
Use a single imported dataset to support multiple reports efficiently. It offers the best performance, flexibility, and simplicity for most reporting scenarios.

Once the profit and loss dataset is created, which four actions should you perform in
sequence to ensure that the business unit analysts see the appropriate profit and loss
data? To answer, move the appropriate actions from the list of actions to the answer area
and arrange them in the correct order.






Explanation:
This process implements Row-Level Security (RLS) to ensure that when a business unit analyst views a report, they only see data for their specific business unit.

Step 1: From Power BI Desktop, create four roles.
Explanation:
This is the foundational step. You must first define the security roles within the data model itself. Since there are four business units, you would create four corresponding roles (e.g., "Role_UnitA", "Role_UnitB", etc.). This is done in Power BI Desktop using the "Manage Roles" dialog box.
Why it must be first: You cannot configure security for roles that do not yet exist. All RLS logic is built upon these role definitions.

Step 2: From Power BI Desktop, add a Table Filter DAX Expression to the roles.
Explanation:
After creating a role, you must define its security rules. This is done by writing a DAX expression that filters the data. For example, for a role meant to see only data for "UnitA", the DAX expression on the relevant table would be [BusinessUnit] = "UnitA". This expression is applied for any user assigned to that role.
Why it must be second:
The filter expression is the core of the security rule and is a property of the role. Therefore, the role must be created (Step 1) before you can assign a filter to it.
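As a sketch, the four table-filter expressions might look like this (the BusinessUnit column name and the unit values are assumptions for illustration):

    Role_UnitA: [BusinessUnit] = "UnitA"
    Role_UnitB: [BusinessUnit] = "UnitB"
    Role_UnitC: [BusinessUnit] = "UnitC"
    Role_UnitD: [BusinessUnit] = "UnitD"

Each expression is entered against the table that carries the business unit column and evaluates per row to True or False.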

Step 3: From Power BI Desktop, publish the dataset to powerbi.com.
Explanation:
The roles and their RLS rules are defined in the .pbix file. To make these security definitions available in the cloud service where users access reports, you must publish the file from Power BI Desktop to a workspace in the Power BI service.
Why it must be third: The security model is part of the dataset. You must deploy the dataset to the service before you can manage user assignments to the roles you just created and configured.

Step 4: From powerbi.com, add role members to the roles.
Explanation:
The final step is to map individual users or security groups to the roles you created. This is done in the workspace settings in the Power BI service. You select the dataset, go to its Security settings, and assign the business unit analysts to their respective roles.
Why it must be last and is correct:
Until this step is completed, the RLS rules are defined but not enforced on any specific user. Assigning a user to a role activates the DAX table filter for that user whenever they interact with reports based on the dataset.

Why the Other Action is Incorrect
From powerbi.com, assign the analysts the Contributor role to the workspace.
Explanation:
This action controls permissions to the workspace (e.g., allowing users to publish reports or modify content). It is unrelated to Row-Level Security (RLS), which controls which rows of data a user can see within a report. A user can be a Contributor, Member, or Viewer and still have RLS applied to them. Assigning a workspace role does not filter the data they see.

Reference:
Core Concept: This question tests the procedural knowledge for implementing Row-Level Security (RLS) in Power BI, which involves both Desktop (model definition) and Service (user management) steps.

You need to create a solution to meet the notification requirements of the warehouse
shipping department.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.







📘 Explanation:
To meet the notification requirements of the warehouse shipping department, the correct approach is to use a dashboard tile with a data alert. Power BI alerts can only be triggered from visuals pinned to dashboards—not from reports directly. This makes dashboards the required surface for alert-based automation.
Start by creating a card visualization that calculates the percentage of late orders. This visual should be pinned to a dashboard, which serves as the container for alert-enabled tiles. Once pinned, you can configure a data alert on the card tile to notify users when the percentage exceeds a defined threshold (e.g., 10%).
Data alerts in Power BI are designed for numeric tiles such as cards, KPIs, and gauges. They support email notifications and can be used to trigger flows in Power Automate for more advanced workflows. This setup ensures that warehouse staff are proactively notified when late orders spike, enabling timely operational response.

Reference:
🔗 Microsoft Learn – Set data alerts in Power BI
🔗 Microsoft Learn – Pin visuals to dashboards

❌ Why other options are incorrect:
Bookmark:
Bookmarks are used for navigation and storytelling within reports. They save the current view, filter state, and visual configuration but cannot trigger alerts. They are not tied to dashboards and have no notification capabilities.
Phone view:
This optimizes the layout of reports for mobile devices but does not support alert logic. It’s purely a formatting feature and has no role in data-driven notifications.
Report:
Alerts cannot be configured directly on report visuals. Power BI’s alert system is tied to dashboard tiles only, which means visuals must be pinned to a dashboard before alerts can be set. Attempting to configure alerts from within a report will not work.

📘 Summary:
To enable automated notifications for late orders, you must:
Create a card visual showing the percentage of late orders
Pin it to a dashboard
Configure a data alert on the dashboard tile

You need to design the data model and the relationships for the Customer Details
worksheet and the Orders table by using Power BI. The solution must meet the report requirements. For each of the following statement, select Yes if the statement is true, Otherwise, select No. NOTE: Each correct selection is worth one point.







Explanation:
Statement 1: A relationship must be created between the CustomerID column in the Customer Details worksheet and the CustomerID column in the Orders table.
Answer: Yes
Explanation:
This is a fundamental principle of the star schema design in Power BI. The Customer Details worksheet is a dimension table (containing descriptive attributes about customers), and the Orders table is a fact table (containing transactional data). To filter the order transactions by customer attributes or to bring customer details into an order-related report, a one-to-many relationship must exist between the dimension table (Customer Details[CustomerID]) and the fact table (Orders[CustomerID]). Without this relationship, the tables cannot interact, and the reporting requirements cannot be met.

Statement 2: The Data Type of the columns in the relationship between the Customer Details worksheet and the Orders table must be set to Text.
Answer: No
Explanation:
While the data types of the two columns used in a relationship must be identical, they do not have to be "Text." The CustomerID could logically be a Whole Number data type (an Integer). In fact, using a numeric data type is often more performant. The critical rule is data type consistency, not a specific data type. This statement is false because it incorrectly mandates the "Text" data type as the only option.

Statement 3: The Region field used to filter the Top Customers report must come from the Orders table.
Answer: No
Explanation:
This statement violates a core best practice of dimensional modeling. The Region is a descriptive attribute of a Customer. It should reside in the Customer Details dimension table, not in the Orders fact table. This design, known as "denormalization," allows for consistent filtering and prevents data redundancy and potential inconsistencies. If Region were stored in the Orders table, a single customer could potentially have multiple regions associated with it if data entry errors occurred, leading to inaccurate reports. Filtering by a column from the Customer Details table is not only possible but is the correct approach, as the relationship created in Statement 1 will propagate the filter from the dimension to the fact table.
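To illustrate the propagation, a measure that counts orders for one region might look like the following sketch (the measure name and the "West" value are hypothetical):

    West Region Orders =
    CALCULATE (
        COUNTROWS ( Orders ),
        'Customer Details'[Region] = "West"
    )

The filter on the Customer Details dimension flows across the one-to-many relationship and restricts the rows counted in Orders.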

Reference:
Core Concept: This question tests the understanding of dimensional modeling, specifically the principles of the star schema, the importance of correct data types for relationships, and the practice of storing descriptive attributes in dimension tables.

You need to create the dataset. Which dataset mode should you use?


A. DirectQuery


B. Import


C. Live connection


D. Composite





B.
  Import

Explanation:
The scenario requires creating a dataset from multiple data sources (Excel worksheet, SQL Server table, and a CSV file) that need to be combined and shaped in Power Query before being loaded into the data model. The Import mode is the only one that supports this specific multi-source data mashup and transformation requirement.
Let's analyze why Import mode is the correct choice and why the other modes are not suitable:

Why Option B (Import) is Correct:
Multi-Source Data Mashup:
The Import mode allows you to connect to all three data sources (Excel, SQL Server, CSV) within Power Query Editor, perform necessary transformations (cleaning, merging, appending, calculated columns), and then load the final, integrated dataset into Power BI's in-memory engine (VertiPaq). This is the primary use case for Import mode.

Full Power Query Capabilities:
You have access to the complete set of data transformation tools in Power Query when using Import mode, which is essential for preparing the data from these disparate sources.

Performance:
Once loaded, reports and dashboards are extremely fast because they query the highly optimized, in-memory data model.

Why the Other Options Are Incorrect:
A. DirectQuery:
This mode does not load data into the model. Instead, it sends queries directly to the source database at report runtime. It is designed for a single, supported relational source (like the SQL Server table) and does not support combining data from multiple different source types (like Excel and CSV) into a single model. Power Query transformations are also severely limited in DirectQuery mode.

C. Live connection:
This mode is used to connect to an analysis services model (either Power BI Premium datasets, Azure Analysis Services, or SQL Server Analysis Services). It is not used to create a new dataset from raw source files and a database table. You connect live to an already-built semantic model.

D. Composite:
A composite model blends Import and DirectQuery modes. You might use this if, for example, you imported the Excel and CSV data but kept a very large SQL Server table in DirectQuery. While technically possible, it adds unnecessary complexity. The straightforward and most efficient solution for combining these three relatively small data sources is to use Import mode, which is the default and recommended approach for this scenario.

Reference:
Core Concept:
This question tests the understanding of dataset storage modes in Power BI and selecting the correct mode based on data source types and transformation requirements.

You need to create a relationship in the dataset for RLS.
What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.








Explanation:
To implement Row-Level Security (RLS) effectively, you need a relationship that allows filter context from a table with defined security roles (often a dimension table like Sales Employees) to propagate to related fact tables. The relationship must be from the security table to the table containing the data you want to secure. A "one-to-many" relationship is the most common pattern for this.

Correct Options:

Relationship type: One-to-many
A "one-to-many" relationship is the standard for RLS. You define the security role on the 'one' side (e.g., Sales Employees table). The filter then propagates down to the related rows on the 'many' side (e.g., Orders table), restricting data based on the employee. This direction ensures the security rule filters the transactional data correctly.

Related table: Orders table
The Sales Employees table is the logical security dimension. For RLS, you would create a role that filters the Sales Employees table. To secure order data by employee, you must relate Sales Employees to the Orders table. This allows a filter on an employee to automatically restrict the orders they are associated with, which is the typical business requirement.
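As a sketch, a dynamic RLS rule defined on the 'one' side might compare a column to the signed-in user (the Email column is an assumption for illustration):

    'Sales Employees'[Email] = USERPRINCIPALNAME()

Because of the one-to-many relationship, this filter on Sales Employees automatically restricts the related rows in Orders.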

Incorrect Options:

Relationship type (One-to-one, Many-to-one, Many-to-many):
One-to-one relationships are rare and not typical for RLS filtering scenarios between dimension and fact tables.

Many-to-one describes the same relationship but viewed from the opposite direction (Orders to Sales Employees); Power BI primarily uses the 'one-to-many' terminology from the filtering side.

Many-to-many relationships require special configuration and can lead to ambiguous filter propagation, making them complex and less reliable for simple, predictable RLS implementations.

Related table (Suppliers table, Order Details table, Customer Details worksheet):

Suppliers table:
Unrelated to securing data by Sales Employee.

Order Details table:
While related to orders, it's typically better to place the RLS filter higher up at the Orders level (or on both). The most direct and common path is from Employee to Orders.

Customer Details worksheet:
This is not a table; "worksheet" suggests an Excel object, not a properly modeled table in Power BI for this context.

Reference:
Microsoft Learn: "Implement row-level security (RLS)" emphasizes creating relationships from the table used in the DAX security filter (the dimension) to the tables you want to secure. The standard model uses one-to-many relationships for predictable filter propagation.

You need to create the Top Customers report.
Which type of filter should you use, and at which level should you apply the filter? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.








Explanation:
This question asks you to configure a "Top N" filter, which dynamically shows only the highest or lowest ranked items based on a measure (like sales amount). You must choose the correct filter category and its application scope within a Power BI report to create a "Top Customers" visualization.

Correct Options:

Filter type: Top N
The "Top N" filter is specifically designed to display a defined number of top (or bottom) items. To show "Top Customers," you would use this filter, set it to "Top" and specify "N" (e.g., 10), and base the ranking on a measure like "Total Sales." This is the exact purpose of this filter type.

Level: Visual
Applying the filter at the Visual level is correct for a "Top Customers" report. This confines the filter's effect to a single visual (like a bar chart or table), ensuring only the top-ranked customers appear in that specific visual without affecting other visuals on the same page.

Incorrect Options:

Filter type (Basic, Advanced):
Basic filters allow filtering by specific values (e.g., Customer = "Contoso") but do not dynamically calculate rankings.

Advanced filters offer complex conditions using "And/Or" logic but cannot inherently perform a Top N ranking based on a measure's value.

Level (Page, Report):
Page level would apply the Top N filter to all visuals on the report page, which is too broad if you only want a single "Top Customers" visual among others showing full data.

Report level would apply the filter to all pages in the report, which is excessively restrictive and not the requirement for a single report focused on top customers.

Reference:
Microsoft Learn documentation on "Filters in Power BI" distinguishes between filter types (Basic, Advanced, Top N) and scopes (Visual, Page, Report, Drillthrough). The Top N filter is designed for ranking, and the Visual level provides the targeted control needed for this scenario.

You need to create the On-Time Shipping report. The report must include a visualization that shows the percentage of late orders.
Which type of visualization should you create?


A. bar chart


B. scatterplot


C. pie chart





A.
  bar chart



Explanation:
The requirement is to visualize a single, clear metric: the percentage of late orders. This is a part-to-whole comparison, showing the proportion of late orders against on-time orders. The goal is immediate clarity for a key performance indicator (KPI).

Correct Option:

A. Bar Chart
A bar chart is the optimal choice. You can create a bar chart with two bars: one for "Late Orders (%)" and one for "On-Time Orders (%)". This allows for a direct, side-by-side comparison of the two complementary percentages, making the late order rate immediately obvious and easy to track over time if categories such as months are added to the axis.

Incorrect Options:

B. Scatterplot
A scatter plot is used to show the relationship between two numerical variables (e.g., order value vs. shipping cost) for many data points. It is ineffective for displaying a single aggregated percentage value and does not clearly communicate a part-to-whole relationship.

C. Pie Chart
While a pie chart can show a part-to-whole relationship, it is less effective for precise comparison than a bar chart. For a critical business metric like late order percentage, a bar chart provides a more accurate and impactful visual comparison, especially when the goal is to track this percentage over different time periods.

Reference:
Microsoft's data visualization best practices often recommend using bar/column charts over pie charts for comparing magnitudes, as the human eye judges length more accurately than angle or area. A bar chart is the standard for clear comparison of categorical data like this.

You need to create a measure that will return the percentage of late orders.
How should you complete the DAX expression? To answer, select the appropriate options
in the answer area.
NOTE: Each correct selection is worth one point.









Explanation:
The goal is to write a DAX measure calculating the percentage of late orders. The provided code framework defines variables for total orders and late orders. The late orders variable must count only rows where the ShippedDate is greater than the RequiredDate. This requires using a function to modify filter context to evaluate this row-by-row condition.

Correct Options:

Function: CALCULATETABLE
CALCULATETABLE is used here to return a table of orders filtered by a condition. It is the correct function to pair with COUNTROWS to count rows from the Orders table that meet a specific filter criterion defined in the subsequent step.

Filter Condition: Orders[ShippedDate] > Orders[RequiredDate]

This is the business logic definition of a "late order": an order that was shipped after its required date. This condition correctly identifies the rows to be counted.
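Assembled with the two correct selections, the completed measure might read roughly as follows (the measure name and the final DIVIDE step are assumptions, since the question's code skeleton is not reproduced here):

    Late Orders % =
    VAR TotalOrders = COUNTROWS ( Orders )
    VAR LateOrders =
        COUNTROWS (
            CALCULATETABLE (
                Orders,
                Orders[ShippedDate] > Orders[RequiredDate]
            )
        )
    RETURN
        DIVIDE ( LateOrders, TotalOrders )

DIVIDE handles the divide-by-zero case gracefully and returns the late fraction, which can then be formatted as a percentage.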

Incorrect Options:

Function (SUM, COUNTX, CALCULATE):

SUM: Used for aggregating a column, not for counting rows based on a table expression.

COUNTX: Iterates over a table but requires an expression to evaluate per row. It could work with a different structure, but CALCULATETABLE is the intended fit for the given code skeleton.

CALCULATE: Modifies filter context for an aggregation but returns a scalar value, so COUNTROWS(CALCULATE(...)) is not valid; COUNTROWS expects a table. CALCULATETABLE returns a filtered table, which is the correct argument for COUNTROWS.

Filter Condition (Other Options):

Orders[OrderDate] > Orders[RequiredDate]: Compares the order date to the required date, which would flag orders placed after their required date, not a shipping delay.

Orders[ShippedDate] >= Orders[OrderDate]: Ensures an order isn't shipped before it's placed—a data integrity check, not a lateness measure.

Orders[ShippedDate] < Orders[RequiredDate]: Defines on-time or early shipments, the opposite of what is needed.

Reference:
The DAX pattern COUNTROWS( CALCULATETABLE ( Table, FilterCondition ) ) is a standard method for counting table rows that meet specific criteria. The logic for Late Orders aligns with common business definitions in Microsoft's data modeling guidance.

