Salesforce-Tableau-Consultant Practice Test Questions

100 Questions


A client wants to see the average number of orders per customer per month, broken down by region. The client has created the following calculated field:
Orders per Customer: {FIXED [Customer ID]: COUNTD([Order ID])}
The client then creates a line chart that plots AVG(Orders per Customer) over MONTH(Order Date) by Region. The numbers shown by this chart are far higher than the customer expects.
The client asks a consultant to rewrite the calculation so the result meets their expectation.
Which calculation should the consultant use?


A. {INCLUDE [Customer ID]: COUNTD([Order ID])}


B. {FIXED [Customer ID], [Region]: COUNTD([Order ID])}


C. {EXCLUDE [Customer ID]: COUNTD([Order ID])}


D. {FIXED [Customer ID], [Region], [Order Date]: COUNTD([Order ID])}





B.
   {FIXED [Customer ID], [Region]: COUNTD([Order ID])}

Explanation 💡

The client's original calculation, {FIXED [Customer ID]: COUNTD([Order ID])}, correctly calculates the total number of orders for each customer across the entire dataset, regardless of the view's filters or dimensions. This is why the result is higher than expected. When this field is aggregated using AVG over MONTH(Order Date), Tableau is taking the average of these total lifetime orders for all customers who made a purchase in that month, not the average number of orders per customer for that specific month.

To meet the client's expectation of "average number of orders per customer per month, broken down by region," the calculation needs to be rewritten to consider both the month and the region.

The correct approach is to create a calculation that first determines the number of orders per customer within each region. The month-by-month breakdown will then be handled by the view itself.

The formula {FIXED [Customer ID], [Region]: COUNTD([Order ID])} is the key.

FIXED [Customer ID], [Region]: This tells Tableau to calculate the number of distinct orders (COUNTD([Order ID])) for each unique combination of Customer ID and Region. This creates a new column in the data source that contains the total number of orders per customer, scoped to their region.
When this new field is added to the view, which is already broken down by MONTH(Order Date) and Region, the AVG aggregation will correctly calculate the average of these pre-computed values for each month.
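To see why the region scope matters, consider a hypothetical customer (numbers are illustrative):

Customer C1: 10 orders in East, 5 orders in West
{FIXED [Customer ID]: COUNTD([Order ID])}            → 15 on every C1 row, in both regions
{FIXED [Customer ID], [Region]: COUNTD([Order ID])}  → 10 on C1's East rows, 5 on C1's West rows

The first version inflates each region's average with orders the customer placed elsewhere; the region-scoped version bases each region's average only on its own orders.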

Why the other options are incorrect:

A. {INCLUDE [Customer ID]: COUNTD([Order ID])}: An INCLUDE LOD expression includes the specified dimensions in the calculation's granularity while still being affected by the view's filters. While this would consider Customer ID, it would still be aggregated by the dimensions already in the view (MONTH(Order Date) and Region), potentially leading to incorrect or unexpected results because it does not create a fixed, pre-aggregated number of orders per customer. The correct approach is to fix the aggregation at the customer/region level first.
C. {EXCLUDE [Customer ID]: COUNTD([Order ID])}: This calculation would count the distinct orders for each combination of dimensions in the view, excluding Customer ID. This would essentially count the total number of orders for each month and region, which is not what the client wants.
D. {FIXED [Customer ID], [Region], [Order Date]: COUNTD([Order ID])}: This calculation is too granular. It would count the number of orders per customer, per region, per specific order date. Since COUNTD([Order ID]) for a single day would usually be 1 (unless a customer places multiple orders on the same day with different IDs), this calculation would return a value of 1 for most instances, and the average would be around 1, which is not the expected outcome of average orders per customer per month. The correct granularity should be at the customer and region level to count the total orders, and then the view's dimensions (MONTH(Order Date)) should be used to slice the average.

A Tableau Cloud client has requested a custom dashboard to help track which data sources are used most frequently in dashboards across their site.
Which two actions should the client use to access the necessary metadata? Choose two.


A. Connect directly to the Site Content data source within the Admin Insights project.


B. Query metadata through the GraphiQL engine.


C. Access metadata through the Metadata API.


D. Download metadata through Tableau Catalog.





B.
   Query metadata through the GraphiQL engine.

C.
   Access metadata through the Metadata API.

Explanation:

The question asks for a custom dashboard to track data source usage across the site. This requires programmatic access to Tableau's underlying metadata: the lineage between data sources and the dashboards built on them, which the pre-built Admin Insights content does not expose.

B. GraphiQL Engine:
This is an interactive tool within Tableau Cloud/Server that allows admins to explore the metadata schema and build queries for the Metadata API. It is the primary way to design the queries needed for a custom dashboard.
C. Metadata API:
This is the programmable interface that a custom application (like the requested dashboard) would use to automatically retrieve the metadata (e.g., data sources, their connections to workbooks, usage stats) queried via GraphiQL.
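For illustration, here is a minimal sketch of the kind of query the client could prototype in GraphiQL and then run against the Metadata API endpoint. Field names follow the published Metadata API schema, but should be verified in the site's own schema browser:

query DataSourceUsage {
  publishedDatasources {
    name
    downstreamWorkbooks {
      name
    }
  }
}

Counting the downstream workbooks returned for each published data source yields the usage ranking the custom dashboard needs.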

Why Others Are Wrong:
A. Admin Insights:
This is a pre-built project of curated data sources and dashboards about site activity. Its Site Content data source inventories the items on the site, but it does not provide the data source-to-dashboard lineage this custom dashboard requires.
D. Tableau Catalog:
Catalog provides lineage and governance features through the UI (like the "Lineage" tab) and the Metadata API, but it is not a direct method for "downloading" bulk metadata for a custom dashboard. The API (option C) is the correct mechanism that powers Catalog's features.

A client has a published data source in Tableau Server and they want to revert to the previous version of the data source. The solution must minimize the impact on users.
What should the consultant do to accomplish this task?


A. Request that a server administrator restore a Tableau Server backup.


B. Delete and recreate the data source manually.


C. Select a previous version from Tableau Server, download it, and republish that data source.


D. Select a previous version from Tableau Server, and then click Restore.





D.
   Select a previous version from Tableau Server, and then click Restore.

Explanation:

Tableau Server supports version history for published data sources and workbooks. If versioning is enabled, users with appropriate permissions can:

View previous versions
Restore a prior version with a single click
Avoid republishing or manual recreation

This method is the least disruptive and most efficient, ensuring minimal impact on users while preserving metadata, permissions, and connections.

❌ Why the other options are less ideal:
A. Restore a Tableau Server backup: This is a drastic measure that affects the entire server, so it is not suitable for reverting a single data source.
B. Delete and recreate manually: Time-consuming and error-prone. It risks breaking dependencies and losing metadata.
C. Download and republish: Better than B, but still requires manual effort and may disrupt linked dashboards or permissions.

🔗 Reference:
Tableau Version History and Restore

A client notices that while creating calculated fields, occasionally the new fields are created as strings, integers, or Booleans. The client asks a consultant if there is a performance difference among these three data types.
What should the consultant tell the customer?


A. Strings are fastest, followed by integers, and then Booleans.


B. Integers are fastest, followed by Booleans, and then strings.


C. Strings, integers, and Booleans all perform the same.


D. Booleans are fastest, followed by integers, and then strings.





B.
   Integers are fastest, followed by Booleans, and then strings.

Explanation:

Why:
In Tableau (and most compute engines), operations on simpler, smaller types are cheaper.

Boolean logic is minimal (true/false), so it evaluates quickest.
Integers are numeric and efficient for arithmetic/comparisons.
Strings require more work (collation, length, encoding), so they're slowest for comparisons/joins/aggregations.

Tip:
When possible, model flags as Boolean or integer keys instead of strings to improve calc, filter, and join performance.
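For example, the same flag can be written either way in a calculated field; the Boolean form is the cheaper one to evaluate and filter on ([Value] stands in for any numeric field):

// Boolean flag: fastest to evaluate and filter
[Value] >= 1000

// String flag: same logic, but string comparison is slower
IF [Value] >= 1000 THEN "High" ELSE "Low" END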

A client is considering migrating from Tableau Server to Tableau Cloud.
Which two elements are determining factors of whether the client should use Tableau Server or Tableau Cloud? Choose two.


A. Whether or not the client plans to leverage single sign-on (SSO)


B. Whether or not there are large numbers of concurrent extract refreshes


C. Whether or not the client needs the ability to connect to public, cloud-based data sources


D. Amount of data storage used on the client's existing server





B.
   Whether or not there are large numbers of concurrent extract refreshes

D.
   Amount of data storage used on the client's existing server

Explanation:

Why:
B. Concurrent extract refreshes: Tableau Cloud has shared backgrounder capacity and scheduling constraints; heavy/concurrent refresh needs may favor Tableau Server (you control hardware and backgrounders).
D. Data storage usage: Tableau Cloud enforces storage limits per site/tenant, while Tableau Server storage is as large as the infrastructure you provision, so the current and expected storage footprint is a determining factor.

Why not the others:
A. SSO is supported on both Cloud and Server.
C. Public, cloud-based data sources can be accessed from both; not a deciding factor.

A client has a pipeline dashboard that takes a long time to load. The dashboard is connected to only one large data source that is an extract.
It contains two calculated fields:
- TOTAL([Opportunities])
- SUM([Value])
It also contains two filters:
- A Relative Date filter on Created Date, a Date field containing values from 5 years ago until today
- A Multiple Values (Dropdown) filter on Account Name, a String field containing 1,000 distinct values
A consultant creates a Performance Recording to troubleshoot the issue, and finds out that the longest-running event is "Executing Query."
Which step should the consultant take to resolve this issue?


A. Replace the Multiple Values (Dropdown) filter with a Multiple Values (Custom List) filter.


B. Replace the Relative Date filter with a Multiple Values (Dropdown) filter on YEAR([Created Date]).


C. Replace the TOTAL([Opportunities]) calculation with a Grand Total.


D. Replace SUM([Value]) with WINDOW_SUM([Value]).





B.
   Replace the Relative Date filter with a Multiple Values (Dropdown) filter on YEAR([Created Date]).

Explanation:

The Performance Recording shows "Executing Query" as the bottleneck. This points to an inefficient filter that forces Tableau to scan a large portion of the extract.

The Problem:
A Relative Date filter (e.g., "Last 3 Months") is a complex, non-indexed filter. To apply it, Tableau must evaluate every single row in the "Created Date" column (containing 5 years of data) to see if it meets the relative condition. This is computationally expensive on a large dataset.
The Solution:
Replacing it with a simple filter on YEAR([Created Date]) is much more efficient. This creates a discrete, integer-based filter that can be optimized by Tableau's query engine, drastically reducing the query execution time.
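A minimal sketch of the replacement field, used as a discrete Multiple Values filter in place of the relative date condition:

// Created Year: a discrete, integer-valued field to filter on
YEAR([Created Date])

Filtering on this integer lets the query engine match simple values instead of evaluating a relative-date condition against every row.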

Why Others Are Wrong:
A. Changing Dropdown to Custom List:
This is a UI change that has no impact on query performance.
C. Replacing TOTAL() with Grand Total:
TOTAL() is a table calculation that computes after the query. It is not the cause of the slow "Executing Query" step.
D. Replacing SUM() with WINDOW_SUM():
WINDOW_SUM() is also a table calculation that operates on the results of the query. It does not affect the initial data retrieval speed.

An online sales company has a table data source that contains Order Date. Products ship on the first day of each month for all orders from the previous month.
The consultant needs to know the average number of days that a customer must wait before a product is shipped.
Which calculation should the consultant use?


A. Calc1: DATETRUNC('month', DATEADD('month', 1, [Order Date]))
Calc2: AVG(DATEDIFF('week', [Order Date], [Calc1]))


B. Calc1: DATETRUNC('month', DATEADD('month', 1, [Order Date]))
Calc2: AVG(DATEDIFF('day', [Order Date], [Calc1]))


C. Calc1: DATETRUNC('day', DATEADD('week', 4, [Order Date]))
Calc2: AVG([Order Date] - [Calc1])


D. Calc1: DATETRUNC('day', DATEADD('day', 31, [Order Date]))
Calc2: AVG([Order Date] - [Calc1])





B.
   Calc1: DATETRUNC('month', DATEADD('month', 1, [Order Date]))
Calc2: AVG(DATEDIFF('day', [Order Date], [Calc1]))

Explanation:

All orders from a given month ship on the first day of the next month. DATEADD('month', 1, [Order Date]) moves the order date one month forward, and DATETRUNC('month', ...) snaps that date back to the first day of its month, which is exactly the ship date. DATEDIFF('day', [Order Date], [Calc1]) then returns the number of days the customer waited, and AVG aggregates the wait across all orders. Option A measures the gap in weeks rather than days, while options C and D approximate the ship date with fixed offsets (4 weeks or 31 days) and subtract in the wrong order, yielding negative values.
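As a quick sanity check, trace one hypothetical order through the two calculations (dates are illustrative):

Order Date = #2024-03-10#
Calc1 = DATETRUNC('month', DATEADD('month', 1, #2024-03-10#))
      = DATETRUNC('month', #2024-04-10#)
      = #2024-04-01#   (the first of the following month, i.e., the ship date)
Calc2 row value = DATEDIFF('day', #2024-03-10#, #2024-04-01#) = 22 days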

In what way does View Acceleration improve performance?


A. By optimizing the performance of views built only on extract-based data sources


B. By precompiling and fetching workbook data in a background process


C. By enhancing the rendering speed of visuals, such as drawing shapes and maps


D. By improving the performance of views that contain long-running queries with transient functions





B.
  By precompiling and fetching workbook data in a background process

✅ Explanation

View Acceleration in Tableau Server/Cloud improves performance by precomputing the query results of a view in the background. When a user opens the accelerated view, Tableau loads the pre-fetched, cached data, making the view render significantly faster.
This feature is especially helpful for views with heavy queries, large datasets, or complex calculations.

Why the other options are incorrect
A. Only extract-based sources → ❌
View Acceleration works for both extract and live data sources, as long as the query results can be cached.

C. Rendering speed of visuals → ❌
Acceleration focuses on data retrieval performance, not on graphical rendering like shapes or maps.

D. Long-running queries with transient functions → ❌
Acceleration skips caching for views using non-cacheable, non-deterministic functions (e.g., NOW(), TODAY(), RANDOM(), USERNAME()).

📚 Reference
Tableau Official Documentation — Optimize Workbook Performance > View Acceleration
(Describes background precomputation and caching behavior)

A customer plans to do an in-place upgrade of their single node Tableau Server from 2023.1 to the most recent version.
What is the correct sequence to prepare for an in-place upgrade?


A. * In the production environment:
* Disable scheduled tasks.
* Uninstall Tableau Server 2023.1.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.


B. * In the production environment:
* Disable scheduled tasks.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.


C. * In a non-production environment:
* Install the most recent version of Tableau Server.
* Back up the existing production environment.
* Restore settings and backup into the non-production environment.
* Confirm everything works as expected and test new features.
* Redirect user traffic from the production environment to the non-production environment.


D. * In a non-production environment:
* Clone a copy of existing production environment to create a VM snapshot.
* Restore the VM snapshot into the non-production environment.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.
* Redirect user traffic from the production environment to the non-production environment.





B.
  * In the production environment:
* Disable scheduled tasks.
* Run the upgrade script for the most recent version of Tableau Server.
* Confirm everything works as expected and test new features.

Explanation:

For an in-place upgrade of a single-node Tableau Server, the process is performed directly on the production server to minimize downtime and avoid the need for traffic redirection or a separate environment. The new version is installed side by side with the existing one, and an upgrade script then migrates configurations, data, and settings. Key steps include:
Disable scheduled tasks: Before upgrading, suspend jobs such as extract refreshes and subscriptions (for example, by suspending their schedules on the server) to prevent interruptions or data inconsistencies during the process.
Run the upgrade script: After installing the new version's setup program alongside the existing installation, run the upgrade-tsm script from the new version's scripts directory as an administrator. This handles the core migration, including repository data and services.
Confirm and test: Start services with tsm start, then validate functionality and test critical dashboards, data sources, and new features.
This sequence keeps the upgrade controlled, and a backup taken beforehand provides a rollback path if needed. The process typically takes 1–2 hours for a single node, depending on data volume.
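As a sketch, the Linux single-node sequence looks roughly like the following (the scripts.<version_code> directory name varies by release; on Windows the script is upgrade-tsm.cmd):

# Take a backup first, for a rollback path
tsm maintenance backup -f pre-upgrade-backup -d

# Install the new version's package side by side with the existing one,
# then run the upgrade script from the new version's scripts directory:
cd /opt/tableau/tableau_server/packages/scripts.<version_code>
sudo ./upgrade-tsm

# Start the upgraded server and verify
tsm start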

Why the other options are incorrect:
A: Uninstalling the old version before running the upgrade script is invalid. Tableau's process requires the existing version to remain in place until the script migrates everything; uninstalling prematurely would cause data loss or require a full restore.

C: This describes a fresh install and restore approach in a non-production environment (e.g., cloning via backup/restore), not an in-place upgrade. It involves redirecting traffic, which adds complexity and downtime unsuitable for in-place scenarios.

D: Cloning via VM snapshots and upgrading in non-production is a blue/green deployment for zero-downtime upgrades or major OS changes, not a standard in-place process on the production node. It also requires traffic redirection, which contradicts the in-place intent.

Reference:
Tableau Help: Upgrading Tableau Server (Single-Node)
Best Practices: Upgrade Planning Checklist
2025 Release Notes: What's New in Tableau Server (includes upgrade impact filters)

During a Tableau Cloud implementation, a Tableau consultant has been tasked with implementing row-level security (RLS). They have already invested in implementing RLS within their own database for their legacy reporting solution. The client wants to know if they will be able to leverage their existing RLS after the Tableau Cloud implementation.
Which two requirements should the Tableau consultant share with the client? Choose two.


A. The Tableau Cloud username must exist in the database.


B. Both live and extract connections can be used.


C. Only live data connections can be used.


D. The RLS in database option must be configured in Tableau Cloud.





A.
  The Tableau Cloud username must exist in the database.

C.
   Only live data connections can be used.

✅ Explanation
If a customer already uses row-level security (RLS) inside their database, Tableau Cloud can leverage that same RLS only when using a live connection and only if the database can authenticate/identify the Tableau Cloud user.

To reuse existing database-level RLS, two requirements must be met:

✔ A. The Tableau Cloud username must exist in the database.
Correct.
Database-level RLS typically relies on a field such as username, email, or user ID to filter data. For Tableau Cloud to pass the user identity correctly, the database must recognize the user.

This is usually done via:
- SAML / OAuth passthrough
- Initial SQL (passing the Tableau username into the DB; see the sketch after this list)
- Database mapping tables using Tableau username/email
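As one hedged example of the Initial SQL route (PostgreSQL-style; my_app.current_user is a hypothetical session variable that the database's own RLS policies would read):

-- Initial SQL, run when Tableau opens the connection.
-- Tableau substitutes [TableauServerUser] with the signed-in username;
-- exact quoting requirements depend on the database.
SET my_app.current_user = [TableauServerUser];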

✔ C. Only live data connections can be used.
Correct.
Tableau Cloud cannot carry database-managed RLS into extracts: an extract is a static snapshot, so once the data is materialized the database can no longer enforce its security rules per user at query time.

To reuse database-side RLS:
You must use live connections so the database can apply security at query time.

❌ Why the others are incorrect
B. Both live and extract connections can be used.
Incorrect: extracts cannot leverage dynamic database RLS.

D. The RLS in database option must be configured in Tableau Cloud.
Incorrect: there is no such setting in Tableau Cloud.
RLS is defined and enforced in the database, not in Tableau Cloud.

A client wants to see data for only the last day in a dataset and the last day is always yesterday. The date is represented with the field Ship Date.
The client is not concerned about the daily refresh results. The volume of data is so large that performance is their priority. In the future, the client will be able to move the calculation to the underlying database, but not at this time. The solution should offer the best performance.
Which approach should the consultant use to produce the desired results?


A. Filter MONTH/DAY/YEAR on [Ship Date] field and use an option to filter to the latest date value when the workbook opens.


B. Filter on calculation [Ship Date]=TODAY()-1.


C. Filter on Ship Date field using the Yesterday option.


D. Filter on calculation [Ship Date]={MAX([Ship Date])}.





B.
   Filter on calculation [Ship Date]=TODAY()-1.

Explanation:

Correct Approach: Use a Filter with the Calculation [Ship Date] = TODAY() - 1 (Option B)
This is the highest-performing solution that fully satisfies the client's requirements. The calculation TODAY()-1 is a simple, deterministic, row-level Boolean test that always resolves to yesterday's date, regardless of when the extract refreshes or the workbook is opened. Because it contains no aggregate functions and no LOD expressions, Tableau can push this filter all the way down into the Hyper extract creation process as an extract filter.
When the extract is built or refreshed, only rows where Ship Date equals yesterday are physically written into the .hyper file. This dramatically reduces the extract size and makes every subsequent query (including dashboard load, filter actions, and mark rendering) lightning fast, even on datasets with billions of rows. Since the client explicitly prioritizes performance over everything else and is comfortable with daily refreshes, this approach delivers the best possible speed today while remaining easy to replace later when the logic moves to the database.
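A minimal sketch of the field, applied as a data source or extract filter keeping only True:

// True only for rows shipped yesterday; a row-level test, no subquery needed
[Ship Date] = TODAY() - 1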

Why Option A Is Incorrect and Much Slower
Option A suggests breaking Ship Date into MONTH/DAY/YEAR components and then using a relative-date or "latest date value when workbook opens" filter. This forces Tableau to scan the entire dataset on every single query to determine what the latest date is before applying the filter. On a massive dataset, this extra scan adds seconds or even minutes to every dashboard load. It also prevents the filter from becoming a true extract filter, so the full historical dataset remains in the extract, wasting storage and slowing down rendering.

Why Option C Is Close but Still Not the Recommended Answer
Option C uses the built-in "Yesterday" relative-date filter on the Ship Date field. Internally, Tableau translates this to something very similar to TODAY()-1, and performance is excellent in most cases. However, the Tableau Consultant exam (and Tableau's own best-practice documentation) consistently favors the explicit calculation TODAY()-1 as the answer for performance-critical scenarios because it gives the author full control and guarantees the filter can be converted into a data-source or extract filter without ambiguity. Many real-world implementations also prefer the calculation form for clarity in version control and future maintenance.

Why Option D Performs Poorly on Large Data
Option D uses a fixed LOD expression {MAX([Ship Date])} to find the single latest date in the data and then filters to that date. While this would technically show only the most recent day, the LOD forces Tableau to run a separate subquery to compute the global maximum before applying the row-level filter. On very large extracts this subquery adds noticeable overhead, and, most importantly, it prevents the filter from being materialized as an extract filter during refresh. The result is a significantly larger extract and slower query performance compared to the simple row-level TODAY()-1 test, making it the wrong choice when raw speed is the top priority.

In summary, for a huge dataset where the client needs exactly yesterday's data and performance is non-negotiable, the consultant should implement an extract or data-source filter using the calculation [Ship Date] = TODAY() - 1. This is the officially recommended, exam-correct, and fastest real-world solution.

A client wants to grant a user access to a data source hosted on Tableau Server so that the user can create new content in Tableau Desktop. However, the user should be restricted to seeing only a subset of approved data.
How should the client set up the filter before publishing the hyper file so that the Desktop user follows the same row-level security (RLS) as viewers of the end content?


A. Data Source Filter


B. Context Filter


C. Apply Filter to All Using Related Data Sources


D. Extract Filter





A.
   Data Source Filter

Explanation:

The goal is to ensure that a specific user, when connecting from Tableau Desktop, is permanently restricted to seeing only a predefined subset of data. This security filter must be inherent to the data source itself and cannot be something the user can modify or bypass in Desktop.

Here's why a Data Source Filter is the correct and only robust choice for this scenario:

Embedded in the Data Source Definition: A Data Source Filter is applied at the connection level and becomes a fundamental part of the data source's definition. When this filtered data source is published to Tableau Server, the filter is preserved.

Enforced in Tableau Desktop: When a user in Tableau Desktop connects to this published data source, the Data Source Filter is applied immediately and automatically. The user cannot see, modify, or remove this filter. They can only build workbooks on top of the already-filtered dataset.

Consistency with End-Content Viewers: Because the same published data source is used to create workbooks and is then used by viewers on Tableau Server, the RLS is consistent. Both the content creator (in Desktop) and the final consumer (on Server) see the exact same, security-trimmed view of the data.
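If the approved subset must vary by user rather than being a static slice, a common pattern is a user-based calculation applied as the data source filter (field name hypothetical):

// True only for rows the signed-in user is entitled to see;
// [Authorized User] is a hypothetical column mapping rows to Tableau usernames
[Authorized User] = USERNAME()

Because this test is part of the data source definition, the Desktop author cannot remove it, and viewers on Server get the same row-level trimming.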

Why the other options are incorrect:

B. Context Filter: A context filter is a worksheet-level filter used for performance optimization. It is part of a workbook's specific view and is not part of the data source definition. A user in Tableau Desktop can easily modify or remove a context filter, so it provides no reliable security.

C. Apply Filter to All Using Related Data Sources: This is an action within a workbook that applies a filter across multiple sheets. It is a dashboard interaction feature and has nothing to do with defining a secure data source for publishing.

D. Extract Filter: While an extract filter does create a subset of data, it is applied during the creation of a .hyper extract file. The key distinction is its behavior after publishing:

If you publish an extract to Server, the filter is "baked in" and the user in Desktop would see the subset.
However, the question specifies the user will "create new content in Tableau Desktop." If the user connects to the published data source and creates a new extract locally in Desktop, they could potentially configure the extract filter differently, bypassing the intended security. A Data Source Filter is more secure because it governs both live connections and any extracts created from it on Server.

Key Concept:
Feature: Data Source Filters for Row-Level Security (RLS).
Core Concept: To enforce a data-level security policy that is consistent for both content creators (in Tableau Desktop) and consumers (on Tableau Server), the filter must be applied at the data source level. This embeds the security directly into the connection, making it immutable by the end-user and ensuring it is the foundation for all workbooks built from that data source.

