C_BW4H_2505 Practice Test Questions

80 Questions


Which layer of the layered scalable architecture (LSA++) of SAP BW/4HANA is designed as the main storage for harmonized consistent data?


A. Open Operational Data Store layer


B. Data Acquisition layer


C. Flexible Enterprise Data Warehouse Core layer


D. Virtual Data Mart layer





C.
  Flexible Enterprise Data Warehouse Core layer

Explanation:

The Flexible Enterprise Data Warehouse (EDW) Core layer is the central layer in LSA++ where harmonized, cleansed, and consolidated data from different sources is stored. It ensures consistency and serves as the main source for reporting and analytics.

Why the other options are incorrect

A. Open Operational Data Store (ODS) layer: This layer stores detailed transactional data close to the source system, mainly for operational reporting, not for harmonized consistent data.

B. Data Acquisition layer:
This is responsible for integrating and staging raw data from source systems, not for storing harmonized data.

D. Virtual Data Mart layer:
This layer is used for delivering data to specific reporting needs and analytics, often as a semantic view, without physically storing harmonized data.

References:

SAP Help Portal: “SAP BW/4HANA: Layered Scalable Architecture (LSA++)” – describes the EDW Core layer as the central storage for harmonized data.
SAP Press, “SAP BW/4HANA and the Future of Data Warehousing”, Chapter 3: explains the role of each LSA++ layer in storing and processing data.

You created an Open ODS View on an SAP HANA database table to virtually consume the data in SAP BW/4HANA. Real-time reporting requirements have now changed, and you are asked to persist the data in SAP BW/4HANA. Which objects are created when using the "Generate Data Flow" function in the Open ODS View editor? Note: There are 3 correct answers to this question.


A. DataStore object (advanced)


B. SAP HANA calculation view


C. Transformation


D. Data source


E. CompositeProvider





A.
  DataStore object (advanced)

C.
  Transformation

E.
  CompositeProvider

Explanation:

The "Generate Data Flow" function in the Open ODS View editor is used when you need to materialize virtually exposed data from an Open ODS View into a physically persisted BW/4HANA object for historical storage, transformation, or reporting performance.

Here’s what happens when you execute this function:
An Advanced DataStore Object (ADSO) is created to physically store the data in BW/4HANA.
A Transformation is created between the Open ODS View (source) and the new ADSO (target).
A CompositeProvider is created that includes the new ADSO, allowing it to be used for query reporting.
This creates a complete, modeled data flow from the virtual source to a persisted data target while maintaining the original virtual view.

Why the Other Options Are Incorrect:

B. SAP HANA calculation view
– This is not created by this function in the BW/4HANA context. The Open ODS View itself is built on a HANA object (like a table or view), but the "Generate Data Flow" creates BW modeling objects, not additional HANA calculation views.

D. Data source
– A data source is an extraction object used for bringing data from a source system into BW. The Open ODS View is already the source in this scenario. The "Generate Data Flow" creates downstream BW objects, not a new extraction data source.

Reference
This function follows the "virtual then persist" modeling pattern in BW/4HANA.
It allows you to start with a quick virtual data consumption via an Open ODS View for real-time needs, and later persist the data for history, complex transformations, or performance without redesigning the model.

Which data deletion options are offered for a Standard DataStore Object (advanced)? Note: There are 3 correct answers to this question.


A. Selective deletion of data


B. Selective deletion including data of subsequent targets


C. Request-based data deletion


D. Deletion of data from all tables


E. Deletion of all data from active table only





A.
  Selective deletion of data

C.
  Request-based data deletion

D.
  Deletion of data from all tables

Explanation:

A. Selective deletion of data
This is a core administrative function for aDSOs. It allows you to delete specific records based on filter criteria (e.g., deleting all data for a specific Year or Company Code). In a Standard aDSO, this typically targets the Active Data table. Note that while powerful, it does not automatically update subsequent targets (the "delta" isn't sent forward as a deletion unless specific steps are taken).
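Conceptually, selective deletion behaves like a filtered delete against the active table. The following is a minimal sketch using SQLite; the table and field names are illustrative only and do not reflect the actual BW/4HANA table layout:

```python
import sqlite3

# In-memory stand-in for an aDSO active table (names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE active_data (comp_code TEXT, fiscal_year INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO active_data VALUES (?, ?, ?)",
    [("1000", 2022, 500.0), ("1000", 2023, 700.0), ("2000", 2023, 300.0)],
)

# Selective deletion: remove all records matching the filter criteria
# (here: one company code and one fiscal year).
conn.execute(
    "DELETE FROM active_data WHERE comp_code = ? AND fiscal_year = ?",
    ("1000", 2023),
)

# Records outside the selection are untouched.
remaining = conn.execute(
    "SELECT comp_code, fiscal_year FROM active_data ORDER BY comp_code"
).fetchall()
print(remaining)
```

The filter-based delete leaves all other selections intact, which mirrors why selective deletion alone does not propagate any "reverse delta" to subsequent targets.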

C. Request-based data deletion
Standard aDSOs support request management through the RSPM (Request Status and Process Management) framework. You can delete entire load requests.
If the request is still in the Inbound Table (not activated), it is simply removed. If it has been activated, BW/4HANA can perform a "Rollback" by using the Change Log to reverse the values in the Active Table and then deleting the request.

D. Deletion of data from all tables
In the BW/4HANA Cockpit or Modeling Tools, there is a "Delete Data" function (often referred to as "Clean Up" or "Delete All") that allows you to wipe the aDSO entirely. This action clears the Inbound Table, Active Data Table, and Change Log simultaneously. It is the fastest way to reset a provider during development or testing.

Why the others are incorrect:

B. Selective deletion including data of subsequent targets:
This is not a standard automated feature of the aDSO deletion tool. While you can manually coordinate deletions across a flow, the aDSO selective deletion tool itself only acts on the current object.

E. Deletion of all data from active table only:
While you can selectively delete data that happens to be in the active table, "Deletion of all data from active table only" is not offered as a standalone "one-click" standard option that ignores the other tables (Inbound/Change Log). Standard "Delete All" operations target the entire object structure to maintain consistency.

What are benefits of using an InfoSource in a data flow? Note: There are 2 correct answers to this question.


A. Splitting a complex transformation into simple parts without storing intermediate data


B. Providing the delta extraction information of the source data


C. Realizing direct access to source data without storing them


D. Enabling a data transfer process (DTP) to execute multiple sequential transformations





A.
  Splitting a complex transformation into simple parts without storing intermediate data

D.
  Enabling a data transfer process (DTP) to execute multiple sequential transformations

Explanation:

A. Splitting a complex transformation into simple parts without storing intermediate data
✔ Correct. An InfoSource allows you to break down complex transformations into smaller, manageable steps. This modular approach makes it easier to design, test, and maintain transformations without needing to persist intermediate results in separate objects.

B. Providing the delta extraction information of the source data
❌ Incorrect. Delta extraction logic is handled at the DataSource level, not the InfoSource. The InfoSource is more about structuring transformations, not managing extraction modes.

C. Realizing direct access to source data without storing them
❌ Incorrect. Direct access to source data is achieved via Open ODS Views or DataSources, not InfoSources. InfoSources are transformation-layer objects, not extraction-layer objects.

D. Enabling a data transfer process (DTP) to execute multiple sequential transformations
✔ Correct. InfoSources act as a virtual layer between DataSources and targets (like ADSOs or InfoObjects). They allow a single DTP to chain multiple transformations together, which is especially useful when you want to reuse logic across different targets.
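The chaining behavior can be pictured as two transformation functions executed back to back by one DTP, with the InfoSource acting only as an in-memory intermediate structure. A rough sketch (function names and fields are hypothetical, not SAP APIs):

```python
def transformation_1(record):
    # Step 1: technical harmonization, e.g. normalize the currency key.
    record = dict(record)
    record["currency"] = record["currency"].upper()
    return record

def transformation_2(record):
    # Step 2: business logic, e.g. derive a region from the country.
    record = dict(record)
    record["region"] = "EMEA" if record["country"] in ("DE", "FR") else "OTHER"
    return record

def run_dtp(source_records):
    """One DTP executing two sequential transformations; the intermediate
    result (the 'InfoSource' structure) is never persisted."""
    return [transformation_2(transformation_1(r)) for r in source_records]

result = run_dtp([{"country": "DE", "currency": "eur"}])
print(result)
```

Splitting the logic this way keeps each transformation simple while the combined flow still runs in a single DTP execution.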

Reference:
SAP official documentation on BW/4HANA InfoSources highlights their role in structuring transformations and enabling sequential processing rather than extraction or direct access. You can find more details in the SAP Help Portal – BW/4HANA InfoSource (help.sap.com).

Which are purposes of the Open Operational Data Store layer in the layered scalable architecture (LSA++) of SAP BW/4HANA? Note: There are 2 correct answers to this question.


A. Harmonization of data from several source systems


B. Transformations of data based on business logic


C. Initial staging of source system data


D. Real-time reporting on source system data without staging





C.
  Initial staging of source system data

D.
  Real-time reporting on source system data without staging

Explanation:

The ODS layer acts as a bridge between source systems and the EDW Core. It allows initial staging of detailed transactional data and supports real-time reporting on source system data without the need for consolidation, providing operational insights.

Why the other options are incorrect :

A. Harmonization of data from several source systems: Harmonization occurs in the EDW Core layer, where data from multiple sources is cleansed and consolidated, not in the ODS.

B. Transformations of data based on business logic: Complex transformations and business logic are typically applied in the EDW Core layer or in transformations between ODS and EDW, not in the ODS itself.

References :
SAP Help Portal: “SAP BW/4HANA – Layered Scalable Architecture (LSA++)”, section on ODS: details initial staging and operational reporting.
SAP Press, “SAP BW/4HANA: An Introduction”, Chapter 3: describes ODS as the operational layer for staging and near-real-time reporting.

What are the possible ways to fill a pre-calculated value set (bucket)? Note: There are 3 correct answers to this question.


A. By using a BW query (update value set by query)


B. By accessing an SAP HANA HDI Calculation View of data category Dimension


C. By using a transformation data transfer process (DTP)


D. By entering the values manually


E. By referencing a table





B.
  By accessing an SAP HANA HDI Calculation View of data category Dimension

C.
  By using a transformation data transfer process (DTP)

D.
  By entering the values manually

Explanation:

A Pre-calculated Value Set (often called a Bucket) in BW/4HANA is a reusable object for value grouping and classification. It's populated with a fixed set of values that define the "buckets" for subsequent use in transformations or queries.

There are three primary methods to populate it:

B. By accessing an SAP HANA HDI Calculation View of data category Dimension
This is a virtual fill method. The bucket definition (value set) is based directly on a dimension-type SAP HANA calculation view, allowing for real-time consumption of the grouping logic without persisting it in BW.

C. By using a transformation data transfer process (DTP)
This is a data transfer method. You create a transformation with the value set as the target and a source InfoProvider (like an ADSO or InfoObject). A DTP then executes the data flow to populate the value set with data from the source.

D. By entering the values manually
This is the manual maintenance method. In the value set editor, you can manually create, edit, and delete the individual value intervals (buckets) directly.
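A manually maintained value set can be pictured as a list of labeled intervals that classify incoming values. This sketch uses hypothetical age-group buckets (names and boundaries are illustrative):

```python
# Hypothetical pre-calculated value set: age-group buckets maintained manually.
# Each bucket is (low, high, label); intervals are inclusive on both ends.
AGE_BUCKETS = [
    (0, 17, "MINOR"),
    (18, 64, "ADULT"),
    (65, 120, "SENIOR"),
]

def assign_bucket(value, buckets):
    """Return the label of the bucket whose interval contains the value."""
    for low, high, label in buckets:
        if low <= value <= high:
            return label
    return None  # value falls outside all maintained intervals

print(assign_bucket(30, AGE_BUCKETS))  # ADULT
```

Whether the intervals come from manual entry, a transformation/DTP, or a HANA view, the lookup at usage time works the same way.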

Why the Other Options Are Incorrect:

A. By using a BW query (update value set by query)
– There is no standard function to populate a pre-calculated value set directly via a BW query. Queries are for reporting, not for loading or maintaining master data objects like value sets.

E. By referencing a table
– While you can load data from a database table via a transformation, the option "by referencing a table" is too vague and not a direct, standalone method. The correct path is to use a transformation/DTP where the table would be the source (e.g., via an Open ODS View). It is not a distinct, direct fill method like the three correct ones listed.

Reference:
Pre-calculated value sets are a master data-like object used for fixed value groupings (e.g., age groups, revenue categories).

Which SAP BW/4HANA objects can be used as sources of a data transfer process (DTP)? Note: There are 2 correct answers to this question.


A. DataStore Object (advanced)


B. Open ODS view


C. InfoSource


D. CompositeProvider





A.
  DataStore Object (advanced)

C.
  InfoSource

Explanation:

A. DataStore Object (advanced)
✔ Correct. An ADSO (Advanced DataStore Object) is a central persistence layer in BW/4HANA. It can serve as both a source and a target in a DTP. When used as a source, the DTP reads data from the ADSO to move it downstream (e.g., into another ADSO, InfoObject, or CompositeProvider).

B. Open ODS view
❌ Incorrect. Open ODS Views are designed for virtual access to external data sources. They are not valid sources for a DTP because DTP requires persisted BW objects. Instead, Open ODS Views are consumed directly in queries or CompositeProviders.

C. InfoSource
✔ Correct. An InfoSource is a logical layer that allows you to connect multiple DataSources to multiple targets. It can be used as a source in a DTP, enabling you to execute transformations and route data flexibly.

D. CompositeProvider
❌ Incorrect. A CompositeProvider is a virtual modeling object used for reporting and combining data from ADSOs, InfoObjects, or Open ODS Views. It is not a source for a DTP because DTP is about data movement/persistence, not reporting.

Reference:
SAP BW/4HANA documentation confirms that DTP sources can be ADSOs, InfoSources, and other persistent BW objects, but not virtual reporting objects like CompositeProviders or Open ODS Views. See SAP Help Portal: Data Transfer Process (DTP) in BW/4HANA (help.sap.com).

You want to create an HDI Calculation View (data category Dimension) and integrate it into an HDI Calculation View (data category Cube with Star Join) of the same HDI container. What is the first required step you need to take?


A. Create and build the HDI Calculation View (data category Dimension).


B. Create and build the HDI Calculation View (data category Cube with Star Join).


C. Create a synonym for the HDI Calculation View (data category Cube with Star Join).


D. Create a synonym for the HDI Calculation View (data category Dimension).





A.
  Create and build the HDI Calculation View (data category Dimension).

Explanation:

Before a Cube (Star Join) Calculation View can reference a Dimension Calculation View, the Dimension view must exist and be activated. Only after the Dimension is built can it be included as a star join in the Cube view.
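The dependency rule can be sketched as a build-order problem: every view must be built after the views it references. The view names below are hypothetical:

```python
# Hypothetical build dependencies inside one HDI container:
# the Cube (Star Join) view references the Dimension view.
dependencies = {
    "CV_SALES_CUBE": ["CV_PRODUCT_DIM"],  # cube depends on the dimension
    "CV_PRODUCT_DIM": [],                 # dimension has no dependencies
}

def build_order(deps):
    """Return a build sequence in which every view is built
    only after all of its dependencies."""
    ordered, seen = [], set()

    def visit(view):
        if view in seen:
            return
        seen.add(view)
        for dep in deps[view]:
            visit(dep)      # build dependencies first
        ordered.append(view)

    for view in deps:
        visit(view)
    return ordered

print(build_order(dependencies))  # dimension first, then cube
```

Attempting the reverse order corresponds to option B: the cube build would fail because its referenced dimension does not exist yet.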

Why the other options are incorrect

B. Create and build the HDI Calculation View (Cube with Star Join): The Cube view depends on the Dimension view; creating it first would fail due to missing dependencies.

C. Create a synonym for the Cube with Star Join: Synonyms are only required for cross-container or external access; both views are in the same HDI container, so no synonym is needed first.

D. Create a synonym for the Dimension: Synonyms are only required for access from outside the container; within the same HDI container, the Cube can directly reference the Dimension view.

References:

SAP Help Portal: “SAP HANA Modeling – Calculation Views” – explains that Dimension views must be activated before being used in Star Join Cube views.
SAP Notes: HDI Container modeling best practices: Dimension views provide reusable structures for Cube/Fact views within the same container.

You create a DataStore object (advanced) using the "Data Mart DataStore Object" modeling property. Which behaviors are specific to this modeling property? Note: There are 2 correct answers to this question.


A. The records are treated as if all characteristics are in the key.


B. Reporting is done based on a union of the inbound and active tables.


C. Query results are shown only when data has been activated.


D. The change log table will be filled only after data activation.





A.
  The records are treated as if all characteristics are in the key.

B.
  Reporting is done based on a union of the inbound and active tables.

Explanation:

The "Data Mart DataStore Object" modeling property in an Advanced DSO (ADSO) is a specialized setting designed for staging and intermediate data storage within a data flow, particularly where you need to report on the data before activation.

Here are the two key behaviors this property enables:

A. The records are treated as if all characteristics are in the key.
This means no aggregation or consolidation happens during activation. Every record in the inbound table is treated as unique. This is crucial for data mart scenarios where you need to preserve every detail from the source for interim reporting or further processing.

B. Reporting is done based on a union of the inbound and active tables.
This is the defining feature of a Data Mart ADSO. A query on a CompositeProvider that includes this ADSO will show a real-time union of:
New data still sitting in the inbound table (not yet activated).
Historical data already in the active (reporting) table.
This provides near real-time reporting capabilities on the data flow.
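The union behavior can be pictured as a UNION ALL over the two tables. This sketch uses SQLite with illustrative table names, not the actual BW/4HANA table layout:

```python
import sqlite3

# Stand-ins for the inbound and active tables of a Data Mart aDSO
# (table and field names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inbound (material TEXT, qty INTEGER)")
conn.execute("CREATE TABLE active (material TEXT, qty INTEGER)")
conn.execute("INSERT INTO inbound VALUES ('M2', 5)")   # loaded, not yet activated
conn.execute("INSERT INTO active VALUES ('M1', 10)")   # already activated

# A query on a Data Mart aDSO sees the union of both tables,
# so the not-yet-activated record is already visible to reporting.
rows = conn.execute(
    "SELECT material, qty FROM active "
    "UNION ALL SELECT material, qty FROM inbound"
).fetchall()
print(sorted(rows))
```

A standard aDSO without this property would read only the active table, which is why option C describes the opposite behavior.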

Why the Other Options Are Incorrect:

C. Query results are shown only when data has been activated.
This is false and actually describes the behavior of a standard ADSO without the Data Mart property. The whole purpose of the Data Mart property is to allow reporting before activation via the union behavior.

D. The change log table will be filled only after data activation.
This is false. The change log table is filled during the activation request, not after. It records the delta (changes) that are moved from the inbound table to the active table. This behavior is consistent for all ADSO types where the change log is enabled and is not specific to the Data Mart property.

Reference:

A Data Mart ADSO acts as both a persistent staging area and a reportable object. It bridges the gap between ETL processing and business reporting.

Which features of an SAP BW/4HANA InfoObject are intended to reduce physical data storage space? Note: There are 2 correct answers to this question.


A. Reference characteristic


B. Transitive attribute


C. Compounding characteristic


D. Enhanced master data update





A.
  Reference characteristic

B.
  Transitive attribute

Explanation:

A. Reference characteristic
A Reference Characteristic (e.g., using 0COSTCENTER as a template for a custom characteristic ZRESP_CC) does not have its own master data or text tables. Instead, it points to the tables of the referenced InfoObject.

Storage Benefit: Since it reuses the base InfoObject's master data, texts, and hierarchies, no additional physical storage is needed for these records.

Use Case:
Ideal when you have different business roles for the same data (like "Sold-to Party" and "Ship-to Party" both referencing "Customer").

B. Transitive attribute
A Transitive Attribute allows you to access an attribute of an attribute (e.g., Employee -> Department -> Manager) without physically storing the "Manager" directly on the "Employee" record.

Storage Benefit: It acts as a "virtual join" at runtime. You do not need to load and store the manager's ID in the Employee’s master data table; the system simply navigates from Employee to Department and then reads the Department's attributes to find the Manager.
SAP BW/4HANA Feature: This is part of the "LSA++" philosophy of reducing redundancy.
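The "virtual join" idea can be sketched with two lookup tables resolved at read time; the identifiers and attribute names below are hypothetical:

```python
# Master data as simple lookup tables (contents are illustrative).
employee_attrs = {"E100": {"department": "D10"}}   # Employee -> Department
department_attrs = {"D10": {"manager": "M7"}}      # Department -> Manager

def manager_of(employee_id):
    """Resolve the transitive attribute Employee -> Department -> Manager
    at read time; the manager is never stored on the employee record."""
    department = employee_attrs[employee_id]["department"]
    return department_attrs[department]["manager"]

print(manager_of("E100"))  # M7
```

Because the manager is derived by navigation rather than stored redundantly on every employee record, no additional space is consumed in the employee master data table.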

Why the others are incorrect:

C. Compounding characteristic:
Compounding actually increases the key length and complexity of the physical tables. For example, compounding "Cost Center" to "Controlling Area" ensures uniqueness, but it forces the system to store both keys in every record where that Cost Center is used. It is a logical requirement for uniqueness, not a storage-saving feature.

D. Enhanced master data update:
This is a performance and management feature introduced in later versions of BW/4HANA (supporting parallel processing, request-based loading, and rollbacks for master data). While it makes data management more efficient, its primary goal is speed and reliability of the update process, not the reduction of physical disk space.

Which objects' values can be affected by the key date in a BW query? Note: There are 3 correct answers to this question.


A. Display attributes


B. Basic key figures


C. Time characteristics


D. Hierarchies


E. Navigation attributes





A.
  Display attributes

D.
  Hierarchies

E.
  Navigation attributes

Explanation:

A. Display attributes
✔ Correct. If an attribute is time-dependent (e.g., an employee’s department assignment changes over time), the key date determines which value is shown in the query.

B. Basic key figures
❌ Incorrect. Key figures (like sales amount, revenue, quantity) are not directly affected by the key date. They are transactional values and are aggregated based on filters, not validity periods.

C. Time characteristics
❌ Incorrect. Time characteristics (like calendar year, fiscal period) are structural elements. They are not influenced by the key date; instead, they are used to slice or filter data.

D. Hierarchies
✔ Correct. Time-dependent hierarchies (e.g., organizational structures that change over time) are evaluated based on the key date. The query shows the hierarchy valid at that date.

E. Navigation attributes
✔ Correct. Like display attributes, time-dependent navigation attributes (e.g., region of a customer valid at a certain date) are controlled by the key date.
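Time-dependent attribute resolution can be sketched as a lookup over validity intervals: the key date selects the value whose interval contains it. The data below is illustrative (a customer's region changing mid-2023):

```python
from datetime import date

# Time-dependent attribute values with validity intervals (illustrative data).
region_intervals = [
    (date(2020, 1, 1), date(2023, 6, 30), "NORTH"),
    (date(2023, 7, 1), date(9999, 12, 31), "SOUTH"),
]

def attribute_at(key_date, intervals):
    """Return the attribute value whose validity interval contains the key date."""
    for valid_from, valid_to, value in intervals:
        if valid_from <= key_date <= valid_to:
            return value
    return None

print(attribute_at(date(2023, 1, 15), region_intervals))  # NORTH
```

Time-dependent hierarchies follow the same principle: the key date picks the hierarchy version that was valid at that point in time.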

Reference:
SAP BW/4HANA Query Designer documentation explains that the key date affects time-dependent attributes and hierarchies, ensuring queries reflect the correct values valid at that point in time. See SAP Help Portal: Key Date in Queries (help.sap.com).

Why is the start process a special type of process in a process chain? Note: There are 2 correct answers to this question.


A. Only one start process is allowed for each process chain.


B. It can be a successor of another process.


C. It is the only process that can be scheduled without a predecessor.


D. It can be left out when the Process Chain is embedded in a meta chain.





A.
  Only one start process is allowed for each process chain.

C.
  It is the only process that can be scheduled without a predecessor.

Explanation:
The start process initiates the execution of a process chain. Each process chain can have only one start process, and it is the only process that can be scheduled independently, without requiring a predecessor process to trigger it.
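The two rules can be sketched as a validation over a chain modeled as a predecessor map; the process names below are hypothetical:

```python
# A process chain as a mapping of each process to its predecessors
# (process names are illustrative).
chain = {
    "START": [],                   # the start process has no predecessor
    "LOAD_DATA": ["START"],
    "ACTIVATE": ["LOAD_DATA"],
}

def validate_chain(processes):
    """Check the start-process rule: exactly one process in the chain
    has no predecessor, and that process is the chain's entry point."""
    starts = [name for name, preds in processes.items() if not preds]
    return len(starts) == 1

print(validate_chain(chain))  # True
```

A chain with zero or with two predecessor-less processes would fail this check, which corresponds to "only one start process is allowed for each process chain."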

Why the other options are incorrect:

B. It can be a successor of another process: This is incorrect because the start process is always the first process and cannot have predecessors.

D. It can be left out when the process chain is embedded in a meta chain: Incorrect because even in meta chains, the start process is required to trigger the embedded chain; it cannot be omitted.

References
SAP Help Portal: “SAP BW/4HANA Process Chains – Overview and Best Practices” – describes the start process as mandatory, unique, and schedulable without predecessors.
SAP Press, “Data Warehousing with SAP BW/4HANA”, Chapter 8: highlights the special role of the start process in initiating process chains.

