E_S4HCON2023 Practice Test Questions

79 Questions


You are executing a standard SUM procedure to update an SAP system using main configuration mode "Standard". When will the shadow system be stopped for the last time?

Please choose the correct answer.


A. Before the downtime starts


B. At the end of the downtime


C. As part of the cleanup process


D. Early in the downtime





A.
  Before the downtime starts

Explanation:

In the Standard mode of SUM, the process is explicitly designed to separate work into downtime-minimized (PREPARE) and downtime (EXECUTE) phases. The shadow system is a technical clone used exclusively during the PREPARE phase to perform repository conversion tasks while the productive system remains operational.

The shadow system is stopped immediately after completing the PREPARE phase and before the productive system shutdown begins. This is a defined transition point: once the shadow system's work is validated, it is decommissioned, and only then does the actual business downtime start with the production system stop.

Why other options are incorrect:

B. At the end of the downtime:
Incorrect because the shadow system is dismantled before downtime begins and plays no role during EXECUTE phase activities.

C. As part of the cleanup process:
Incorrect because cleanup occurs post-update, after successful restart. The shadow system's resources are released much earlier.

D. Early in the downtime:
Incorrect because downtime is defined as starting when the productive system stops. The shadow system must already be stopped before this occurs; they do not coexist during downtime.

Reference:
This follows SAP's standard SUM procedure as documented in SAP Help Portal: "Phases of the SUM Procedure" and SAP Note 2133366 - "SUM guide for SAP S/4HANA." The architecture ensures the shadow system executes all possible tasks during uptime, and its termination triggers the controlled handover to the downtime phase, minimizing business disruption.

You are performing an upgrade of an SAP ECC development system with SUM. What is the earliest point when the upgrade is considered complete and you can allow all dialog users to log on to the system?

Please choose the correct answer.


A. After the SUM roadmap step "Execution" is finished and before "Postprocessing"


B. During the SUM roadmap step "Postprocessing", after the post procedure steps are complete


C. During the SUM roadmap step "Postprocessing", in parallel with post procedure steps


D. After the SUM roadmap step "Postprocessing" and post procedure steps are finished





D.
  After the SUM roadmap step "Postprocessing" and post procedure steps are finished

Explanation:

The SUM (Software Update Manager) upgrade process follows a strict roadmap. The Postprocessing phase includes critical finalization tasks that must be completed before the system is considered fully stable for all production-like activities, including general dialog user access.

Only after Postprocessing is entirely finished, including essential steps such as post-upgrade batch jobs, modification adjustment (SPAU), and consistency verification, is the system declared ready for unrestricted use. Allowing users in earlier risks exposing them to an incomplete or unstable system state.

Why other options are incorrect:

A. After "Execution" and before "Postprocessing":
Incorrect because the Execution phase ends with a technical restart, but crucial post-upgrade activities (SPAU adjustments, upgrade-specific jobs) are still pending. The system is not functionally complete.

B. During "Postprocessing", after post procedure steps are complete:
Close, but option D is more precise: it requires that both the Postprocessing roadmap step and the post procedure steps are finished. The standard SUM documentation defines completion at the end of this phase, not at a point within it.

C. During "Postprocessing", in parallel with post procedure steps:
Incorrect and unsafe. Postprocessing steps often require system resources and exclusive access; allowing users during this can cause conflicts, errors, or inconsistent data.

Reference:
SAP Help Portal: "Software Update Manager - Procedure" and SAP Note 2133366 (SUM Guide) state that system handover to productive use occurs after successful completion of the Postprocessing phase. This is when the upgrade checklist is fully executed, and the system is in a consistent, supported state for all users.

The SUM procedure stops with an error during uptime. How can you identify the current phase, the one in which SUM encountered an error?

There are 2 correct answers to this question.


A. Check the most recent entries shown in file sapevt.trc


B. Check the system log (transaction SM21) of the SAP system


C. Check the messages shown by the SUM UI


D. Check the most recent entries in SAPup.log





C.
  Check the messages shown by the SUM UI

D.
  Check the most recent entries in SAPup.log

Explanation:

When SUM stops with an error, the primary diagnostic sources are its own logs and interface, which are specifically designed to track its phase-specific progress and failures.

C. SUM UI:
The SUM graphical interface (or console) displays the current roadmap phase and step at the top of its screen. When an error occurs, it also shows a detailed error message, often with a direct link to the relevant log file and the phase/action ID where the failure happened. This is the most immediate way to identify the phase.

D. SAPup.log:
This is the master log file for the entire SUM procedure. It chronologically records every action, including phase transitions (e.g., "START PHASE PREPARE_COMMON_START"). The most recent entries at the bottom of this log will clearly show the last successfully completed step and the phase in which the error occurred. It is the authoritative technical record.
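As a sketch of how an administrator might scan SAPup.log for the most recent phase marker, here is a minimal Python helper. The "START PHASE <name>" wording and the sample lines are assumptions modeled on typical SUM log output, not real log content:

```python
# Sketch: find the phase in which SUM stopped by scanning SAPup.log lines.
# The "START PHASE <name>" wording below is an assumption based on typical
# SUM logs; point this at your SUM work directory's SAPup.log in practice.
def last_phase(log_lines):
    """Return the most recently started phase named in the log, or None."""
    phase = None
    for line in log_lines:
        if "PHASE" in line:
            # e.g. "# START PHASE MAIN_SHDRUN/ACT_UPG" -> take the last token
            phase = line.strip().split()[-1]
    return phase

# Illustrative (invented) log tail
sample = [
    "# START PHASE PREPARE_COMMON_START",
    "# ...actions...",
    "# START PHASE MAIN_SHDRUN/ACT_UPG",
    "ERROR: activation failed",
]
print(last_phase(sample))  # -> MAIN_SHDRUN/ACT_UPG
```

The last phase marker before the error entry identifies where SUM stopped, which matches what the SUM UI reports.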

Why the other options are incorrect:

A. File sapevt.trc:
This trace file logs scheduler events (like job starts/stops) but is not the primary source for SUM's procedural phase information. It lacks the detailed roadmap context needed to pinpoint the specific SUM phase.

B. Transaction SM21 (System Log):
The SAP system log records events from the kernel and application server, such as process starts, shutdowns, and internal system errors. It is useful for diagnosing infrastructure issues (e.g., enqueue server crashes) but does not track SUM's roadmap phases or procedural steps. An error causing SUM to stop may appear here, but the log won't tell you if it happened during the "PREPARE_ABAP_DOWN" or "EXECUTE_IN_PLACE_UPG" phase.

Reference:
SAP Help Portal documentation for SUM (Software Update Manager) and SAP Note 2133366 (SUM Guide) direct administrators to use SUM's own logs for troubleshooting. Specifically, the SAPup.log in the SUM work directory (/usr/sap/SUM/abap/log/) is the central log, and the UI provides a structured, phase-oriented view of the procedure's status.

You are preparing for a standard SAP S/4HANA conversion from SAP ECC on Windows with SAP MaxDB to SAP S/4HANA. You are performing the corresponding maintenance transaction with the Maintenance Planner. Which kernels do you need to select for the SAP S/4HANA conversion?

There are 2 correct answers to this question.


A. Kernel for target release, Windows, SAP MaxDB


B. Kernel for target release, Linux, SAP HANA


C. Kernel for target release, Windows, SAP HANA


D. Kernel for source release, Windows, SAP MaxDB





A.
  Kernel for target release, Windows, SAP MaxDB

C.
  Kernel for target release, Windows, SAP HANA

Explanation:

During preparation in Maintenance Planner for a standard S/4HANA conversion using SUM/DMO (Database Migration Option), you must select kernels for the target release that correspond to the two distinct database stages of the process:

A. Kernel for target release, Windows, SAP MaxDB:
This kernel is required for the initial phase of the conversion. The system starts on the existing OS (Windows) and the existing database (MaxDB) but is already running the new target S/4HANA release's kernel. SUM uses this kernel to perform the initial upgrade steps on the source database platform.

C. Kernel for target release, Windows, SAP HANA:
This kernel is required for the database migration phase. The OS remains Windows, but the database changes to SAP HANA. SUM switches to this HANA-compatible kernel to perform the actual database migration and subsequent steps on the new HANA database.

The dual selection is necessary because SUM/DMO performs a combined upgrade and database migration in a single procedure, requiring kernels for both the source and target database technologies, but always for the target SAP release.

Why the other options are incorrect:

B. Kernel for target release, Linux, SAP HANA:
Incorrect. The OS is not changing from Windows to Linux in this scenario. The kernel must match the actual OS platform (Windows) throughout the entire process.

D. Kernel for source release, Windows, SAP MaxDB:
Incorrect. For an S/4HANA conversion, you are upgrading the SAP kernel to the target S/4HANA release. The source release kernel is not used by SUM for the conversion procedure itself.

Reference:
SAP Help Portal: "Maintenance Planner for SAP S/4HANA Conversion" and SAP Note 2239665 (DMO of SUM: Frequently Asked Questions). The process documentation specifies that for a DMO-based conversion, you must select the target release kernels for both the source and target database in the Maintenance Planner stack file. The OS selection remains consistent if no OS change is planned.

You are planning to use DMO of SUM to perform an in-place migration to SAP HANA. What do you need to consider? Note: there are 2 correct answers to this question.


A. The source system is non-Unicode and the target database is a scale-out system.


B. The target database size increases temporarily because of the Shadow Repository.


C. SAP HANA Landscape Reorganization requires a manual step to edit a file.


D. Network capacity between exporting and importing R3load processes must be sufficient.


E. Unicode conversion is part of DMO only in case of target version AS ABAP 7.40 or 7.31.





B.
  The target database size increases temporarily because of the Shadow Repository.

D.
  Network capacity between exporting and importing R3load processes must be sufficient.

Explanation:

B. Correct. In DMO (Database Migration Option), a Shadow Repository is created in the target HANA database during the PREPARE phase. This temporarily increases the storage requirement on the HANA system, as it holds a copy of the repository objects while the source system remains active. Sufficient temporary space must be planned for.

D. Correct. DMO uses parallel R3load processes for data export (from source DB) and import (into HANA). These processes communicate via TCP/IP sockets. The network throughput and latency between these processes (even if they run on the same host) are critical performance factors and must be considered during planning. A bottleneck here significantly impacts migration duration.

Why the other options are incorrect:

A. Incorrect. DMO does not support migration from a non-Unicode source system to SAP HANA. A separate, prior Unicode conversion (SUM with UCC) is mandatory. Also, HANA scale-out is supported by DMO and is not a restriction.

C. Incorrect. This describes an old constraint for SAP HANA Landscape Transformation (HLT), which is a different tool. DMO itself does not require manual file editing for reorganization tasks; it manages the migration internally.

E. Incorrect and outdated. DMO always includes Unicode conversion (if needed) as an integrated step for the application layer, regardless of the target ABAP version. The prerequisite is that the database must already be Unicode-compliant (which MaxDB, Oracle, etc., are). The statement refers to obsolete limitations of very early DMO versions.

Reference:
SAP Help Portal: "Database Migration Option (DMO) of SUM" and SAP Note 2239665 (DMO of SUM: Frequently Asked Questions) confirm the temporary space requirement for the shadow repository and the network dependency of R3load processes.

SAP Note 1793345 (DMO: Restrictions) explicitly states that a non-Unicode source system requires a separate Unicode conversion before DMO.

SUM was registered at the SAP Host Agent. Based on which information does the SAP Host Agent determine the path to the SUM directory?


A. The path was stored in file host_profile during registration


B. The path was stored to the environment of user <sid>adm during registration


C. The path is taken from the URL entered in the browser to start SUM


D. The path was stored in file sumabap.conf during registration





D.
  The path was stored in file sumabap.conf during registration

Explanation:

When SUM registers itself with the SAP Host Agent, it creates a configuration file called sumabap.conf in the SAP Host Agent's profile directory (typically /usr/sap/hostctrl/work/). This file contains the absolute path to the SUM work directory.

The SAP Host Agent reads this configuration file to locate the SUM instance and establish communication for tasks like starting/stopping the SAP system, checking processes, and managing log files during the upgrade or conversion procedure.

Why the other options are incorrect:

A. host_profile:
The host_profile is a generic host agent profile file, but it is not used to store the SUM-specific path. Registration does not modify this base profile.

B. Environment of user <sid>adm:
The environment variables of the SAP system administrator user are not modified globally during registration. SUM uses its own runtime environment and communicates its location explicitly via the configuration file.

C. URL entered in the browser:
The browser URL is irrelevant for the SAP Host Agent. The Host Agent runs as an operating system service and does not interact with the browser or web-based interfaces; it uses configuration files for its data.

Reference:
SAP Help Portal documentation for SUM and SAP Host Agent, specifically sections on registration and configuration. The file sumabap.conf is a standard artifact created during the SUM registration process (step "Register SUM in SAP Host Agent") and is documented as the means by which the Host Agent identifies the location of the running SUM instance.

Which additional configuration options are offered by SUM when selecting "Switch expert mode on" in main configuration option "Standard"?

There are 2 correct answers to this question.


A. Keep archiving on during the whole procedure.


B. Use the Near Zero Downtime Maintenance Technology (NZDM).


C. Reuse a profile for the shadow instance from a previous run.


D. Choose the instance number of the shadow instance.





C.
  Reuse a profile for the shadow instance from a previous run.

D.
  Choose the instance number of the shadow instance.

Explanation:

In the Standard mode, enabling "Switch expert mode on" reveals advanced settings primarily related to the shadow instance management, which is a core component of the downtime-minimized approach.

C. Correct. Expert mode allows you to specify a previously created shadow instance profile (default.pfl) from an earlier SUM run. This can save time if a prior run was aborted, as it reuses the already-configured shadow system settings.

D. Correct. By default, SUM automatically assigns an instance number for the shadow instance. Expert mode allows the administrator to manually select and control the instance number (within the valid range, typically 90-99), which can be important for environment-specific port planning or conflict avoidance.
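The port-planning concern behind option D can be illustrated with a small helper. The 32NN (dispatcher) and 33NN (gateway) numbering is the standard SAP port convention; the helper function itself is hypothetical, not part of SUM:

```python
# Hypothetical helper for port planning when manually choosing a shadow
# instance number NN in expert mode: classic SAP instance ports are derived
# from the two-digit instance number (dispatcher 32NN, gateway 33NN).
def instance_ports(nn):
    nn = f"{nn:02d}"  # instance numbers are always two digits
    return {"dispatcher": int(f"32{nn}"), "gateway": int(f"33{nn}")}

print(instance_ports(97))  # -> {'dispatcher': 3297, 'gateway': 3397}
```

Checking the derived ports against firewall rules and already-running instances is the kind of conflict avoidance that motivates picking the shadow instance number manually.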

Why the other options are incorrect:

A. Keep archiving on during the whole procedure:
This is not an expert mode setting. Archiving control (switch on/off) is part of the standard main configuration options presented to all users in the downtime configuration phase.

B. Use the Near Zero Downtime Maintenance Technology (NZDM):
This is incorrect. NZDM is a separate main configuration mode (an alternative to "Standard" mode), not an expert setting within the Standard mode. You select NZDM at the beginning of the configuration, not by enabling expert mode.

Reference:
SAP Help Portal: "Software Update Manager - Expert Settings" and the SUM configuration guide (SAP Note 2133366) specify that expert mode in Standard configuration primarily offers control over shadow instance parameters, including profile reuse and instance number assignment.

In ICNV, how are database operations synchronized between the source and the target table?


A. Deletes are directly executed in the target table by delete trigger.


B. AA





A.
  Deletes are directly executed in the target table by delete trigger.

Explanation:

During the ICNV process, the system creates a target table with the new structure. To maintain consistency while users are still modifying data in the source table, the database utilizes Database Triggers.

When a user performs a DELETE operation on the source table, the database trigger is immediately activated. This trigger replicates the action by deleting the corresponding record in the target table. This ensures that records intended for removal do not persist in the new table structure after the conversion is finalized.
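The trigger mechanism described above can be sketched generically. This is a conceptual analogue using Python's built-in sqlite3 module, not the actual trigger that ICNV generates:

```python
# Conceptual sketch (not the real ICNV trigger): a delete trigger that
# mirrors deletions from a source table into the target table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_tab (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE target_tab (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO source_tab VALUES (1, 'a'), (2, 'b');
    INSERT INTO target_tab VALUES (1, 'a'), (2, 'b');
    -- On delete in the source, immediately remove the matching target row
    CREATE TRIGGER mirror_delete AFTER DELETE ON source_tab
    BEGIN
        DELETE FROM target_tab WHERE id = OLD.id;
    END;
""")
con.execute("DELETE FROM source_tab WHERE id = 1")
print([r[0] for r in con.execute("SELECT id FROM target_tab")])  # -> [2]
```

The point of the sketch is the timing: the mirrored delete happens synchronously with the source delete, so no "ghost" row can survive in the target between synchronization runs.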

Why Other Options are Incorrect

Updates as Inserts:
A common distractor suggests that updates are only stored in a log table. In ICNV, while some data is moved via background processes (like the REPLI phase), real-time synchronization of deletions must happen via triggers to prevent data corruption.

Manual Synchronization:
The process is not manual. If the synchronization relied on periodic background jobs instead of triggers for deletes, the target table would contain "stale" or "ghost" data that no longer exists in the source.

Log Table Dependency:
While technologies like SLT or SUM's nZDM use logging tables (e.g., IUUC_ or change logs), standard ICNV is characterized by the direct execution of triggers on the target table to mirror the source's state.

References
SAP Training Material ADM328: (SAP S/4HANA Conversion and SAP System Upgrade) – Section on "Incremental Conversion" and "Database Triggers."

What is the main purpose of performing benchmark runs for an SAP S/4HANA conversion?


A. To optimize the migration


B. To optimize the activation


C. To optimize the conversion


D. To optimize the main import





C.
  To optimize the conversion

Explanation:

The primary purpose of a benchmark run is to optimize the conversion duration by identifying the most efficient distribution of resources. During a benchmark run, the SUM tool executes specific phases (typically data migration and conversion phases) with different parallel process settings.

The results allow the administrator to determine the "sweet spot" for parallelization—balancing the number of processes against the hardware's CPU and memory limits. By finding the optimal number of R3load or parallel processes, you can significantly reduce the technical downtime of the actual production cutover.

Why Other Options are Incorrect

A & D. To optimize migration / main import:
While migration (moving data to HANA) and the Main Import (importing new software levels) are parts of the process, "Conversion" is the broader, more accurate term used in the E_S4HCON2023 curriculum to describe the holistic transformation of data structures (e.g., from Finance to Universal Journal).

B. To optimize activation:
Activation usually refers to the DDIC activation phase. While parallelization affects this, benchmarking specifically targets the conversion and migration of application data, which is typically the most time-consuming part of the downtime.

References
SAP Training ADM328 (SAP S/4HANA Conversion and SAP System Upgrade): Section on "Downtime Optimization" and "SUM Benchmark Tool."

You performed a custom code check for an SAP S/4HANA conversion. In which transactions can you review the results?

There are 2 correct answers to this question.


A. SYCM (Simplification Database Content)


B. SAT (Runtime Analysis)


C. ATC (ABAP Test Cockpit)


D. SE80 (Object Navigator)





C.
  ATC (ABAP Test Cockpit)

D.
  SE80 (Object Navigator)

Explanation:

For an SAP S/4HANA conversion, the standard tool for custom code checks is the ABAP Test Cockpit (ATC) with the S/4HANA Readiness Check check variant. The results are centrally managed and reviewed within the ATC framework.

C. ATC (ABAP Test Cockpit):
This is the primary transaction for reviewing custom code check results. It provides a comprehensive worklist, allows filtering by priority/object/check, and facilitates mass processing of findings. The ATC results show S/4HANA-specific simplification items, syntax errors, and compatibility issues.

D. SE80 (Object Navigator):
You can also review results in context within SE80. By navigating to a specific development object (program, class, etc.), you can display its ATC check results directly, which is useful for object-by-object analysis and correction.

Why the other options are incorrect:

A. SYCM (Simplification Database Content):
This transaction is used to browse the SAP Simplification List – the catalog of changes and deletions in S/4HANA. It is a reference tool, not for reviewing custom code check results.

B. SAT (Runtime Analysis):
This is a performance profiling tool used for performance optimization, not for static code checks or S/4HANA compatibility analysis.

Reference:
SAP Help Portal: "Custom Code Migration for SAP S/4HANA" and SAP Note 2183564 (Custom Code Migration Option in SAP S/4HANA) specify that the ATC is the central tool for managing the results of the S/4HANA Readiness Check for custom code. Integration with the development environment (SE80) allows direct navigation from findings to the source code.

You are using DMO of SUM. You defined 40 parallel R3load processes during uptime and 80 parallel R3load processes during downtime. You have chosen table count verification, but not table contents comparison.

Phase EU_CLONE_MIG_DT_RUN is running. In the Charts Control Center, you can see 40 process buckets being executed in parallel.

Why are 40 Process Buckets executed in parallel?


A. These are 40 pairs of R3load processes, so there are 80 R3load processes running.


B. SUM is still running in uptime; the 40 defined R3load processes are considered.


C. Without table contents comparison, only 40 R3load processes are being started.


D. There are 40 R3load processes used for copying and 40 R3load processes for table count verification.





A.
  These are 40 pairs of R3load processes, so there are 80 R3load processes running.

Explanation:

When using the DMO roadmap in SUM, the migration of data involves two distinct R3load actions for every "bucket": an export from the source database and an import into the target (HANA) database.

In the SUM Charts Control Center, the tool visualizes "Process Buckets" rather than individual R3load PIDs (Process IDs).

A single Process Bucket represents a pair of R3load processes (1 Export + 1 Import).
Since you defined 80 parallel R3load processes for the downtime, SUM divides this number by 2 to account for the pairs.
Therefore, 40 buckets running simultaneously equals 40 × 2 = 80 total R3load processes.
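The arithmetic can be expressed as a one-liner; the uptime figure in the second call assumes the same export/import pairing applies during uptime migration, which is an extrapolation from the question's downtime scenario:

```python
# Each bucket pairs one export and one import R3load, so the number of
# parallel buckets is half the R3load count configured for the phase.
def parallel_buckets(configured_r3load_processes):
    return configured_r3load_processes // 2

print(parallel_buckets(80))  # -> 40 buckets (downtime, as in the question)
print(parallel_buckets(40))  # -> 20 buckets (uptime, if the same pairing applies)
```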

Why Other Options are Incorrect

B. SUM is still running in uptime:
The phase name provided, EU_CLONE_MIG_DT_RUN, explicitly contains "DT", which stands for Downtime. If the system were in uptime, the phase would be EU_CLONE_MIG_UT_RUN.

C. Without table contents comparison:
Table content comparison (checksums) adds overhead, but it does not dictate the number of R3load processes started. The process count is governed by your specific parameter settings in the SUM configuration.

D. Copying vs. Verification:
R3load processes are not split 50/50 between copying and verification. Table count verification is a quick check performed at the end of a bucket's migration; it does not occupy half of your defined processes throughout the run.

References
SAP Training ADM329 (SAP S/4HANA Conversion Strategy): Section on "DMO: Procedure Steps and Parallelism," which explains that one R3load pair (export/import) constitutes one migration slot/bucket.

In which part of an upgrade does SUM allow you to generate ABAP loads (SGEN)?

There are 2 correct answers to this question.


A. During downtime


B. During SPDD


C. Post-downtime


D. During uptime





A.
  During downtime

C.
  Post-downtime

Explanation:

SUM allows you to schedule ABAP load generation (SGEN) at two strategic points to balance performance impact and downtime duration:

A. During downtime:
You can configure SUM to run SGEN as part of the downtime execution phase. This ensures all necessary program loads are generated before the system goes live, but it directly increases the duration of business downtime.

C. Post-downtime:
You can configure SUM to defer SGEN to run in the postprocessing phase, after the system has been restarted and is technically available. This minimizes business downtime, but users may experience initial performance degradation as loads are generated on-demand or in the background.

Why the other options are incorrect:

B. During SPDD:
Incorrect. SPDD (Modification Adjustment for the Data Dictionary) is a manual, interactive adjustment step performed on the shadow system during uptime processing, not a point at which SUM schedules mass load generation. SGEN is a separate, system-wide batch process.

D. During uptime:
Incorrect. SGEN cannot be executed during the PREPARE (uptime) phase because the new ABAP programs and kernel from the target release are not yet active. The system is still running on the old release. SGEN must wait until the new release is active, which occurs only after the downtime switch.

Reference:
SAP Help Portal: "Software Update Manager - Configuration" and the SUM guide (SAP Note 2133366) specify that SGEN can be scheduled either during the downtime (EXECUTE) phase or postponed to the postprocessing phase. This is a key configuration decision in the SUM "Configure Downtime" step, allowing administrators to trade off between longer downtime and potential post-upgrade performance lag.
