Learn, Practice, and Improve with SAP C_DBADM_2404 Practice Test Questions

  • 30 Questions
  • Updated on: 13-Jan-2026
  • SAP Certified Associate - Database Administrator - SAP HANA
  • Valid Worldwide
  • 2300+ Prepared
  • 4.9/5.0

Stop guessing and start knowing. This SAP C_DBADM_2404 practice test pinpoints exactly where your knowledge stands. Identify weak areas, validate strengths, and focus your preparation on the topics that truly impact your SAP exam score. Targeted SAP Certified Associate - Database Administrator - SAP HANA practice questions help you walk into the exam confident and fully prepared.


What attributes can you control when creating an SAP HANA Cloud, data lake instance? Note: There are 3 correct answers to this question.

A. Automatic backup creation

B. Availability zone

C. Number of coordinators

D. Compatibility with SAP IQ

E. Compatibility with Apache Hadoop

A.   Automatic backup creation
B.   Availability zone
C.   Number of coordinators

Explanation:

When you create an SAP HANA Cloud, data lake instance, several key operational and infrastructure attributes can be selected to define how the instance behaves, where it runs, and how resilient it is:

1. Automatic backup creation:
This option allows you to enable automatic backups for the data lake instance at creation time. When enabled, the system will regularly generate backups without manual intervention, ensuring data durability and recovery readiness in case of failures. This attribute is part of the managed service setup and is configurable in the provisioning wizard of the SAP HANA Cloud, data lake instance.

2. Availability zone:
You can choose the availability zone in which your data lake instance will reside. This is important for achieving regional resilience, lower latency for users, and compliance with data locality requirements. Selecting the appropriate zone helps align with business, performance, and regulatory needs of your organization.

3. Number of coordinators:
The number of coordinator nodes is a compute configuration option that you can set when creating the instance. Coordinator nodes play a central role in query management, transaction coordination, and metadata operations in the data lake’s relational engine. Adjusting the number of coordinators can help scale processing capacity and handle different workload demands.
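In practice you select these attributes in the SAP HANA Cloud Central provisioning wizard. Purely as an illustration of the three configurable attributes, here is a minimal Python sketch that assembles a provisioning-style payload; the key names (backupEnabled, availabilityZone, numCoordinators) and the payload shape are hypothetical placeholders, not the actual SAP provisioning API.

```python
import json

# Hypothetical attribute names -- the real ones come from the SAP HANA
# Cloud Central wizard / provisioning API, not from this sketch.
data_lake_request = {
    "backupEnabled": True,                 # automatic backup creation
    "availabilityZone": "eu-central-1a",   # where the instance will run
    "numCoordinators": 2,                  # coordinator node count
}

def build_provisioning_payload(attrs: dict) -> str:
    """Serialize the chosen instance attributes into a request body."""
    return json.dumps({"data": attrs}, indent=2)

if __name__ == "__main__":
    print(build_provisioning_payload(data_lake_request))
```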

❌ Why the Other Options Are Incorrect

D. Compatibility with SAP IQ:
SAP IQ is an independent SAP database product for analytics workloads; the data lake relational engine is in fact built on SAP IQ technology. Because that technology is inherent to the service rather than optional, "compatibility with SAP IQ" is not a setting you can toggle during instance creation. It exists outside the scope of SAP HANA Cloud, data lake service configuration.

E. Compatibility with Apache Hadoop:
Support for Hadoop is about external integration, not a native provisioning attribute of the HANA Cloud, data lake engine. You cannot choose “Hadoop compatibility” when you create a data lake instance. Hadoop integration depends on separate connectors and architecture decisions outside the provisioning wizard.

References
SAP HANA Cloud provisioning and data lake setup guidelines on the SAP Help Portal explain available configuration options when creating a data lake instance.

Which data stores are activated by default when you provision an SAP HANA Cloud, SAP HANA database? Note: There are 2 correct answers to this question.

A. Native storage extension

B. In-memory

C. Data lake

D. Hadoop distributed file system

B.   In-memory
C.   Data lake

Explanation:

B. In-memory:
This is the primary and default data store for any SAP HANA database (Cloud or on-premise). When you provision an SAP HANA Cloud database, it comes with a defined amount of memory (for example, 30 GB or 120 GB), and all your row and column tables are stored and processed in this in-memory store by default, delivering the high-speed analytics and transaction processing HANA is known for.

C. Data lake:
This is a key differentiator for SAP HANA Cloud. The SAP HANA Cloud, data lake capability (powered by SAP IQ technology) is automatically provisioned and integrated with your HANA database instance. It provides a cost-effective, high-capacity relational store for warm and cold data. You can seamlessly access data in the data lake from your in-memory HANA database using virtual tables, creating a tiered storage architecture out of the box.
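As a minimal sketch of that integration, the managed data lake is reached from the HANA database through a predefined remote source. The example below uses SAP's hdbcli Python driver with placeholder connection values, and assumes the documented SYSRDL#CG remote-execution procedure and SYSRDL#CG_SOURCE remote source names; verify both against your instance.

```python
from hdbcli import dbapi  # SAP HANA Python client

# Placeholder connection details -- replace with your instance's values.
conn = dbapi.connect(address="<hana-host>", port=443,
                     user="DBADMIN", password="<password>", encrypt="true")
cur = conn.cursor()

# Create a table inside the managed data lake through the predefined
# remote-execution procedure.
cur.execute("""
    CALL SYSRDL#CG.REMOTE_EXECUTE('
        CREATE TABLE SALES_HISTORY (ID INT, AMOUNT DECIMAL(15,2))
    ')
""")

# Expose the data lake table in HANA as a virtual table so it can be
# queried alongside in-memory tables.
cur.execute("""
    CREATE VIRTUAL TABLE V_SALES_HISTORY
    AT "SYSRDL#CG_SOURCE"."<NULL>"."SYSRDL#CG"."SALES_HISTORY"
""")

cur.close()
conn.close()
```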

Why the Other Options Are Incorrect:

A. Native Storage Extension (NSE):
NSE is not activated by default. It is an optional feature that must be explicitly enabled and configured. NSE allows you to extend the HANA database's storage by moving less frequently accessed column store data from memory to disk (SSD), while keeping it fully managed and accessible by the database engine. You decide when and how to use it.

D. Hadoop Distributed File System (HDFS):
HDFS is not a default or integrated data store of SAP HANA Cloud. While SAP HANA can connect to external Hadoop systems (and other data sources) via smart data access or data federation, this requires explicit configuration and is not part of the default provisioning. The default tiered storage is provided by the integrated data lake, not HDFS.

Reference:

Concept: Tiered Storage in SAP HANA Cloud. The default provisioning gives you a two-tier architecture: the in-memory store for hot data and the integrated data lake for warm and cold data.

From which system views can you export content when using the Performance Monitor app? Note: There are 3 correct answers to this question.

A. SYS.M_WORKLOAD

B. SYS.M_SERVICE_STATISTICS

C. SYS.M_LOAD_HISTORY_HOST

D. SYS.M_SERVICES

E. SYS.M_DATABASE

A.   SYS.M_WORKLOAD
B.   SYS.M_SERVICE_STATISTICS
C.   SYS.M_LOAD_HISTORY_HOST

Explanation:

The Performance Monitor app in SAP HANA is used to monitor system performance, workload distribution, and historical load trends. When exporting performance data, the app relies on specific system views that contain runtime metrics, service statistics, and workload history:

A. SYS.M_WORKLOAD –
This view provides detailed information about queries and transactions executed in the system, including CPU, memory, and execution times. It allows monitoring the current and historical workload per service and user. Data from this view can be exported to analyze query performance and bottlenecks.

B. SYS.M_SERVICE_STATISTICS –
This view contains statistical data for HANA services, including memory usage, CPU load, and I/O statistics. It is essential for performance monitoring and can be exported to evaluate resource consumption by different services over time.

C. SYS.M_LOAD_HISTORY_HOST –
This view provides historical load information per host, showing memory and CPU utilization trends. Exporting this data allows administrators to analyze system load patterns, identify peaks, and plan capacity or scaling actions.
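The same data can also be pulled with plain SQL and exported manually. A minimal sketch using the hdbcli Python driver follows (placeholder credentials; column names are taken from the SAP HANA system views reference, so verify them on your release):

```python
import csv
from hdbcli import dbapi  # SAP HANA Python client

# Placeholder connection details -- replace with your instance's values.
conn = dbapi.connect(address="<hana-host>", port=443,
                     user="<monitoring-user>", password="<password>",
                     encrypt="true")
cur = conn.cursor()

# Recent per-host load history: the kind of content the Performance
# Monitor app visualizes and lets you export.
cur.execute("""
    SELECT TIME, HOST, CPU, MEMORY_USED
    FROM SYS.M_LOAD_HISTORY_HOST
    ORDER BY TIME DESC
    LIMIT 100
""")

with open("load_history.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur.fetchall())

cur.close()
conn.close()
```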

❌ Why the other options are incorrect

D. SYS.M_SERVICES
– This view lists the database services with their hosts, ports, and statuses. It describes the service topology rather than providing the performance metrics or workload statistics the Performance Monitor app exports.

E. SYS.M_DATABASE
– This view contains database-level metadata, such as status, version, and configuration. It does not provide workload or service statistics needed for performance monitoring exports.

References
SAP Help Portal: SAP HANA Database – Performance Monitoring
SAP HANA System Views Reference: SYS.M_WORKLOAD, SYS.M_SERVICE_STATISTICS, SYS.M_LOAD_HISTORY_HOST

Which user is assigned by default when you provision a data lake in SAP HANA Cloud?

A. SYSTEM

B. DBADMIN

C. COCKPIT MONITOR

D. HDLADMIN

D.   HDLADMIN

Explanation:

When you provision an SAP HANA Cloud, data lake Relational Engine instance, the system automatically creates the HDLADMIN user. This account is the default administrative user with full privileges to manage schema objects, create other users, and assign roles. It is intended for initial setup and emergency administration. SAP recommends creating additional users with limited privileges for daily operations to reduce risk, as sketched below. The HDLADMIN password expires after 180 days by default, reinforcing security best practices.
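To follow that recommendation, you can create a limited user as HDLADMIN. A minimal sketch, assuming the data lake Relational Engine is reached via its ODBC driver and pyodbc; the driver name, connection string, and object names are placeholders, and the SQL follows the IQ-style syntax of the Relational Engine (check the exact forms in its SQL reference):

```python
import pyodbc  # assumes the data lake Relational Engine ODBC driver is installed

# Placeholder connection string -- driver name and host are illustrative.
conn = pyodbc.connect(
    "DRIVER={<hdl-relational-engine-odbc-driver>};"
    "HOST=<hdl-host>:443;UID=HDLADMIN;PWD=<password>"
)
cur = conn.cursor()

# Create a limited account for daily work instead of using HDLADMIN,
# and grant it only the access it needs.
cur.execute("CREATE USER report_reader IDENTIFIED BY Str0ngPassw0rd")
cur.execute("GRANT SELECT ON SALES_HISTORY TO report_reader")
conn.commit()
conn.close()
```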

Why the other options are incorrect:

A. SYSTEM
→ The SYSTEM user is the default superuser in SAP HANA databases, not in the Data Lake Relational Engine. While SYSTEM has full control in the core HANA environment, it is not automatically provisioned in the data lake context.

B. DBADMIN
→ DBADMIN is a common administrative account in on-premise SAP HANA setups, but it is not created by default in SAP HANA Cloud Data Lake. Instead, HDLADMIN fulfills this role.

C. COCKPIT MONITOR
→ This is not a valid database user. SAP HANA Cockpit provides monitoring and administration tools, but it does not assign a user named COCKPIT MONITOR during provisioning.

References
SAP Help Portal – HDLADMIN (Default User)
SAP Help Portal – SAP HANA Cloud Administration Guide

Which user is automatically created when you add a data lake to an SAP HANA Cloud, SAP HANA database?

A. HDLADMIN

B. SAPSA

C. DBA

D. DBADMIN

A.   HDLADMIN

Explanation:

When you add a data lake to an SAP HANA Cloud database, SAP automatically creates a dedicated user account named HDLADMIN. This user serves as the administrative account for the HANA Data Lake (HDL) and is used to manage the data lake instance, including tasks such as:

Configuring data lake schemas
Managing data ingestion and storage
Controlling access permissions within the data lake

The HDLADMIN user has the necessary privileges to perform administrative operations without requiring direct access to the HANA database’s primary administrative users. This separation ensures security, clear role segregation, and easier management of HANA and data lake components independently.
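As a quick sanity check after adding a data lake, you can connect as HDLADMIN and inspect the account catalog. A minimal sketch under the same assumptions as a typical ODBC/pyodbc connection to the Relational Engine (placeholders throughout); SYS.SYSUSER is the SQL Anywhere-style catalog view the engine inherits, so verify its availability on your instance:

```python
import pyodbc  # assumes the data lake Relational Engine ODBC driver is installed

conn = pyodbc.connect(
    "DRIVER={<hdl-relational-engine-odbc-driver>};"
    "HOST=<hdl-host>:443;UID=HDLADMIN;PWD=<password>"
)
cur = conn.cursor()

# Confirm which account the session runs under (HDLADMIN after provisioning).
cur.execute("SELECT CURRENT USER")
print("Connected as:", cur.fetchone()[0])

# List the user accounts known to the data lake catalog.
cur.execute("SELECT user_name FROM SYS.SYSUSER ORDER BY user_name")
for (name,) in cur.fetchall():
    print(name)

conn.close()
```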

❌ Why the other options are incorrect

B. SAPSA
– This is not a user that SAP HANA Cloud creates. The name follows the SAP<SID> pattern of application schema users seen in on-premise SAP installations, and it plays no role in HANA Cloud data lake administration.

C. DBA
– DBA is a general administrative user concept but is not automatically created when adding a data lake. In HANA Cloud, the administrative roles are separated, and HDLADMIN is used specifically for the data lake.

D. DBADMIN
– This user exists in some SAP HANA Cloud databases as the main database administrator but does not automatically manage the data lake. DBADMIN and HDLADMIN serve different administrative domains: DBADMIN for the database, HDLADMIN for the data lake.

📚 References:
SAP Help Portal: SAP HANA Cloud, data lake – Administer Data Lake
SAP HANA Cloud Administration Guide, section “Adding a Data Lake to a HANA Database”
SAP HANA Cloud Tutorials: “HDLADMIN user creation and privileges”

What tools can you use to view the expensive statement trace file? Note: There are 2 correct answers to this question.

A. SAP HANA Cloud Central

B. SAP BTP cockpit

C. SAP HANA cockpit

D. SAP HANA database explorer

C.   SAP HANA cockpit
D.   SAP HANA database explorer

Explanation:

C. SAP HANA cockpit:
This is the primary, dedicated administration and monitoring tool for SAP HANA databases (both on-premise and Cloud). It has specialized diagnostic applications, including the "Expensive Statements" tile on the main database overview page. From there, you can open traced statements directly, view execution plans, and analyze performance data in a GUI designed for administrators.

D. SAP HANA database explorer:
This browser-based tool, integrated into both SAP HANA Cloud and SAP Business Application Studio, is not just for SQL development. It has a powerful "Diagnosis Files" section in its navigation pane. Here, you can browse the diagnosis files of the HANA database, locate the expensive statement trace files (in the trace directory, with file names containing expensive_statements), and view, analyze, or download them directly.
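Before the trace file exists, the expensive statements trace must be enabled; the traced statements then also surface in the M_EXPENSIVE_STATEMENTS monitoring view, which complements browsing the trace file itself. A minimal sketch with the hdbcli Python driver (placeholder credentials; the threshold value is illustrative):

```python
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(address="<hana-host>", port=443,
                     user="<admin-user>", password="<password>",
                     encrypt="true")
cur = conn.cursor()

# Enable the expensive statements trace; the threshold is in microseconds
# (here: statements running longer than 1 second are traced).
cur.execute("""
    ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('expensive_statement', 'enable') = 'true',
        ('expensive_statement', 'threshold_duration') = '1000000'
    WITH RECONFIGURE
""")

# Review the most expensive statements captured so far.
cur.execute("""
    SELECT STATEMENT_STRING, DURATION_MICROSEC
    FROM SYS.M_EXPENSIVE_STATEMENTS
    ORDER BY DURATION_MICROSEC DESC
    LIMIT 10
""")
for statement, duration in cur.fetchall():
    print(duration, str(statement)[:80])

cur.close()
conn.close()
```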

Why the Other Options Are Incorrect:

A. SAP HANA Cloud Central:
This is the provisioning and lifecycle management console for SAP HANA Cloud. Its primary functions are to create, delete, start, stop, resize, and update database instances. While it provides high-level monitoring tiles (like CPU/Memory usage), it does not provide direct access to low-level diagnostic trace files like the expensive statements trace. It operates at the service level, not the deep database diagnostic level.

B. SAP BTP cockpit:
The SAP Business Technology Platform cockpit is the overall management console for your BTP account and subaccounts. It is used to manage entitlements (quotas), create service instances (including HANA Cloud), assign roles, and view global usage. Similar to HANA Cloud Central, it lacks the detailed, database-specific tools needed to open and analyze internal SQL trace files. It is a platform administration tool, not a database diagnostic tool.

Reference:
Concept: Tool Specialization. It's crucial to understand the distinct purpose of each SAP HANA tool:
Provisioning & Lifecycle: SAP HANA Cloud Central, SAP BTP Cockpit.
Administration & Monitoring: SAP HANA cockpit.
Development & Diagnostics: SAP HANA database explorer (especially in Cloud).

Which storage tier is used to manage rarely changing, persistent data?

A. Replica

B. Data lake

C. Native storage extension

D. WORM device

C.   Native storage extension

Explanation:

In SAP HANA Cloud, the Native Storage Extension (NSE) is specifically designed to manage rarely changing, persistent data. NSE allows administrators to classify data into "hot," "warm," and "cold" tiers based on access frequency. Frequently accessed data remains in-memory (hot), while less frequently accessed but still relevant data can be placed in the warm tier using NSE. This helps optimize memory usage and reduce costs while the data stays fully managed and queryable by the database engine. NSE is ideal for large tables where only a subset of data is actively queried, while older or rarely changing records can be stored more efficiently.
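A minimal sketch of moving such a table to the NSE warm tier and verifying the change, using the hdbcli Python driver; schema and table names are placeholders, and PAGE LOADABLE is the documented NSE load-unit clause:

```python
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(address="<hana-host>", port=443,
                     user="<admin-user>", password="<password>",
                     encrypt="true")
cur = conn.cursor()

# Convert a rarely changing table from the fully memory-resident (hot)
# store to NSE: page-loadable data is read into the buffer cache on
# demand instead of being kept entirely in memory.
cur.execute('ALTER TABLE "MYSCHEMA"."SALES_ARCHIVE" PAGE LOADABLE CASCADE')

# Verify the load unit now recorded for the table.
cur.execute("""
    SELECT TABLE_NAME, LOAD_UNIT
    FROM SYS.TABLES
    WHERE SCHEMA_NAME = 'MYSCHEMA' AND TABLE_NAME = 'SALES_ARCHIVE'
""")
print(cur.fetchone())

cur.close()
conn.close()
```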

By contrast, the other options do not serve this purpose:

A. Replica
→ Replication is used for high availability and disaster recovery, not for managing persistent, rarely changing data. It ensures redundancy but does not optimize storage tiers.

B. Data lake
→ The SAP HANA Cloud Data Lake is used for storing massive volumes of structured or semi-structured data at lower cost. It is suitable for big data scenarios but not specifically optimized for rarely changing persistent data within the HANA database itself.

D. WORM device
→ "Write Once, Read Many" devices are used for compliance and archiving scenarios where data must remain immutable. While they store persistent data, they are not part of SAP HANA’s tiered storage strategy for managing operational workloads.

Thus, Native Storage Extension (NSE) is the correct answer because it directly addresses the need to manage rarely changing, persistent data efficiently within SAP HANA Cloud. It balances performance and cost by extending HANA’s in-memory capabilities with disk-based storage for warm data.

References
SAP Help Portal – Native Storage Extension Overview
SAP Help Portal – SAP HANA Cloud, Data Tiering Options

