
HA200

Unit1
Lesson 1
- Daily challenges:
o Complex system landscapes
o High flexibility
o Immediate result
o Massive growth of data volume
o Skilled workforce
- Problem without SAP HANA:
o Suboptimal execution speed
o Lack of responsiveness
o User frustration
o Unsupportable business processes
o Lack of transparency
o Need for aggregation
o Outdated figures
o Guessing current situation
o Reactive business model
o Missing opportunities
o Competitive disadvantage
- What is in-memory computing:
o HW technology innovation
Multi-core architecture (8 CPUs x 10 cores per blade)
Massive parallel scaling with many blades
64-bit address space (2 TB in current servers)
Dramatic decline in price/performance
o SW Technology Innovation
Row and column store
Compression
Partitioning
No aggregate tables
Insert only on delta
- Past disk-centric, singular processing platforms are the bottleneck
o Long online transaction and batch processes
o Lack of flexibility
o Complex and costly database landscape
o Explosion in data volume caused major bottleneck in data transfer
o Low I/O transfer rate
o To overcome bottleneck, complex deployment architecture was added but
compromised flexibility and added cost
- Required new technology platform: unified, low latency, low complexity to support
real time business requirements
o Store massive amount of information compressed in main memory

o Utilize parallel processing on multiple cores


o Move data intensive calculations from applications layer into db layer
o Since all data is available in memory and processed on the fly, no need for
aggregated information and materialized views
o Simplify architecture, reduce latency, reduce complexity, reduce cost
o High scalability (multi core, multi threaded processors, 64 bit address space,
advancement in parallel data processing)
- Software component view
o Analytical and Special Interfaces: SQL, SQL Script, MDX, Other
o Application logic extensions: Text analytics, application function libraries
(Business function library, predictive analysis library)
o Parallel data flow computing model: parallel calculation engine
o Multiple in-memory stores: relational stores (row based, columnar), object graph
store
o Appliance packaging: managed appliance
- SAP HANA Deployment View
SAP HANA Appliance
o SAP HANA Database: name server (maintains landscape information), master
index server (holds data and executes all operations), statistics server (collects
performance data about HANA), xs server (XS service)
o SAP HANA Studio repository (repository for HANA Content LM)
o SAP Host Agent (enable remote start/stop)
o SAP HANA LifeCycle Manager (Manage SW updates for SAP HANA)
- SAP HANA:
Real time applications & real time analytics
In-memory database: optimizes memory access between CPU cache and main
memory
Predictive analytics: in-database predictive algorithms, and access to open source
algorithms via R integration
Text search and mining
Agility for business analysts and users: discover trends & outliers with Lumira, adapt
to the business scenario by combining, manipulating and enriching data with Explorer,
tell your story with self-service visualizations and analytics with Analysis, forecast
and predict with Predictive Analysis
Lesson 2
- SAP HANA information source:
http://help.sap.com/hana
o HANA Master guide (overview, architecture, software components, deployment
scenarios)
o SAP HANA Server Installation guide (how to install SAP HANA)
o Technical operations manual (administration tools available, key tasks of system
administrator)
o SAP HANA Database admin guide (database administration using Administration
console in SAP HANA Studio)
Lesson 3

Updates are shipped with Support Package Stacks (SPS), released twice/year,
backward compatible
SAP HANA Support Package Revisions, every 3 months
SAP HANA Maintenance Revisions, every 2 weeks
Naming convention: SPS 09 revision 71.1 = Support Package Stack 9, revision 71,
maintenance revision 1
SP revision = contains all fixes delivered through maintenance revisions plus
performance improvements, for any customer and any scenario
Maintenance revision = contains only fixes of major bugs found in HANA key
scenarios, focus on production and business-critical systems
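The naming convention above can be expressed as a small parser; this is only an illustrative sketch (the function name and the default of 0 for a missing maintenance revision are assumptions):

```python
import re

def parse_revision(version: str):
    """Parse a HANA version string like 'SPS 09 revision 71.1' into
    (support_package_stack, revision, maintenance_revision).
    The maintenance revision defaults to 0 when absent ('SPS 09 revision 71')."""
    m = re.match(r"SPS\s*(\d+)\s+revision\s+(\d+)(?:\.(\d+))?$", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    sps, rev, maint = m.groups()
    return int(sps), int(rev), int(maint or 0)
```

For example, parse_revision("SPS 09 revision 71.1") yields (9, 71, 1).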

Unit 2
Lesson 1
- Sizing of HANA appliance is mainly based on required main memory size
- Memory sizing is determined by the amount of data that is to be stored in memory
- Main memory size depends on the scenario: BW on HANA, Suite on HANA or
general sizing
- SAP HANA sizing consist of:
o Main memory sizing for static data
o Main memory sizing for object created during runtime (data load & query
execution)
o Disk sizing
o CPU sizing
- RAM size:
o XS: 2 x 10 core Westmere EX (2 socket system), 128 GB main memory, 160 GB
PCIe flash / SSD for log volume, 1 TB SAS/SSD for data volume, 3 x 1 GB n/w
or 1 x 10 GB n/w (trunk), redundant n/w
o S: 2 x 10 core Westmere EX (2 or 4 socket system), 256 GB main memory, 320
GB PCIe-flash / SSD for log volume, 1 TB SAS/SSD for data volume, 3 x 1 GB
n/w or 1 x 10 GB n/w (trunk), redundant n/w
o M: 4 x 10 core Westmere EX (4 - 8 socket system), 512 GB main memory, 640
GB PCIe-flash / SSD for log volume, 2 TB SAS/SSD for data volume, 3 x 1 GB
n/w or 1 x 10 GB n/w (trunk), redundant n/w
o L: 8 x 10 core Westmere EX (8 socket system), 1 TB main memory, 1.2 TB
PCIe-flash / SSD for log volume, 4 TB SAS/SSD for data volume, 3 x 1 GB n/w or 1 x
10 GB n/w (trunk), redundant n/w
- General sizing:
Static and dynamic RAM requirement:
Calculate uncompressed data volume to be loaded into HANA
Apply compression factor
Multiply result by 2 (because static=dynamic)
- Static RAM= amount of main memory used for holding table data (exclude associated
index), space of uncompressed data then applied compression factor to determine size
of RAM
- Dynamic RAM= amount of memory when new data is loaded or queries are executed,
same amount as static RAM

Disk sizing:
Disk size for persistence layer= 1 x RAM
Disk size for log files/ operational= 1 x RAM
Data volume size= 3 to 4 x RAM
Log volume size= 1 x RAM
Data volumes have to hold: space for one data export, space for at least one process
image, shared volume for executables
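The general sizing rules above (static RAM from compressed data, dynamic RAM = static RAM, data volume 3-4 x RAM, log volume 1 x RAM) can be sketched as a small calculator; the compression factor is scenario-dependent, so the default used here is only a placeholder:

```python
def size_hana(uncompressed_gb: float, compression_factor: float = 7.0):
    """Rough HANA sizing per the general sizing rules in the notes:
    static RAM  = uncompressed data volume / compression factor,
    total RAM   = 2 x static RAM (static + dynamic),
    data volume = 3-4 x RAM (upper bound used here),
    log volume  = 1 x RAM."""
    static_ram = uncompressed_gb / compression_factor
    total_ram = 2 * static_ram            # dynamic RAM == static RAM
    return {
        "static_ram_gb": static_ram,
        "total_ram_gb": total_ram,
        "data_volume_gb": 4 * total_ram,  # 3 to 4 x RAM
        "log_volume_gb": 1 * total_ram,
    }
```

For 1400 GB of uncompressed data at factor 7, this gives 200 GB static RAM, 400 GB total RAM, 1600 GB data volume and 400 GB log volume.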
CPU Sizing= 300 SAPS / active user
HANA queries are divided into 3 categories: Easy, Medium (uses 2x the resources of
Easy) and Heavy (uses 10x the resources of Easy)
HANA users can be divided into 3 categories: Sporadic (1 query per hour: 80% easy
queries, 20% medium queries), Normal (11 queries per hour: 50% easy queries, 50%
medium queries), Expert (33 queries per hour: 100% heavy queries)
Default distribution of user categories: 70% sporadic, 25% normal, 5% expert
Average resource requirement: 0.2 cores per user
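The query weights, user categories and default distribution above can be combined into a weighted load figure; the notes do not give the mapping from this figure to the 0.2 cores/user value, so this sketch only computes the relative load (in "easy-query equivalents per hour") plus total SAPS at 300 SAPS per active user:

```python
# Query weights relative to an Easy query (Medium = 2x, Heavy = 10x)
WEIGHTS = {"easy": 1, "medium": 2, "heavy": 10}

# (queries per hour, query mix) per user category, from the notes
CATEGORIES = {
    "sporadic": (1,  {"easy": 0.8, "medium": 0.2}),
    "normal":   (11, {"easy": 0.5, "medium": 0.5}),
    "expert":   (33, {"heavy": 1.0}),
}

# Default distribution of user categories
DISTRIBUTION = {"sporadic": 0.70, "normal": 0.25, "expert": 0.05}

def load_per_user():
    """Weighted query load per active user, in easy-query equivalents/hour."""
    total = 0.0
    for category, share in DISTRIBUTION.items():
        rate, mix = CATEGORIES[category]
        total += share * rate * sum(WEIGHTS[q] * p for q, p in mix.items())
    return total

def total_saps(active_users: int):
    """CPU sizing rule from the notes: 300 SAPS per active user."""
    return 300 * active_users
```

With the default mix, load_per_user() evaluates to about 21.5 easy-query equivalents per hour per user; note how the 5% expert users dominate the load.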
CPU Sizing in complex scenario influenced by: data volume and query complexity
SAP HANA can be sized using Quicksizer (calculates memory, CPU, disk, I/O
resource category), from: http://service.sap.com/quicksizer
Use Quicksizer for initial sizing recommendation
System Type:
o Single host system= system with one host (one operating system environment)
o Multi-host (distributed systems)= used to spread load over several hosts
SAP HANA system composed of:
o Host= operating environment in which the HANA DB runs; provides all resources
and services (CPU, memory, network & o/s) that the HANA DB requires; provides
links to the installation directory, data directory, log directory, or the storage itself
(doesn't have to be on the hosts)
o System= one or more instances with the same number; the term is used
interchangeably with HANA DB. The SID is the identifier for the HANA system
o Instance= set of SAP HANA system components installed on one host; a system
can be distributed over several hosts, but an instance distributed over several hosts
must have the same instance number
Single SAP HANA host with Single SAP HANA system: perform the installation of
the first SAP HANA system with SAP HANA unified installer
Single SAP HANA host with multiple SAP HANA systems: use SAP HANA
Lifecycle Manager (HLM) to add SAP HANA system to a host where SAP HANA
system already installed with different SID & different instance #
Operating system for SAP HANA:
SUSE Linux Enterprise Servers (SLES) 11 SP2 is necessary for using hdblcm
Hardware requirements:
For software: 20 GB RAM (15 GB for basic software & 5 GB for programs)
Additional memory required for data & log volume based on requirements
During update and installation of SAP HANA DB, hardware check is performed
(script automatically called by installer)
Hardware requirement for network connection: 10GBit/s between HANA Landscape
& source system

Important directories and space required:
/ (root): 10 GB
/hana/shared (mount directory to share files between all hosts): 1 x RAM
/hana/shared/<SID>/hdbclient
/hana/shared/<SID>/hdbstudio
/usr/sap (local SAP system instance directory): 50 GB
/hana/data/<SID> (default path to data directory): 4 x RAM
/hana/log/<SID> (default path to log directory): 1 x RAM
For patching: 3 GB in working directory
HANA DB can only use 90% of physical memory
File system structure:
<insert picture here>

Unit 3
Lesson 1
- SAP HANA SPS7:
For update and configuration: SAP HANA Lifecycle Manager (HLM)
For installation and update: SAP HANA lifecycle management tools hdblcm / hdblcmgui
- Different type of installation:
o SAP HANA Appliance delivery: fast implementation, support fully provided by
SAP
o SAP HANA Tailored data center integration (TDI): more flexibility, save budget
& existing investment
- The person who installs the system has to be certified (SAP Certified Technology
Specialist E_HANAINST142), with SAP Certified Technology Associate
(C_HANATEC142) as the prerequisite; refer to oss 1905389
- Most important tools:
hdblcm installation tool
hdblcmgui installation tool with user interface
HLM (SAP HANA Lifecycle Manager) different feature
- Installation with interactive mode can be done using hdblcm & hdblcmgui
Installation with batch mode can be done using hdblcm
Options:
General help: -h or --help
Installation help: --action=install -h
Update help: --action=update -h
Uninstallation help (called from <Installation path>): ./hdblcm --uninstall -h
- For trouble-shooting refer to:
Log files /var/tmp/hdblcm or /var/tmp/hdblcmgui or /var/tmp/hdbinst or
/var/tmp/hdbupd
Clean up partially installed SAP HANA System using hdbuninst
Enabling trace: set the environment variable
HDB_INSTALLER_TRACE_FILE to <trace file name>
- HLM (HANA Lifecycle Manager) programs:
hdbinst: command-line tool for installation
hdbsetup: installation tool with GUI for installation & update
hdbuninstall: command-line tool for uninstall and removing a host
hdbaddhost: command-line tool for adding a host to a system
hdbupd: command-line tool for updating software
hdbrename: command-line tool for renaming a system
hdbreg: command-line tool for registering an SAP HANA system
hdbremove-host: command-line tool for removing a host
Easier way is to use the HANA lifecycle management tools: hdblcm or hdblcmgui
Installation procedure:
Change to installation medium
(/hana/shared/downloads/DATA_UNITS/HDB_LCM_LINUX_X86_64)
Start installer:
./hdblcmgui or ./hdblcm
If the installation is run in batch mode from the installation medium, the minimum
required parameters are SID and password (specified in XML syntax and streamed in,
or specified in a configuration file). If you only provide SID and password, the other
parameters take default values. If a mandatory parameter without a default is not
specified, the installation fails with an error
For a multi-host system, check mandatory values on each host before installation
Default parameters (values shown where given):
action
autostart
certificates_hostmap
client_path: /hana/shared/<SID>/hdbclient
components
copy_repository: /hana/shared/<SID>/hdbstudio_update
datapath: /hana/data/<SID>
groupid
home: /usr/sap/<SID>/home
hostname
install_hostagent
logpath: /hana/log/<SID>
number
root_user
sapmnt: /hana/shared
shell: /bin/sh
studio_path: /hana/shared/<SID>/hdbstudio
studio_repository
userid
timezone
vm
These users will be created automatically during installation:
<sid>adm: o/s user required for administrative tasks such as start & stop.
Group ID and user ID must be unique & identical on each host of a
multi-host system
sapadm: SAP host agent administrator
SYSTEM: initially the SYSTEM user has all system permissions; these initial
permissions can never be revoked
SAP HANA System can be installed interactively with command line using hdblcm
or with graphical installation tool with hdblcmgui
Advanced installation: automated installation and the configuration of multi-host
system using hdblcm
3 different method of using hdblcm:
o command line option
./hdblcm -s <SID> -n <instance#> -G <usergroupid>
o configuration file
./hdblcm --configfile=<configfilepath>/<configfilename>.cfg
o configuration file in batch mode
./hdblcm --configfile=<configfilepath>/<configfilename>.cfg -b
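A configuration file for the second and third variants might look like the sketch below; the key names come from the default-parameter list above, but the section header, the sid key and all values are assumptions for illustration only (generate and check the real template with hdblcm itself):

```ini
; Hypothetical hdblcm configuration file (hana.cfg); values are examples only
[Server]
sid=HDB
number=00
sapmnt=/hana/shared
datapath=/hana/data/HDB
logpath=/hana/log/HDB
shell=/bin/sh
autostart=0
```

It would then be passed via --configfile, with -b added for batch mode.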
Certain parameters are only available in certain installation variants:
All parameters are available when using: hdblcm only, hdblcm + config file, hdblcm +
batch, hdblcm + config file + batch, hdblcmgui + config file, hdblcmgui + command
line
Reduced parameter choice when using: hdblcm in interactive mode, hdblcmgui

Lesson 2
- Review: host grouping and storage option before installing multi-host system
- On multi-host system, additional hosts must be defined as worker machines or
standby machines
- Host types:
Worker machines process data (default)
Standby machines do not handle any processing, just wait to take over processes in
case of worker machine failure
- Server role:
Master: actual master index server is assigned on the same host as name server with
actual role MASTER. Actual index server role is MASTER. Master index server
provides metadata for other active index servers
Slave: actual index server role of remaining hosts is SLAVE (except standby host).
These are active index servers and are assigned to one volume. If an active index
server fails, the active master name server assigns its volume to one of the standby
hosts
All servers should have the same size
- Typical configuration for a distributed system:
Initial host
Name server configured role: Master 1
Name server actual role: Master
Index server configured role: Worker
Index server actual role: Master
1st host added
Name server configured role: Master 2
Name server actual role: Slave
Index server configured role: Worker

Index server actual role: Slave


2nd host added
Name server configured role: Slave
Name server actual role: Slave
Index server configured role: Worker
Index server actual role: Slave
3rd host added
Name server configured role: Slave
Name server actual role: Slave
Index server configured role: Standby
Index server actual role: Standby
Maximum number of master name server = 3
Host grouping does not affect the load distribution among worker hosts; load is
distributed among all workers. If there are multiple standby hosts, host grouping
decides the allocation of standby resources if a worker machine fails.
If no host group is specified, all hosts belong to one host group called default.
There are 2 types of groups: sapsys groups and host groups
o SAP system group (sapsys group) group that defines all hosts in a system. All
hosts in multi-host system must have the same sapsys group ID
o Host group is a group of hosts that share the same standby resources only. If
multi-host system has one standby host, all hosts must be in the same host group
(default), so all hosts have access to standby host
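The host-group rule above (a standby may only take over for workers in its own host group) can be sketched as follows; the function and field names are illustrative, not HANA APIs:

```python
def pick_standby(failed_host: str, hosts: dict):
    """Return a standby host from the failed worker's host group, or None.

    `hosts` maps host name -> {"role": "worker" | "standby", "group": <host group>}.
    If no host group was specified at installation, every host is in "default",
    so any standby can take over for any worker.
    """
    group = hosts[failed_host]["group"]
    for name, info in hosts.items():
        if info["role"] == "standby" and info["group"] == group:
            return name
    return None
```

With one standby host in the default group, every worker resolves to that standby, matching the single-standby case described above.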
In multi-host system, Database installation path: /hana/shared, data path:
/hana/data/<SID>, log path: /hana/log/<SID> (all shared)
Local directory: for hana1 host: /usr/sap/<SID>, hana2 host: /usr/sap/<SID>, hana3
host: /usr/sap/<SID>
Prerequisite for multi-host system:
/hana/shared, /hana/data/<sid>, /hana/log/<sid> have to be mounted on all hosts,
including the primary host
Perform the following tasks after installation:
Backup, change passwords (if vendor installed as appliance), finalize customization
For testing and debugging, it is possible to copy scale out landscape to single node
using SAP HANA Studio
For single host SAP HANA system, it is possible to use plain attached storage devices
(SCSI hard drive, SSDs or SANs)
In a multi-host system with failover capabilities, the storage must ensure that:
The standby host has file access
The failed worker host no longer has access to write to files (called fencing)
Different storage configuration:
Shared storage devices (NFS or IBM's GPFS)
Separate storage devices with failover reassignment
Externally attached storage subsystem devices are capable of providing dynamic
mount points for hosts

Unit 4
Lesson 1

Post installation steps:


o Establish SOLMAN connectivity
o Configure Remote Service Connection (via SAP router)
SAP Support can access customer database via local SAP HANA studio installation
Involved components are:
o Host Agent (communicate with HANA Database)
o Diagnostics Agent (communicate with Host Agent)
o SOLMAN (the Diagnostics Agent has to be assigned to SOLMAN); consists of LMDB
(Landscape Management Database), DBACOCKPIT, Performance Warehouse,
Alerting Framework
Remote connection to SOLMAN
Standard SAPGUI and HTTP connection to SOLMAN has to be established (oss note
962516)
Setup RCA (Root Cause Analysis), System monitoring and Early watch alert (oss
note 1747682)
SAP HANA database service connections (oss 1592925, 1635304)
Setup SSH or Telnet Remote connection (oss 1275351, 1327257)
Setup Windows Terminal Server connection (oss 605795)
Two kinds of license keys:
o Temporary license keys (automatically installed, valid for 90 days)
o Permanent license keys (if permanent license keys expired, temporary license
keys automatically installed valid for 28 days)
To install license key, use HANA Studio.
Right click on system -> Properties -> License -> Install license key
Customers can assign an amount of memory to a particular HANA instance; this
information is provided when requesting the license, and the number is put into the
generated license key file. Once the license key is installed, the number is set in the
HANA instance and shows up in HANA Studio
Only a system with a valid license can be backed up. The license will be restored with
recovery. If the backup is too old and the license key from the backup has expired, the
database will be locked after recovery and a new valid license needs to be installed to
unlock the database

Lesson 2
- Before updating SAP HANA components, make sure no read or write processes are
running on the SAP HANA DB. Perform the update process in offline mode. After the
update, you have to start SAP HANA and its components again
- HLM functions:
o Rename SAP HANA System: change SID, instance number, hostname; change
system administrator password, change database user password
o Register in System Landscape Directory
o Add Solution Manager Diagnostics Agent (SMD)
o Update SAP HANA System: Update SAP HANA Lifecycle Manager (time
required= time for shutdown + time for restart SAP HANA + 20 minutes), Apply
Support Package Stack, Apply single support package
o Make a decision on the source of the archives for the update: the update archives
can be downloaded automatically from SAP Service
Marketplace (needs host name, valid S-user & password, proxy), or provided as
manually downloaded content (needs location of the downloaded archive)
o Add/remove additional host (the system must be already started)
o Add/remove SAP HANA System (specified host name is FQDN)
o Add Application Functional Library (AFL)
o Add LiveCache application (LCApps)
o Deploy SAP HANA Application content (i.e. HANA Live, HANA RDL, SAP UI,
HAVANA)
o Change SAP HANA License Type
Available working modes for SAP HANA Lifecycle Manager (provide easy and
flexible customization)
o Using SAP HANA Studio
o Using command line interface (CLI) (applicable for heterogeneous SAP product
landscape)
o Using standalone HTML5 enabled web browser
Uninstalling SAP HANA components using the uninstall.sh script; it doesn't uninstall
the SAP Host Agent and the SMD Agent (these need to be removed first, before
running the uninstall script)
./uninstall.sh /tmp/hanainstdir HDB

Unit 5
Lesson 1
- Use Database Migration Option (DMO) of SUM (Software Update manager)
- Benefit: migration steps are simplified, system update and database migration are
combined in one tool, business downtime is reduce, original database is kept (can be
reactivated as fall back), lower pre requisite for SAP and DB start releases, inplace
migration keeps application server and SID stable, well known tool SUM is used with
improved UI, Unicode migration is included
- SUM is not new, it is used for Release upgrade, EHP implementation, apply SP stack
for SAP NetWeaver
- Classical way of migration: upgrade source database, upgrade application software,
migrate database, Unicode migration
- Steps of data migration:
o Upgrade prepare
o Execute Upgrade
o Switch database connection (from traditional DB to HANA DB)
o Migrate application data (include data conversion)
o Finalize upgrade
o Start SAP HANA based system
- SUM:
o Creates the usual shadow instance and shadow repository on database level (so a
shadow system temporarily exists)
o Copies the shadow repository to the SAP HANA DB as target repository
o Application data is migrated to the HANA DB
o Target instance kernel is set up with the basic software of the new SAP release

o Direct access to log files to check status and error


Unit 6
Lesson 1
- Although SAP HANA is an in-memory database mgmt system, data is also persisted
in data and log volumes
- Core processes on single node instance:
o Several processes running in Linux operating system
o Daemon (starts all other processes, keeps other processes running)
o Indexserver (main database process, data loads, queries, calculations)
o Nameserver (db landscape, data distribution)
o Statisticsserver (monitoring service, proactive alerting)
o Preprocessor (to feed unstructured data into HANA)
o XSengine (web service component, sometimes referred to as application server)
- Shared-nothing architecture: each process (indexserver, nameserver, etc.) persists
data in its corresponding data and log volumes independently
- XSengine service can be deactivated and removed if not needed (oss 1867324)
- Starting from SPS7, a new statistics service implementation design makes the
statisticsserver component obsolete (oss 1917938)
- Architecture of SAP HANA Indexserver
o HANA Core processes
o External interfaces (allows client to communicate with HANA, queries, data
loads, Admin)
SQL Interface
MDX Interface
Web interface
o Processing engines (operate on data, execute queries)
(page 149)
o Relational engines (store data in memory)
Row store
Column store
o Storage engine (handle data pages, handle transfer RAM Disk)
Page management (asynchronous writing)
Logger (synchronous writing)
o Disk Storage (non volatile data storage)
Data volume (asynchronous writing complete main memory content at
specific point in time)
Log volume (synchronous writing changes written to log area before
successful commit of transaction)

SAP HANA Database (architecture figure, summarized):
o External interfaces: SQL interface, MDX interface, web interface
o Session management and request processing: SQL optimizer, transaction manager,
metadata manager, authorization manager
o Engines: calculation engine, OLAP engine, join engine, row store engine
o Relational engine: column store, row store
o Storage engine: page management (asynchronous writing to data volumes),
logger (synchronous writing to log volumes)
o Disk storage: data volumes, log volumes
Persistence
o Data:
SQL data and undo log information
Additional HANA information (modeling data)
Kept in-memory for maximum performance
Write process is asynchronous
o Log
Information about data changes (redo log) such as: insert, delete, and update
are saved to disk immediately in the logs (synchronous)
Directly saved to persistent storage when transaction is committed
Cyclical overwrite only after backup
o Savepoint
Changed data and undo log is written from memory to persistent storage
Automatic
At least every 5 minutes (can be changed)
Disk access is not a performance bottleneck, since data is written to the data volume
asynchronously and the user doesn't have to wait for this process. When data in main
memory is read, there is no need to access persistent storage. When applying changes
to data, a transaction cannot be successfully committed before the changes are
persisted to the log area. To optimize performance, fast storage (SSD) is used for the
log area
Data volumes are located in file systems:

o One data volume per instance


o Each data volume contains one file, data is organized into pages
o Growing until disk or LUN is full
o Logical volume manager (LVM) needed on OS level to extend file systems
o Growing with number of data volumes
o Different page size (page size class: 4k, 16k, 16M) arranged in superblock of 64M
o Typical size for data volumes (4 * RAM)
HANA SP06:
o File size limitation is 2 TB (located in ext3 file system); when 2 TB is reached,
additional files are created automatically
o Allows usage of ext3 file system with larger memory implementation per host
o No implication to backup/recovery
o Monitoring: select * from PUBLIC.M_VOLUME_FILES
Redo log entries are written synchronously; changed data in data volumes is
periodically (asynchronously) copied to disk during the savepoint operation. During a
savepoint, SAP HANA flushes all changed data from memory to the data volumes.
Savepoint frequency: can be configured; savepoints are also triggered by data backup,
database shutdown and restart, or manually (using the command ALTER SYSTEM
SAVEPOINT)
Shadow paging concept = write operations write to new physical pages and previous
savepoint version is still kept in shadow pages.
Savepoint phases:
o Write changed pages in parallel
Acquire lock to prevent modification
Determine log position
Remember open transaction
Copy modified pages and trigger write
Increase savepoint version
Release lock
o Wait for IO-requests to finish
o Write anchor page
In the event of database crash, data from last completed savepoint can be read from
data volumes and redo log entries written to log volumes thus data can be restored to
the last committed state.
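The interplay of synchronous redo logging, periodic savepoints and restart recovery described above can be modeled as a toy sketch (the class and its simplifications, such as truncating the log at each savepoint and ignoring undo handling, are assumptions for illustration):

```python
class MiniPersistence:
    """Toy model of HANA-style persistence: committed changes go to the redo
    log synchronously, savepoints flush the in-memory state asynchronously,
    and recovery = last savepoint + replay of newer redo entries."""

    def __init__(self):
        self.memory = {}      # in-memory state (the "database")
        self.redo_log = []    # log volume (written synchronously)
        self.savepoint = {}   # last state flushed to the data volume

    def commit(self, key, value):
        # The redo entry is written to the log area BEFORE the commit succeeds
        self.redo_log.append((key, value))
        self.memory[key] = value

    def write_savepoint(self):
        # Asynchronous flush of changed data from memory to the data volume;
        # truncating the log here is a simplification (in HANA the log area is
        # only overwritten cyclically after a backup)
        self.savepoint = dict(self.memory)
        self.redo_log.clear()

    def recover(self):
        # After a crash: start from the last savepoint, roll the redo log forward
        state = dict(self.savepoint)
        for key, value in self.redo_log:
            state[key] = value
        return state
```

Losing the in-memory state after a savepoint plus one more commit still recovers both changes, which is the point of the synchronous log.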
After a system restart, not all tables are loaded into main memory immediately (to
allow a short restart time). Only the row store is always loaded entirely; column store
tables are loaded on request. Column store tables (and their attributes) that were
loaded before the restart are reloaded. This may not be necessary in a non-productive
system; it can be configured in: indexserver.ini, section: sql, parameter:
reload_tables=false
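Based on the section and parameter named above, the corresponding indexserver.ini fragment would look like this sketch:

```ini
; indexserver.ini: disable reloading of previously loaded column tables at restart
[sql]
reload_tables = false
```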
Startup process:
Row store: loaded completely into memory during startup & has to stay there;
secondary indexes are created during load
Columnar store: loaded lazily on demand, ensuring early system availability
Important factors for startup: remaining log to be rolled forward; I/O performance of
data and log disks; separate log, data & backup disk areas (logically and physically)
It is possible to mark individual columns for preload:
Set the preload flag (possible values: FULL, PARTIALLY, NO)
Don't set every table to be preloaded; startup may be very slow
Total amount of memory used is called used memory. Used memory = program code
(called the text) & program stack, data table & system table, memory for temporary
computation
Memory layout (figure, summarized): the HANA memory pool is the memory
allocated by the database; it consists of free (allocated but unused) pool memory plus
HANA used memory; used memory = code and stack + table data (column tables,
row tables, system tables)

- Data memory is called heap

Physical memory = free + resident (consist of SAP HANA, OS and other programs)
Resident memory is physical memory actually in operational use by a process
When virtual memory needs to be used, it is loaded or mapped to real, physical
memory and becomes resident
SAP HANA reserves a pool of memory before actual use, so Linux memory indicators
(top and meminfo) don't accurately reflect HANA's used memory size (use HANA
monitoring features)
When memory is required, HANA obtains it from the existing memory pool. When the
pool can't satisfy the request, the HANA memory manager requests and reserves more
from the o/s, and virtual memory grows. Once the need for memory is gone, the HANA
memory manager returns the memory to the pool without informing Linux. So used
memory can be smaller than resident memory; this is normal.
The HANA database may unload tables or columns from memory if a query requires
more memory than is available. This unloading is based on a least recently used
algorithm.
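The least-recently-used unloading behavior can be sketched with an ordered dict; the class, its memory budget and unit-free sizes are illustrative assumptions, not HANA internals:

```python
from collections import OrderedDict

class ColumnCache:
    """Toy LRU model of column unloading: when a load would exceed the
    memory budget, the least recently used entries are unloaded first."""

    def __init__(self, budget):
        self.budget = budget
        self.loaded = OrderedDict()  # name -> size, least recently used first

    def touch(self, name):
        # A read marks the column as most recently used
        self.loaded.move_to_end(name)

    def load(self, name, size):
        # Unload LRU entries until the new column fits within the budget
        while self.loaded and sum(self.loaded.values()) + size > self.budget:
            self.loaded.popitem(last=False)
        self.loaded[name] = size
```

A recently touched column survives while an untouched one is evicted first, mirroring the rule in the notes.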
Memory management in the column store: although the column store is optimized for
read operations, it also provides good performance for write operations through the
use of 2 data structures: main storage and delta storage.
Main storage: compression by creating a dictionary and applying further compression;
speeds up data load into the CPU cache and equality checks; compression is computed
during the delta merge operation; read optimized
Delta storage: write optimized; an update is performed by inserting a new entry into
the delta storage; exists only in main memory; only delta log entries are written to the
persistence layer when delta entries are inserted
Read operations: always read from both main & delta storage and merge the results.
The IMCE (in-memory computing engine) uses multi-version concurrency control to
ensure consistent read operations.

Delta merge operation: moves the changes in delta storage into main storage; happens
asynchronously; executed on table level; uses a double buffer concept (advantage: the
table only needs to be locked for a short time); minimum memory requirement =
current size of main storage + future size of main storage + current size of delta
storage + additional memory
Although a table may be only partially loaded, to perform a delta merge the whole
table is loaded into memory
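The main/delta split, with a dictionary-compressed read-optimized main storage and an append-only delta, can be sketched as a toy column (the class and its single-buffer merge are simplifying assumptions; the real engine uses the double buffer described above):

```python
class ColumnStore:
    """Toy column with a dictionary-compressed main storage and an
    append-only delta storage. Reads merge both structures; a delta
    merge rebuilds the main storage and empties the delta."""

    def __init__(self):
        self.dictionary = []  # sorted distinct values
        self.main = []        # value ids pointing into the dictionary
        self.delta = []       # plain appended values (write optimized)

    def insert(self, value):
        # Inserts and updates only touch the delta storage
        self.delta.append(value)

    def read_all(self):
        # Every read merges main storage and delta storage
        return [self.dictionary[i] for i in self.main] + self.delta

    def delta_merge(self):
        # Recompute the dictionary and value ids, then clear the delta
        values = self.read_all()
        self.dictionary = sorted(set(values))
        self.main = [self.dictionary.index(v) for v in values]
        self.delta = []
```

After a merge, repeated values share one dictionary entry, which is where the compression comes from; new inserts again land only in the delta.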
Several ways to trigger delta merge:
o Auto merge (standard method): mergedog (process) periodically checks column
store tables that are loaded locally and determines if merge is necessary based on
configurable criteria (size of delta storage, available memory, time since last
merge, etc)
o Smart merge: application request the system to check if delta merge makes sense
now (issues smart merge hint). For example during large load, application will
disable delta merge temporarily and do a merge once load has completed.
o Hard and force merge: hard merge is manually triggered using SQL statement and
executed immediately once sufficient resources are available, force merge:
regardless of resources, triggered by passing optional parameter.
o Critical merge: the database triggers a critical merge to keep the system stable (for
example when auto merge is disabled, no smart merge hint is sent, and the delta
storage has grown too large, past the threshold)
Paged attribute access: SAP HANA can read attribute structures from disk based on
pages (reduces overhead; data doesn't have to be stored in memory), reduces the
memory footprint, only reads the needed page (not the whole column)
How to activate this feature: alter table <tablename> alter (<column> varchar(80)
column loadable, <column> varchar(500) page loadable)
Things to consider:
Columns are stored in 64 KB pages instead of a bigger page structure (up to 16 MB
pages); because of the smaller chunks, a lower compression rate; can be used for all
non-primary-key columns; beneficial if attributes are often read/changed on a
single-record basis; no benefit if the column is often used for analytical scans; more
suited for Suite on HANA
Hybrid LOB: LOBs can be stored in virtual files inside HANA
Before SPS6, HANA stored LOBs inside the row & column store. Disadvantages:
consumes memory, can't be used for analytics, can't be unloaded to disk
Since SPS6, HANA stores LOBs in virtual files inside HANA. Advantages: each LOB
has its own virtual file and is anchored to its data record; only LOBs that are needed
are loaded; the list of virtual files for LOBs is stored in M_TABLE_LOB_FILES;
available for column & row store and for all types BLOB, CLOB and NCLOB
Advantages of hybrid LOB:
Reduced main memory consumption; in case of memory shortage LOBs are unloaded;
if the size exceeds the threshold, the LOB is put on disk; uses a threshold to keep only
small LOBs in memory; performance is kept stable compared to pure in-memory
LOBs; bigger LOBs are immediately transferred to disk and a reference is kept in the
table structure; to optimize LOBs on disk, a cache with short-term disposition is used
How to activate: use an alter table statement or the changeLobType python script
HANA parameters: default_lob_storage_type is applied for new columns;
lob_memory_threshold (a LOB whose size is less than or equal to
lob_memory_threshold is stored in memory; with a value of 0 all LOB data is stored on
disk; if the parameter is unset, all LOB data is stored in memory)
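As a sketch of the ALTER TABLE route mentioned above (table, column and threshold value are hypothetical, not from the course material):

```sql
-- Convert an in-memory LOB column to a hybrid LOB with a 1000-byte
-- threshold: LOBs up to 1000 bytes stay in memory, larger ones go to disk
ALTER TABLE "MYSCHEMA"."DOCUMENTS"
  ALTER ("CONTENT" BLOB MEMORY THRESHOLD 1000);
```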
Smart Data Access: enables remote data to be accessed as if it were in local tables in
HANA
Advantages: operational & cost benefits; ability to access, synthesize & integrate data
from multiple systems in real time; no special syntax to access heterogeneous data
sources; processing is pushed down to the target data source using smart query
processing; remote data types are mapped to HANA data types using automatic data
type translation
Supported remote data sources: Teradata, SAP Sybase IQ, SAP Sybase Adaptive
Server Enterprise, Intel distribution for Apache Hadoop
New/improved SDA:
Support for new remote sources (Oracle, MSSQL, Hadoop), extended DML to
insert/update/delete on virtual tables, calc view support for virtual tables, a generic
adapter framework (ODBC data sources), remote caching for Hadoop sources,
support for CLOB and BLOB data types, hdbsdautil to debug the remote source
configuration
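The SDA workflow can be sketched as follows; all names in angle brackets are placeholders, and the exact CONFIGURATION string depends on the adapter and driver in use:

```sql
-- Register a remote source (here via the Sybase IQ ODBC adapter)
CREATE REMOTE SOURCE <source_name> ADAPTER "iqodbc"
  CONFIGURATION 'Driver=<driver_library>;ServerNode=<host>:<port>'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=<user>;password=<password>';

-- Expose a remote table as a local proxy (virtual) table
CREATE VIRTUAL TABLE <schema>.<virtual_table>
  AT "<source_name>"."<database>"."<remote_schema>"."<remote_table>";

-- Query it like any local table; filters are pushed to the remote source
SELECT * FROM <schema>.<virtual_table> WHERE <column> = <value>;
```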
Lesson 2:
- Concurrency control method: solves the problem where one user is reading the
database while another user is writing to it.
- Multi version concurrency control (MVCC): each user connected to the db sees a
snapshot of the database at that particular time; any changes to the database are not
seen by other users until the changes are committed. MVCC uses insert-only data
records; this enables long-running transactions and a high level of parallelization.
- When SAP HANA updates an item of data, it does not overwrite the data but marks it
as obsolete and adds a newer version. Multiple versions are therefore stored, but only
one is the latest. This allows the database to avoid the overhead of filling holes in
memory, but requires the system to periodically sweep through and delete the old,
obsolete data objects.
- MVCC is used to implement different transaction isolation levels: transaction level
snapshot isolation and statement level snapshot isolation.
- Transaction level snapshot isolation: all statements of a transaction see the same
snapshot of the database. The snapshot contains all changes committed at the time the
transaction started, plus changes made by the transaction itself (same as SQL isolation
level repeatable read)
- Statement level snapshot isolation: different statements in a transaction may see
different snapshots of the database. Each statement sees the changes that were
committed at the time the statement started (same as SQL isolation level read
committed)
- The transaction isolation level can be changed using the command: set transaction
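The SET TRANSACTION command above takes the isolation level as an argument; a minimal sketch:

```sql
-- Statement level snapshot isolation (the default)
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Transaction level snapshot isolation
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```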
Lesson 3:
- SAP HANA Platform edition is composed of:
o SAP HANA Database (installed on SUSE Linux)
o SAP HANA Client & HANA client for Excel (for connecting to the HANA DB)
o SAP HANA Studio (application for SAP HANA Appliance software)
o SAP HANA Lifecycle Manager (tool for customizing an SAP HANA system)
o Host Agent (tool for monitoring & control of SAP instances, non-SAP instances,
o/s and databases)
o SAP HANA AFL/LCApps (application framework supporting function libraries:
AFL pre-delivered business, predictive and other types of algorithms;
BFL business function library, pre-built, parameter-driven algorithms used in
finance;
PAL predictive analysis library, predictive analysis and data mining)
o SAP HANA RDL content package (river design language like SQL, declarative
data definition based on SAP HANA Core Data Services)
o SAP HANA INA Toolkit for HTML (built-in enablement of SAP HANA to
retrieve and visualize data in an end-user friendly way)
o SAP HANA EPM Content Package (Enterprise Performance Management to
design, deliver and operate Planning and Consolidation Applications)
o SAP HANA Smart Data Access (transparent access to remote database table via
HANA proxy tables)
o SAP HANA Studio SAPUI5 Plug-in (JavaScript-based HTML5 browser
rendering library for Business Applications)
o SAP HANA HW Config Check (tool to verify SAP HANA software requirements
on proposed hardware capabilities)
o SAP HANA Information Composer (web-based environment which allows
business users to upload data to the SAP HANA DB and to manipulate the data
by creating information views)
Components of SAP HANA Platform edition are divided into:
o Mandatory server components: SAP HANA Database, SAP HANA Client, SAP
HANA Studio, SAP HANA Lifecycle Manager, Host Agent, SAP HANA
AFL/LCApps
o Optional server components: SAP HANA RDL, SAP HANA INA Toolkit, SAP
HANA EPM, SAP HANA SDA, SAP HANA Studio SAPUI5 Plug-in, SAP
HANA HW config check, SAP HANA information composer
o Front-end tools
SAP HANA Platform DUs:
Default content:
Delivery units are an integral part of any SAP HANA Database installation; they are
required for SAP HANA to operate as desired. They are maintained automatically
with each revision as part of the lifecycle management of the database component
o SAPUI5 Client Runtime
o HANA XS Administration
o HANA XS LM
o HANA TA Config
o HANA XS Base
o HANA UI Integration Svc
o HANA UI Integration Cont.
o SAP HANA Admin
o SAP HANA IDE | IDE core
Optional content:
Delivered together with SAP HANA Database but not automatically made
available/active
o INA Service
o SAP HANA DXC
o HANA EPM Svc
Add-ons:
Part of the SAP HANA product, available for download from SAP Service Marketplace;
can be installed through the HANA Lifecycle Manager from within SAP HANA Studio
o HANA INA Toolkit for HTML
o HANA RDL Cont
Lesson 4:
- SAP HANA: in-memory database management system, also comprises many
additional features: spatial processing, search and text mining, integrated libraries.
- SAP HANA scenarios:
o Side-by-side: SAP HANA is added as an additional component to the existing
landscape (example: data mart)
Agile data marts:
Create more flexibility compared to Enterprise Data Warehouse
Data is loaded using ETL (Data services)
Data has been transformed
Based on analytic data models
Operational data marts:
Views calculate results for reports in real time on actual operational data
No transformation during load
Real time replication of time critical data (SLT)
SAP HANA Accelerators
Turnkey solutions to accelerate standard ABAP reports and business processes in
ERP
Flexible reporting using BO BI clients
Using SLT and DBSL (Database Shared Library)
o Integrated: SAP HANA is used as the primary database
BW on HANA, SAP Business Suite powered by SAP HANA
o SAP HANA as application platform
Any application can connect to HANA using standard interfaces: JDBC, ODBC
Native SAP HANA applications can be implemented in SAP HANA without an
additional application server, on the basis of SAP HANA XS (extended application
services)
o Combination of multiple SAP HANA scenarios
Lesson 5:
- SAP HANA Deployment options:
o On-premise:
Pre-configured appliance: pre-configured hardware, pre-installed software,
solution validation done by SAP
HANA TDI (tailored datacenter integration): installation by the customer, more
flexibility, saves IT budget & existing investments
Virtualized with VMware vSphere
o On-demand/cloud:
SAP HANA One: fully featured SAP HANA hosted in public cloud, hourly
subscription basis
SAP HANA Developer edition
SAP HANA infrastructure subscription: monthly subscription basis, quickly
deploy existing SAP HANA license
SAP HANA platform as a service: platform as a service in cloud env, monthly
SAP HANA managed service: enterprise class SAP HANA in cloud, monthly
o Hybrid: migrate some solutions to the cloud
Running multiple scenarios on one system or database:
o Virtualization:
1 database schema per database
Separate HANA database per SAP System
Separate virtual machine and O/S
Shared hardware and storage
Restriction: non-production system, single node up to 1 TB
o Multiple components on one system (MCOS):
1 database schema per database
Separate HANA database per SAP System
Shared hardware, storage and O/S
Restriction: non-production system
o Multiple components on one database (MCOD)
Multiple database schemas per database
Shared SAP HANA database
Dedicated application server per application
Shared hardware, storage and O/S
Restriction: non-production system, single node/multi node
o Technical co-deployment
1 SAP HANA Database, 1 schema
1 ABAP Application server/ SID
Available for SRM and SCM as ERP Add-on
Usage: prod & non-prod, single node/multi node, can be combined with
virtualization
Unit 7
Lesson 1:
- SAP HANA Administration tool:
o HANA Studio
Administration: start/stop HANA database, backup & recovery, user & role mgmt,
configuration changes, SAP HANA modeler, lifecycle management
Monitoring: integration of all SAP HANA databases, detailed views
Alerting: alerts are generated automatically, adjust alert threshold, config of email
notification
Tracing: change trace level, display trace file, view merged trace
o SOLMAN
Basic administration and holistic monitoring within existing SAP landscapes
through DBA Cockpit, solution manager diagnostics, System landscape directory
(SLD), Maintenance optimizer (MOPZ), early problem analysis and transport
integration
o SAP DBA Cockpit
Administration: schedule backup, configuration changes
Monitoring: integration of all SAP HANA databases (via SLD & manually),
detailed views, integration with Solution Manager Performance Warehouse
Alerting: alerts are generated automatically, integration into SOLMAN E2E
Tracing: change trace level, display trace file
Lesson 2:
- SAP HANA Studio:
Consists of several perspectives/applications:
Administration console, information modeler, lifecycle management
Is used by developers to create content (modeled views, stored procedures)
Development artifacts are stored in a repository
- There are 2 ways to add a system in HANA Studio:
o Add system (you have to provide: hostname, instance #, description, database user,
password)
o Add system archive link: one user can manage a list of all systems in a centrally
accessible archive (File -> Export -> SAP HANA -> Landscape) and others can
link to this archive
Advantage: more efficient, avoids users having to obtain connection details and
add them individually; users have up-to-date system access
- In the system navigator screen (left-hand side) you can see:
o Backup: backup configuration (destination, file size), backup catalog, snapshot
o Catalog: schemas with tables (column and row store), functions, procedures
o Content: packages (development and modeling artifacts) & views
o Provisioning: smart data access, remote data source, proxy tables
o Security: users and roles, security settings
- From the context menu of the Systems view, you can:
add a system, stop/start/restart the system, open system properties, backup/recover
the system, take storage snapshots, import/export catalog objects, open the SQL
console, find a table, open a table definition
- Administration Console perspective: contains db admin and monitoring features
There are 3 screen areas:
Systems view (left-hand), editor area (top right-hand), other views (bottom right-hand)
There are these tabs:
Overview, landscape, alerts, performance, volumes, configuration, system
information, diagnosis files, trace configuration
Overview tab: the most important information about the system at a glance (you can
navigate to more detailed information):
System status, system information, current alerts, memory usage, CPU usage, disk
usage
SAP HANA Studio normally collects information about the system using SQL, but
when the system is not yet started/down and no SQL connection is available, HANA
Studio collects information using the SAP start service (sapstartsrv). This information
can be viewed in diagnosis mode as operating system user <sid>adm
Lesson 3:
- Start DBA Cockpit using tcode: DBACOCKPIT
- DBACOCKPIT layout areas: Application Toolbar, System Landscape Toolbar,
Central System Data, Navigation Frame, Action Area, Action Message Window,
Framework Message Window
Application Toolbar: basic functions to display/hide the system landscape toolbar
and navigation frame
System Landscape Toolbar: central function to manage system landscape: manage
database connection and choose system to monitor
Navigation Frame: quick access to analysis information, e.g. performance
monitoring, space management, job scheduling
The navigation frame contains: current status folder (overview and alerts),
configuration folder (.ini files), performance, jobs, diagnostics, system information
Framework Message Window: complete history of messages sent during the session
Central System Data: provides information: time of last refresh, database startup
time and database name
Action Area: displays details of currently selected action
Action Message Window: additional information for selected action
DBA Planning calendar only available in DBACOCKPIT, not in HANA Studio
Integrating SAP HANA as a remote database (with SOLMAN version 7.10 SP04)
Prerequisites for SOLMAN integration:
Installation of the HANA client software, kernel version min 7.20 patch 100, SAP
HANA DBSL min 7.20 patch 110, SAP Host Agent min 7.20 patch 84, SAP SOLMAN
Diagnostics Agent
Refer to these SAP Notes: 1664432, 1612172, 1672429, 1721598
To connect to remote SAP HANA database, add secondary database connection
Define: connection name, database system (SAP HANA DB), user name (SAP
HANA DB user with monitoring privileges) & password, Database host, SQL Port
(3##15)
(this can also be done from another system, it doesn't have to be from SOLMAN, as
long as you set up a secondary database connection; can be done from FID)
Current status: overview of the statuses of the most important database resources
(disk space, memory, CPU, services, alerts, time when db was started)
Performance: performance relevant information
Configuration: overview of the configuration file
Jobs: DBA Planning Calendar
Diagnostics: Trace possibilities (SQLDBC trace, database trace, explain)
System information: deeper investigation when analyzing performance issues
Documentation: link to documentation available on SDN
System landscape
You can analyze the performance of your database system using the Performance
Warehouse; prerequisite: SOLMAN with SMD enabled. All performance indicators
are stored in a BI system and used by SMD (to configure: use the SMD Setup Wizard)
Diagnostics consists of:
Audit log (all actions that make changes to the db), missing tables and indexes (not
available for remote systems), explain (execution plans for select, insert, update,
delete), SQL editor (to execute SQL statements), tables/views (display/monitor a
table/view), diagnosis files (used for SAP HANA DBs that are offline), SQLDBC
trace (activate, deactivate, analyze the SQLDBC trace), database trace (activate,
deactivate, analyze the trace)
System Information consists of:
Connections (detailed info about open connections), transactions (display open
transactions), connection statistics (network I/O statistics), caches (caches created by
the SAP HANA DB), query cache (where executed SQL statements are cached), large
tables (largest tables in SAP HANA, table sizes, delta sizes, fastest growing tables),
SQL workload (overview of statements that were executed)
Lesson 4:
- HDBSQL features:
Execute SQL statement
Execute DB procedure
Request information about database catalog
Execute shell commands
Execute commands
Overview of all HDBSQL call options
Overview of all HDBSQL commands
- Two different options: one-step logon (with user name and password) and two-step
logon (start hdbsql first and connect to the system)
- One-step logon command: hdbsql [<options>] -n <database_host> -i <instance_id>
-u <database_user> -p <database_user_password>
- Two-step logon command:
hdbsql [<options>]
\c [<options>] -n <database_host> -i <instance_id> -u <database_user> -p
<database_user_password>
- Command to display general information: \s
- Command to exit: exit or quit or \q
- Command to display all command: \? or \h
- Use hdbuserstore to connect to SAP HANA; it is located in
/hana/shared/<SID>/hdbclient
- To create an entry in hdbuserstore:
hdbuserstore SET <userkey> <hostname>:3##15 <username> <password>
- To display all user store key:
hdbuserstore LIST
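Putting the hdbuserstore commands together, a typical session might look like the sketch below; host, key, user and password are made-up examples, and these commands require the SAP HANA client to be installed:

```shell
# Store connection details under a key (password goes into the secure store)
hdbuserstore SET ADMINKEY "hanahost:30015" SYSTEM MyPassword1

# List all stored keys (passwords are not displayed)
hdbuserstore LIST

# Connect with hdbsql using the key instead of typing credentials
hdbsql -U ADMINKEY "SELECT * FROM DUMMY"
```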
Unit 8
Lesson 1:
- Different ways to start/stop an SAP HANA system:
HANA Studio (must know the <sid>adm credentials)
Use OS commands:
Log in as <sid>adm and execute: HDB start or HDB stop (this only starts and stops
on the local host)
As root execute: sapcontrol -nr 00 -function StopSystem ALL
sapcontrol -nr 00 -function StartSystem ALL
sapcontrol -nr 00 -function GetProcessList
Or use <sid>adm with sudo (define it first in sudoers)
These commands are used in scale-out HANA systems
- When the system is started, these activities are executed:
o The database retrieves the status of the last committed transaction
o All changes of committed transactions that were not yet written to the data area
are redone
o All write transactions that were open when the db stopped are rolled back
o Row tables are loaded into memory
o A savepoint is performed
o Relevant column tables and their attributes are loaded into memory
asynchronously
Stopping the SAP HANA database:
o Hard: forces all db services on all hosts to stop immediately
o Soft: triggers a savepoint operation before stopping all db services
o Stop wait timeout: how long to wait for services to stop; if the timeout expires,
the remaining services are shut down
You can start or stop individual database services with the system privilege SERVICE
ADMIN; these options are available:
o Stop: the service is stopped normally and restarted
o Kill: the service is stopped immediately and restarted
o Reconfigure service: the service is reconfigured and changes to parameters are
applied
o Start missing services: any inactive services are started
Lesson 2:
- Configuring SAP HANA:
o Database user logon: connection detail and authentication
o JDBC trace: enable JDBC trace to identify issue with HANA studio connectivity
o License: display and install new license key
o Resource: change description of system
o SAP system logon: enter and store <sid>adm credential
o Security: maintain SAML identity provider
o Version history: display version and installation time
o XS properties: maintain XS host
- Organizing SAP HANA Systems in folders: only available from Administration
console perspective
- Maintaining SAP HANA Studio preferences: Window -> Preferences
If all the other services are running but there is an error, this could be because
sapstartsrv cannot be reached due to an incorrectly configured HTTP proxy:
Go to Window -> Preferences -> Network Connections -> change from Native to
Direct
- Parameters can be changed and displayed in the Configuration tab of the
Administration editor. You need the privilege INIFILE ADMIN
Parameters that are active at system level are indicated by an icon
Parameters that are active and deviate from the default are indicated with a green icon
- Configuration files are located in:
o Global parameters: /usr/sap/<sid>/SYS/global/hdb/custom/config
o Server parameters: /usr/sap/<sid>/HDB/<instance #>/<server name>
- During the installation of the SAP HANA db, these configuration files are created:
o sapprofile.ini: system ID information (SID and instance #)
o daemon.ini: info on which db services to start
o nameserver.ini: global info, system-specific landscape ID, assignment of roles
(MASTER, WORKER, or STANDBY)
Global_allocation_limit parameter is calculated as: 90% of the first 64 GB of
available physical memory on the host + 97% of each further GB
Savepoint_interval_s: how often the internal buffer is flushed to disk and a restart
record is written
Log_mode set to normal: allows point-in-time recovery
Log_mode set to overwrite: can only recover to a specific data backup
Enable_auto_log_backup: prevents a log-full situation that can cause the db to freeze
Log_buffer_size_kb: sets the size of one in-memory log buffer (a higher buffer size
increases throughput, but COMMIT latency is higher)
Content_vendor: has to be maintained before creating a delivery unit
Lesson 3:
- When to use the column store:
Calculations on a small number of columns; the table is searched based on the values
of a few columns; the table has a large number of columns; the table has a large
number of rows and columnar operations are required; high compression rates shall
be achieved
- Sample SQL command to create a column store table:
CREATE COLUMN TABLE <schema>.<table name>
(<column1> <type>(<length>) DEFAULT <value> NOT NULL,
<column2> <type>(<length>) DEFAULT <value>,
PRIMARY KEY (<column1>))
- When to use the row store:
Processing a single record at a time / many selects and updates; accessing complete
records; columns contain distinct values; no aggregation or fast search required;
small number of rows
- Open tables and views in different ways:
Table definition: info about table structure and properties
Table content: executes a select statement on the table
Data preview: analyze the content in different ways
- Table partitioning and distribution:
Split a column store table horizontally into disjunctive sub-tables/partitions
Additional DDL statements for partitioning: create partitions, move partitions to
other hosts, add/delete partitions, re-partition a table, merge partitions into one table
- Advantages of partitioning:
Load balancing (across multiple hosts), parallelization (several execution threads),
partition pruning (improves response time), improved performance of the delta merge
(depends on the size of the main index), overcomes the size limitation of column store
tables (max 2 billion rows), explicit partition handling
- Single level partitioning:
Hash partitioning: data is distributed equally
Range partitioning: dedicated partitions for certain value ranges
Round robin: distributed equally like hash but no need to define a partitioning column
(tables must not have primary keys)
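The three single-level partitioning schemes above can be sketched in DDL as follows (table and column names are made up for illustration):

```sql
-- Hash partitioning: 4 partitions, rows distributed by the hash of column a
CREATE COLUMN TABLE T_HASH (a INT, b VARCHAR(20))
  PARTITION BY HASH (a) PARTITIONS 4;

-- Range partitioning: dedicated partitions per value range, plus a rest partition
CREATE COLUMN TABLE T_RANGE (a INT, b VARCHAR(20))
  PARTITION BY RANGE (a)
  (PARTITION 1 <= VALUES < 100, PARTITION 100 <= VALUES < 200,
   PARTITION OTHERS);

-- Round robin: no partitioning column; the table must not have a primary key
CREATE COLUMN TABLE T_RR (a INT, b VARCHAR(20))
  PARTITION BY ROUNDROBIN PARTITIONS 4;
```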
A special time selection partitioning is called aging (data is partitioned into different
temperatures like hot or cold)
Table distribution editor:
Move tables and partitions to other hosts in the system
Partition non-partitioned tables
Change a partitioned table into a non-partitioned one by merging its partitions
For partitioned tables, there are 2 checks:
o General check: consistency check
CALL CHECK_TABLE_CONSISTENCY ('CHECK_PARTITIONING', '<schema>', '<table>')
o Data check: general check plus a check that all rows are located in the correct
partitions
CALL CHECK_TABLE_CONSISTENCY ('CHECK_PARTITIONING_DATA', '<schema>', '<table>')
CALL CHECK_TABLE_CONSISTENCY ('REPAIR_PARTITIONING_DATA', '<schema>', '<table>')
There is an option to replicate a table to multiple hosts, useful when master data has
to be joined with other tables located on multiple hosts and you want to reduce
network traffic:
CREATE COLUMN TABLE <table1> (i INT PRIMARY KEY)
REPLICA AT ALL LOCATIONS
SAP HANA manages loading and unloading tables into and from memory
independently, but you can do it manually:
Load and unload table commands:
LOAD <table_name> ALL
UNLOAD <table_name>
Load and unload individual columns:
LOAD <table_name> (<column_name>, ...)
UNLOAD <table_name> (<column_name>, ...)
You can also trigger the delta merge manually:
MERGE DELTA OF <table_name>
You can export catalog objects (including tables) to the file system and import them
back into another database (metadata only, or metadata and content)
Lesson 4:
- Administrative tasks:
o Initial tasks: full data and file system backup, install valid license
o Regular tasks:
Check system status: overall system state, general system information, alert,
CPU/memory/file system utilization
Check status of services (from landscape tab): list of services, status, detail
resource consumption, can restart/kill/stop/reconfigure services, can reset memory
statistics
Perform data backups
Check alerts and error logs
Check performance
Check volume configuration
Maintain configuration
Check system information
o On-demand tasks:
Check diagnosis files
Activate and analyze additional traces
Avoid log-full situations
Avoid the log backup area becoming full
Monitor disk space used for diagnosis files
Monitoring resource utilization and memory allocation:
Shows peak used memory and used memory. There is also a feature to limit maximum
memory consumption per statement (parameter: statement_memory_limit=<integer>
in GB, in the [memorymanager] section of global.ini)
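The statement memory limit above is maintained in global.ini; a minimal sketch (the 10 GB value is just an example, not a recommendation):

```ini
; global.ini, section [memorymanager]
[memorymanager]
; abort any single statement that tries to allocate more than 10 GB
statement_memory_limit = 10
```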
Memory Display:
o show memory peak used
o show memory used
o can reset memory statistics
Memory Overview Editor:
o detail of memory usage (pie chart): physical memory, used memory, memory
usage of table, memory usage of database management
Memory allocation statistics editor:
o for each selected service, components are listed based on currently used memory
o SAP HANA used memory is displayed in a pie chart
o for each component, allocators are listed based on inclusive memory
o top 10 highest-consuming allocators
Host sub-tab displays:
o all hosts in distributed system
o failover status
o host reconfiguration option
o remove host from system
Redistribution sub-tab displays:
o redistribution of data before removing a host
o redistribution of data after adding a host
o optimize table distribution
o optimize table partitioning
System replication sub-tab displays:
o initial system replication configuration to establish the connection between 2
identical systems
o system replication status to make sure that both systems are in sync
o trigger failover to the secondary system
Alert sub-tab:
o Current alert
o Detail information of individual alert
o Alert sorted by time period (last 15, 30, 60, 120 minutes, today, yesterday, last
week)
In the Performance tab:
o Threads
o Jobs
o Expensive statements
o SQL plan cache
o Blocked transactions
o Sessions
o Load
Threads sub-tab: all running threads, blocked threads, details of threads, end of
operation, view call stack. Threads are grouped by: connection ID, call hierarchy,
duration
Sessions sub-tab: all sessions (active/inactive), blocked sessions, statistics (avg. query
runtime, # of DML and DDL statements), cancel sessions
Blocked transactions sub-tab: shows transactions that cannot be processed further
because they need to acquire a transactional lock that is currently being held by
another transaction, or are blocked waiting for another resource (disk or network)
SQL plan cache sub-tab: for performance analysis, overview of statements executed
in the system; stores compiled execution plans of SQL statements for reuse; for
monitoring it keeps statistics of each plan
Expensive statements sub-tab: individual SQL queries whose execution exceeds the
threshold and may reduce performance of the database; the trace records information
for further analysis. The expensive statements trace is deactivated by default.
Job progress sub-tab: monitor long-running operations such as delta merges, data
compression, delta log replays; current high load; start time; when they will finish
Load sub-tab: display of current performance (CPU usage, memory consumption,
table unloads)
Volume tab, you can monitor:
o Disk usage
o Volume size
o Other disk activity statistics