
Steve Johnson, EPM

Infratects Annual EPM Infrastructure Event


March 14-15, 2014, Amsterdam, The Netherlands

Reasons for change


Considerations
Overview of Common Parameters
Analysing logs to validate the impact of change
Examples

The default settings can work

Application density and complexity
Business processes used within FM
Take advantage of available hardware resources
Performance issues
Indicators in the logs

High-density applications with large sub-cubes may benefit from tuning.
Large memory requirements when many scenarios are accessed together.
Data frequency: monthly, weekly, and daily applications.
If the underlying data volume and concurrency are not large, then the default settings may be sufficient.

Concurrent consolidation requirements:

Processing data across multiple scenarios
Many consolidations
Taskflow executions and web components
High reporting requirements
Extended Analytics usage

64-bit servers
Hexa-/octo-core CPUs
Use available physical memory
Increase the threads allocated to tasks

Long-running consolidations
Slow report and Smart View data retrieval
Running Tasks queuing
Server hangs and instability

Note: there may be other contributors:

- Relational database
- Application design
- Network latency

Too many/few FreeLRU runs
Pager Stats
Grow Cache
SYSINFO
Memory issues: "Not enough storage available"

Data frequency and density

Data Cache memory pool
Available physical memory
Number of applications
Application server purpose
Business processing requirements
Impact on database

A subcube is a collection of records for a combination of the Entity, Scenario, Year, and Value dimensions.
A record holds data for all base periods of an intersection.
Each record has an index and a data strip.
The data strip contains the value and status for all periods.
Data strips are stored in the data cache; indexes are allocated as needed.
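
As a mental model only (assumed names, not HFM's actual internal structures), the layout just described might be sketched like this:

```python
from dataclasses import dataclass, field

# Toy model of the structure described above -- illustrative only,
# not HFM's real internal representation.

@dataclass
class Record:
    index: tuple            # identifies the intersection within the subcube
    values: list[float]     # data strip: one value per base period
    statuses: list[int]     # data strip: one status per base period

@dataclass
class SubCube:
    # One subcube per Entity/Scenario/Year/Value combination.
    entity: str
    scenario: str
    year: int
    value: str
    records: list[Record] = field(default_factory=list)
```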

The vast majority of applications are monthly.
Data record size is 112 bytes (weekly: 472 bytes, daily: 3,296 bytes).
The size of an index is approximately 100 bytes.

So for a monthly application with 1 million records:
MaxCacheSize = 112 × 1,000,000 bytes, or roughly 106 MB
Private memory usage for this application would be (100 + 112) × 1,000,000 bytes, roughly 212 MB
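
A back-of-the-envelope sizing helper based on these figures (the record and index sizes are the slide's numbers; the rest is plain arithmetic):

```python
# Rough sizing sketch using the sizes quoted above: 112/472/3,296 bytes
# per record for monthly/weekly/daily data, plus ~100 bytes per index.

RECORD_BYTES = {"monthly": 112, "weekly": 472, "daily": 3296}
INDEX_BYTES = 100

def estimate_memory(num_records: int, frequency: str = "monthly"):
    """Return (data cache MB, total private MB) for a given record count."""
    strip_bytes = RECORD_BYTES[frequency] * num_records
    total_bytes = strip_bytes + INDEX_BYTES * num_records
    mb = 1024 ** 2
    return strip_bytes / mb, total_bytes / mb

# Monthly application with 1 million records, as in the example above;
# compare with the ~106 MB cache / ~212 MB private figures quoted there.
print(estimate_memory(1_000_000, "monthly"))
```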

The subcube Data Cache is allocated at startup.
Once allocated, it is never freed.
It is controlled by the MinDataCacheSizeInMB and MaxDataCacheSizeInMB registry settings.
If demand exceeds the pool space, records are paged out to disk to make space for new subcubes.

64-bit architecture is limited only by physical memory; the virtual address space is 8 TB.
32-bit architecture is limited by the virtual address space (/3GB switch).
So using 64-bit allows:
Utilization of all physical memory (no virtual address space limitation).
Larger cache sizes can be configured, for higher performance and fewer trips to the database.
Good for large, high-density applications with high memory requirements.

The number of FM applications on the server is important.
Each application can be tuned differently, or tuning can be applied at the server level.
Divide the total physical memory on the server by the number of applications to get the memory available per application.

A dedicated consolidation cluster may require different tuning parameters from the main reporting cluster serving FR and Smart View connections.
Review the reasons for FreeLRU running and tune appropriately.
All servers within the same cluster should have the same settings applied.

Whilst editing the registry settings it is also important to monitor the impact on the RDBMS.
Tuning could lead to increased connections to the database.
It could also increase the workload on the database due to the number of trips to retrieve data.

Many concurrent consolidations required
Large number of Taskflows required
High Smart View or reporting requirements
Extended Analytics requirements
Concurrent activity across servers and clusters

Application specific:
MaxNumDataRecordsInRAM: default 1,000,000
MaxNumCubesInRAM: default 30,000
MinDataCacheSizeInMB: default 130 MB
MaxDataCacheSizeInMB: default 260 MB
NumMinutesBeforeCheckingLRU: default 15 minutes
NumCubesLoadedBeforeCheckingLRU: default 100
NumMaxDBConnections: default 40
SQLCommandTimeout: default 60
NumConsolidationThreads: default 4, max 8
Number of VB script engines: default 32, max 64

MaxNumConcurrentConsolidations: default 8, max 8
MaxNumCommitThreads: default 8, no hard limit (maximum number of multithreaded connections allowed when updating several subcubes)

NumAsyncThreads: default 3, recommended 2 × number of cores (number of concurrent ICP reports or Taskflows on the web server)
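
These parameters live in the Windows registry on the application server. A sketch for reading them with Python's winreg follows; the key path shown is an assumption, so confirm where your HFM version stores its settings (hfm_admin.pdf documents them):

```python
# Sketch: read HFM tuning values from the Windows registry.
# The key path below is an ASSUMPTION -- verify it for your HFM version.
import winreg

HFM_KEY = r"SOFTWARE\Hyperion Solutions\Hyperion Financial Management\Server"
SETTINGS = ["MaxNumDataRecordsInRAM", "MaxNumCubesInRAM",
            "MinDataCacheSizeInMB", "MaxDataCacheSizeInMB"]

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HFM_KEY) as key:
    for name in SETTINGS:
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name} not set -- default applies")
```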

Registry Changes
Monitoring activity on the HFM application servers.

Use the logs to validate that registry changes are being applied on application startup. Search for "registry". The Test application has MaxNumDataRecordsInRAM set to 3 million.

The STATDIM4 application is using the default values.

Use the logs to validate that the Data Cache size is appropriate and no records are being paged to disk. Search for "Pager S". The Test application has 1,084,750 records paged to disk!

PagedNodes: records currently on disk
SingleRefNodes: total number of records in the subcube engine
PagedOutOps: total number of records paged out in the application's lifetime
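
A small sketch for scanning a server log for these Pager Stats lines. The log path and the exact line format are assumptions to adjust for your install; the "Pager S" search string is the one suggested above:

```python
# Sketch: flag Pager Stats lines that report records paged to disk.
# Log location and line format are ASSUMPTIONS -- adjust for your install.
import re
from pathlib import Path

LOG = Path(r"D:\Oracle\logs\hfm\HsvEventLog.log")   # hypothetical path
paged_nodes = re.compile(r"PagedNodes\D*([\d,]+)")

for line in LOG.read_text(errors="ignore").splitlines():
    if "Pager S" not in line:
        continue
    m = paged_nodes.search(line)
    if m and int(m.group(1).replace(",", "")) > 0:
        print("Paging detected:", line.strip())
```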

To check what settings are applied for an application, search for Pager(wof):
MinDataCacheSizeInMB = 750
MaxDataCacheSizeInMB = 1500
GrowByDataCacheSizeInMB = 25
MaxCacheSizeInDataRecs = 14,043,428

FreeLRU: the least recently used subcubes are released from memory.
If MaxNumCubesInRAM or MaxNumDataRecordsInRAM is exceeded, the FreeLRU routine is activated.
The settings NumMinutesBeforeCheckingLRU and NumCubesLoadedBeforeCheckingLRU control how often FreeLRU checks for limit compliance.
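
FreeLRU itself is internal to HFM, but the behavior described is standard least-recently-used eviction; a minimal conceptual sketch (assumed names, not Oracle code):

```python
# Conceptual LRU eviction, analogous to the FreeLRU behavior described above.
from collections import OrderedDict

class CubeCache:
    def __init__(self, max_cubes: int):
        self.max_cubes = max_cubes       # plays the role of MaxNumCubesInRAM
        self._cubes = OrderedDict()      # least recently used entries first

    def get(self, key):
        if key in self._cubes:
            self._cubes.move_to_end(key)          # mark most recently used
        return self._cubes.get(key)

    def load(self, key, subcube):
        self._cubes[key] = subcube
        self._cubes.move_to_end(key)
        while len(self._cubes) > self.max_cubes:  # limit exceeded:
            evicted, _ = self._cubes.popitem(last=False)  # the "FreeLRU" step
            print("released least recently used subcube:", evicted)
```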

Search for FreeLRUCachesIfMoreRAMIsNeeded

Here FreeLRU is running because the NumDataRecordsInRAM value of 1 million is exceeded.
At the time, between 891 and 1,832 cubes are held in RAM.

Here FreeLRU is running on NumCubesInRAM after MaxNumCubesInRAM was lowered to 100.

The Data Cache provides space for subcube records.
If the total records exceed the Data Cache, records are paged to disk.
If the limit is set too low, there are excessive trips to the database.
Based on why FreeLRU is running (NumCubesInRAM or NumDataRecordsInRAM), different changes need to be applied.
A larger number of cubes can be held by increasing MaxNumCubesInRAM.
MaxNumDataRecordsInRAM can be increased or lowered depending on the requirements.

The Data Cache grows when new subcubes need to be loaded and MinDataCacheSizeInMB is exceeded.
The cache grows in increments of GrowByDataCacheSizeInMB; the default value is 25 MB.
The Data Cache grows until MaxDataCacheSizeInMB is reached.
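
The number of grow operations between the minimum and maximum is simple arithmetic; a quick sketch using the defaults quoted earlier:

```python
import math

def grow_operations(min_mb: int, max_mb: int, grow_by_mb: int = 25) -> int:
    """How many GrowByDataCacheSizeInMB increments fit between min and max."""
    return math.ceil((max_mb - min_mb) / grow_by_mb)

# Defaults from the parameter list above: 130 MB min, 260 MB max, 25 MB steps.
print(grow_operations(130, 260))   # -> 6 grow operations before the cap
```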

UsedVirtualMem: virtual memory used by ALL processes
UsedPhysicalMem: physical memory used by ALL processes
NumUsers: users logged into the application across the cluster
PID: process ID of HsvDataSource

The subcube engine produces the best performance when all the subcubes required for a query reside in memory, so the aim is to:
Set the records held in memory as high as possible within physical memory limits.
Increase the data records held in the FM application server cache.
Reduce round trips to the database to increase performance.
Leave enough resources for the OS and other processes.

Set the Data Cache size high enough to avoid paging to disk:
Avoid slower performance from paging operations.
Conserve disk space on the FM application server.

These guidelines are based on a single monthly application:

For weekly data, divide MaxNumDataRecordsInRAM by 4.
For daily data, divide MaxNumDataRecordsInRAM by 30.

Client: 32 GB RAM, 64-bit consolidation server
1 main Production application
MaxNumDataRecordsInRAM set to 40,000,000
MinDataCacheSizeInMB set to 3,000 MB
MaxDataCacheSizeInMB set to 6,000 MB
Multiple consolidation batches running through the day, totaling 40+ consolidations.

Private memory usage grows to over 30 GB, causing server instability.
FreeLRU is triggered on the number of cubes in memory.
FreeLRU is also running on the number of records held in memory (so over 40 million records are regularly being held in the cache).
The HsvDataSource process is growing to 25 GB of private memory.
Consolidations sometimes hang and take a long time to complete.

From the logs it was clear the tuning applied caused the total available memory for the server to fall under 5%, at which point server behavior became unstable.
FreeLRU was not running regularly enough to drop records from RAM and free up memory.
Too many records were being held in RAM, leading to a high memory footprint.
The application had very high data volumes, and this data was being accessed frequently across multiple scenarios.

MaxNumDataRecordsInRAM lowered to 30,000,000
MinDataCacheSizeInMB set to 2,250 MB
MaxDataCacheSizeInMB set to 4,500 MB

The memory footprint of the HFM HsvDataSource process stayed around 18-23 GB.
Overall server stability and performance increased.

Client: 16 GB RAM, 64-bit FM application server
1 main Production application, 1 other smaller application
Tuning applied to the main application:
MaxNumDataRecordsInRAM set to 30,000,000
MinDataCacheSizeInMB set to 2,250 MB
MaxDataCacheSizeInMB set to 4,500 MB

FM application server runs out of disk space.
FM processing is unable to complete.

From the logs it was clear that the application had been paging many records to disk.
Available memory on the server was fine.
The Data Cache was too small to hold the number of records allowed in RAM.
The MaxNumDataRecordsInRAM setting of 30 million looked fine for the memory available and the number of applications in PROD.

The main application is monthly but also holds weekly data.
This means MaxNumDataRecordsInRAM has to be divided by 4 to give the optimal value:
MaxNumDataRecordsInRAM lowered to 7,500,000
MinDataCacheSizeInMB set to 2,250 MB
MaxDataCacheSizeInMB set to 4,500 MB

No paging during normal HFM executions

All settings are documented in hfm_admin.pdf:
http://docs.oracle.com/cd/E17236_01/nav/portal_5.htm

Any Questions?
