
WINTER CORPORATION
WHITE PAPER
SPONSORED RESEARCH PROGRAM
LARGE-SCALE TESTING OF
THE SAP NETWEAVER
BI ACCELERATOR
ON AN IBM PLATFORM
Specialists in the World's Largest Databases
411 WAVERLEY OAKS ROAD, SUITE 328
WALTHAM, MA 02452
781-642-0300
RICK BURNS
March 2008
© 2008 Winter Corporation, Waltham, MA. All rights reserved.
Duplication only as authorized in writing by Winter Corporation.
Table of Contents
1  The Challenge Facing SAP NetWeaver BI Accelerator
2  Overview of SAP NetWeaver BI Accelerator
3  Project Jupiter Test Description
   3.1  System Configuration
   3.2  Data and Queries
   3.3  Test Protocols
4  Project Jupiter Test Results
   4.1  Load Tests
   4.2  Single User Tests
   4.3  Multi-User Tests
   4.4  System Utilization
   4.5  Summary of Test Results
5  Conclusion
1 The Challenge Facing SAP NetWeaver BI Accelerator
SAP NetWeaver BI customers, like data warehouse users everywhere, face growing demands to extract
increased value from their accumulating data through deeper analysis and exploration. This places
greater stress on the data management infrastructure to support larger data volumes, accessed by more
users in increasingly varied and unpredictable patterns. Leading companies have thousands of users
who want to analyze tens of terabytes of historic information. The results of these queries drive critical
tactical decisions by information workers across the enterprise, as well as long term strategic plans by
central office business analysts.
SAP developed the NetWeaver BI Accelerator to address the emerging demand for such large-scale, ad
hoc analytic activity among SAP NetWeaver BI users. Previous testing performed by the SAP engineering
team, working jointly with WinterCorp, demonstrated initial success, but at relatively modest scale.
These tests, conducted in 2006, showed predictable performance across a varied workload, with good
scalability and load performance. At the time, testing was limited to less than one terabyte of user data,
tens of concurrent queries, running on a single blade enclosure with up to 14 nodes. While noting
the notable quality of the results obtained in these tests, our research report¹ identified large-scale
testing (increasing data volume to many terabytes and query concurrency sufficient to support
thousands of active users, requiring higher levels of parallelism across multiple blade chassis) as an
important near-term goal.
In 2007, SAP and IBM engaged WinterCorp to monitor and independently report on Project Jupiter, a
joint effort by SAP and IBM to conduct large-scale scalability tests of the SAP NetWeaver BI Accelerator
on an IBM-provided infrastructure, against user data volumes between 5 terabytes and 25 terabytes. This
paper describes our analysis of the Project Jupiter test results. Highlights of our findings include:
Single user query performance that scales quite well as the volume of data processed grows;
Multi-user tests that demonstrate linear scalability in both throughput and response time across the
full range of data volume tested (5 TB to 25 TB);
High throughput in multi-user tests, at greater than 100,000 reports per hour at all test scales;
Concurrency of 100 to 800 concurrent query streams, with optimal results at 200 concurrent query
streams, enough to support an active user population in the thousands;
Ability to load data into the BI Accelerator from an existing SAP NetWeaver database, deployed on
IBM DB2, at rates in excess of one terabyte per hour;
Effective parallelization of query activity across up to 135 nodes, using up to 10 IBM blade chassis.
These results demonstrate the ability of SAP NetWeaver BI Accelerator to address the growing
requirements of SAP NetWeaver BI users for ad hoc data analysis at large scale.
A detailed description of the Project Jupiter tests follows.
¹ See: The SAP NetWeaver BI Accelerator: Transforming Business Intelligence, by Rick Burns & Robert Dorin,
WinterCorp, September 2006. This research report is available on the SAP and WinterCorp web sites.
Methodology
WinterCorp was retained by SAP and IBM to provide an independent assessment of the large-scale tests of SAP
NetWeaver BI Accelerator on IBM infrastructure conducted as part of Project Jupiter. The assessment is based
on extensive review of Project Jupiter test plans, on-site observation of some Project Jupiter testing, and detailed
examination of Project Jupiter test results.
WinterCorp, SAP and IBM jointly agreed to publish the results of this assessment. As part of its agreement with SAP
and IBM, WinterCorp retains complete editorial control of the content and presentation of the published assessment.
SAP and IBM had the opportunity to comment prior to publication, but all findings, analysis, and conclusions are the
responsibility of Winter Corporation.
2 Overview of SAP NetWeaver BI Accelerator
To begin, a brief review of SAP NetWeaver Business Intelligence Accelerator is necessary. The BI
Accelerator is an appliance-like product composed of integrated hardware and software that provides
improved query performance and the flexibility to ask any question, at any time, of NetWeaver-managed
data by reducing the need for aggregates. It is available as an optional add-on to SAP NetWeaver BI. Its
use is completely transparent to analysts and applications, and it requires no changes to a customer's
existing data model.
The BI Accelerator is packaged as a collection of high-density blade servers with pre-installed SAP
application software, network-attached to the SAP NetWeaver BI server (Figure 1). The blade
servers provide a compact processor cluster, ideal for the parallel processing performed by the BI
Accelerator. The BI Accelerator contains copies of selected InfoCubes, reformatted, compressed, and
partitioned to optimize rapid search and analysis. A shared file system, accessible to all blades, provides
persistent storage of the reformatted InfoCubes. The BI Accelerator can quickly resolve data-intensive
queries using a divide-and-conquer approach on its in-memory database, sifting through and analyzing
large volumes of data in parallel to produce a relatively small answer set.
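The paper does not describe the engine's internals beyond this divide-and-conquer picture, but the idea itself (scan and aggregate each in-memory partition in parallel, then merge the small partial results) can be sketched in a few lines. The sketch below is purely illustrative, with invented data and an invented merge step; it is not SAP's implementation.

    # Illustrative sketch only: divide-and-conquer aggregation over in-memory
    # partitions, merging small partial results. Not SAP code; names are invented.
    from concurrent.futures import ProcessPoolExecutor
    from collections import Counter
    import random

    def aggregate_partition(rows):
        """Scan one in-memory partition and return a small partial result."""
        totals = Counter()
        for region, amount in rows:
            totals[region] += amount
        return totals

    def parallel_query(rows, num_partitions=4):
        """Split rows across partitions, aggregate each in parallel, merge results."""
        partitions = [rows[i::num_partitions] for i in range(num_partitions)]
        merged = Counter()
        with ProcessPoolExecutor(max_workers=num_partitions) as pool:
            for partial in pool.map(aggregate_partition, partitions):
                merged.update(partial)   # Counter.update adds the partial sums
        return dict(merged)

    if __name__ == "__main__":
        data = [(random.choice(["NA", "EMEA", "APJ"]), random.randint(1, 100))
                for _ in range(100_000)]
        print(parallel_query(data))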
Figure 1: SAP NetWeaver BI Accelerator Architecture
3 Project Jupiter Test Description
Project Jupiter was a joint program of SAP and IBM designed to test query and load performance of
the SAP NetWeaver BI Accelerator at very large scale. Its primary goal was to demonstrate the scalability of
the BI Accelerator to efficiently manage large databases and effectively exploit very large parallel BI
Accelerator environments.
The Project Jupiter test program involved constructing NetWeaver BI databases at three scale
points (5 TB, 15 TB, and 25 TB), loading the data into the BI Accelerator and executing a suite of query
tests at each scale point. The test system environments and data volumes for each scale point are shown
in Table 1. Query tests measured response time and throughput at multiple query concurrency levels.
Load tests measured the rate at which the BI Accelerator system was populated with data from the
NetWeaver BI database. Resource consumption, including processor time and IO rates, was monitored
for all tests.
The sections below describe the details of the test system configuration, the data model and queries
employed for the test, and the test protocols.
Table 1: Project Jupiter system sizes and data volumes
Scale    Blade Chassis    Nodes    RAM        Number of Rows    Data Volume    Index Volume    Total Volume
5 TB     2                27       432 GB     6 B               3.9 TB         1.1 TB          5 TB
15 TB    6                81       1296 GB    18 B              12.7 TB        1.8 TB          14.5 TB
25 TB    10               135      2160 GB    30 B              20.7 TB        2.9 TB          23.6 TB
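As a quick consistency check on Table 1, total RAM divided by node count comes out to the same per-blade memory at every scale point. The short calculation below reproduces that arithmetic; the 16 GB-per-blade figure is derived from the table rather than stated in the paper.

    # Derived from Table 1: total RAM divided by node count at each scale point.
    scale_points = {          # scale: (nodes, total RAM in GB)
        "5 TB": (27, 432),
        "15 TB": (81, 1296),
        "25 TB": (135, 2160),
    }
    for scale, (nodes, ram_gb) in scale_points.items():
        print(f"{scale}: {ram_gb} GB / {nodes} nodes = {ram_gb / nodes:.0f} GB per blade")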
3.1 SYSTEM CONFIGURATION
The system landscape for the Project Jupiter tests provided a complete NetWeaver BI infrastructure on
IBM servers and storage systems. It included a SAP NetWeaver BI Server running on a 64-processor IBM
System p 595 SMP server, a NetWeaver database server on a 32-processor IBM System z9 mainframe
running DB2 for z/OS V9, the BI Accelerator running on a large array of IBM BladeCenter HS21 blades,
spanning two to ten blade chassis, and a small array of query drivers running on IBM System x servers.
Systems were attached to a high-speed Ethernet network that provided connectivity to all servers and
high bandwidth inter-processor communications among the BI Accelerator blades.
Disk storage was provided by six fibre attached IBM System Storage DS8300 storage arrays containing
an aggregate of 150 TB of disk space. IBM's General Parallel File System (GPFS) was used to provide a
shared storage pool for all BI Accelerator blades. GPFS provides high-performance access
from multiple servers to a large shared storage pool. It is ideally suited to supporting large-scale BI
Accelerator installations, and is a distinct advantage of hosting large BI Accelerator systems on
an IBM infrastructure.
BI Accelerator IO processing was offloaded to a set of storage servers on IBM System x processors. The
GPFS Network Shared Disk (NSD) storage server nodes and the BI Accelerator blades were connected
via Infiniband, providing 80 Gbps dedicated bandwidth to each blade chassis. The GPFS NSD storage
server nodes were connected to the storage system via 20 Gbps Fibre Channel, with two Fibre Channel
connections to each storage server node. The system was designed to avoid test performance constraints
due to IO bottlenecks. A detailed description of the landscape is shown in Figure 2.
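To put the interconnect figures in perspective, the back-of-the-envelope arithmetic below converts the stated 80 Gbps per blade chassis into bytes per second and scales it to the ten chassis used at the 25 TB point. The aggregate number is a derived estimate and assumes each chassis's bandwidth is fully independent.

    # Back-of-the-envelope interconnect arithmetic from the stated configuration.
    GBPS_PER_CHASSIS = 80          # InfiniBand bandwidth dedicated to each blade chassis
    CHASSIS_AT_25TB = 10           # chassis used at the largest scale point

    gbytes_per_sec_per_chassis = GBPS_PER_CHASSIS / 8      # bits -> bytes
    aggregate = gbytes_per_sec_per_chassis * CHASSIS_AT_25TB
    print(f"Per chassis: {gbytes_per_sec_per_chassis:.0f} GB/s")
    print(f"Aggregate over {CHASSIS_AT_25TB} chassis: {aggregate:.0f} GB/s (assumes full independence)")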
Use of the BI Accelerator blades varied for each test scale (Table 1). At the 5 TB scale, 2 blade chassis
containing 27 blades were used, with 6 chassis and 81 blades at the 15 TB scale, and 10 blade chassis
containing 135 blades at the 25 TB scale. Spare blades, one for every two chassis, were available in case
of blade failure. IBM provides the only infrastructure certified to run SAP NetWeaver BI Accelerator
at this scale.
In the BI Accelerator, data is partitioned across the nodes of the system, with one partition per node,
up to a maximum of 40 partitions per table. Multi-threaded parallel processes are allocated across
the nodes to service each data partition. This yields a degree of parallelism ranging from 27 at the 5 TB
scale, to 135 at the 25 TB test scale. Being multi-threaded, these processes use intra-node parallelism to
process columns of each data partition concurrently.
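The partition-to-node relationship described above (one partition per node, capped at 40 partitions per table, with data spread evenly across nodes) can be sketched as follows. The hash-based row placement is an assumption made for illustration; the paper does not describe the actual distribution function.

    # Illustrative sketch of the partitioning rules described in the text:
    # one partition per node up to a 40-partition cap, rows spread evenly.
    from zlib import crc32

    MAX_PARTITIONS_PER_TABLE = 40

    def partition_count(node_count: int) -> int:
        """One partition per node, capped at 40 partitions per table."""
        return min(node_count, MAX_PARTITIONS_PER_TABLE)

    def partition_for_row(row_key: str, node_count: int) -> int:
        """Hypothetical hash placement; the real distribution function is not documented here."""
        return crc32(row_key.encode()) % partition_count(node_count)

    for nodes in (27, 81, 135):          # the 5 TB, 15 TB and 25 TB configurations
        print(f"{nodes} nodes -> {partition_count(nodes)} partitions per table")
    print(partition_for_row("ORDER-0001", 135))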
The system was installed and all testing occurred at IBM's Systems and Technology Group labs in
Poughkeepsie, New York, between July and December 2007.
Figure 2: Project Jupiter System Landscape
3.2 DATA AND QUERIES
Data for the Project Jupiter tests was drawn from two sources. The first part consists of data provided
by a SAP NetWeaver BI customer, a large U.S. manufacturer. It contains sales, billing, and delivery data
for 10 years. Separate InfoCubes were built for each subject area, for each year, yielding a total of 30
InfoCubes. A single MultiProvider is used to provide a common data view across the 30 InfoCubes.
The second part comes from the SAP NetWeaver BI Standard Application benchmark. It contains 48
months of sales and distribution data organized into 48 monthly InfoCubes. Another MultiProvider
provides a common view across these InfoCubes. Table 2 describes the data layout in more detail,
and shows how the size of the two data models changes as the test scale grows. Note that as the
test scale increased, data was added to all InfoCubes, so the volume of data in each period grew
approximately uniformly.
Table 2: Project Jupiter Data Models
Scenarios        InfoCubes    Dimensions    Key Figures    Records (5 TB)    Records (15 TB)    Records (25 TB)
Manufacturing    30           14-16         58-93          3.2 B             9.5 B              16.3 B
Benchmark        48           8             76             2.8 B             8.5 B              13.7 B
Total            78                                        6 B               18 B               30 B
Data was partitioned as it was loaded into the BI Accelerator. The number of partitions at each scale
point is shown in Table 3. Note that despite the mismatch at the larger scale points between the degree
of data partitioning and the degree of parallelism, data is distributed uniformly across the nodes of the
BI Accelerator to ensure a balanced workload across the entire parallel environment.
Table 3: BI Accelerator Data Partitioning
Test Scale    BI Accelerator Nodes    Data Partitions
5 TB          27                      27
15 TB         81                      40
25 TB         135                     40
Queries executed against these data models involved thousands of variations on 14 complex reporting
templates. All queries accessed data via MultiProviders. They involved a fact table joined to large numbers
of dimensions with numerous predicates and multiple aggregation fields. Query predicates included
widely varying time periods, and location, product, and customer ranges that required processing
large volumes of data. Depending on query selectivity, individual reports frequently touched more
than one underlying InfoCube, averaging 1.6 InfoCubes accessed per report. Since the data volume
in each period grows as the test scale increases, the work performed by each query also grows fairly
linearly as the test scale increases.
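Because every query runs against a MultiProvider that spans yearly or monthly InfoCubes, a report whose time range crosses a period boundary touches more than one underlying InfoCube, which is how the 1.6-cube average arises. The sketch below illustrates that routing step; the cube naming scheme and pruning logic are hypothetical, not the actual NetWeaver BI mechanism.

    # Hypothetical sketch: pruning a MultiProvider's underlying InfoCubes by the
    # query's time range, so only the relevant cubes are scanned.
    from dataclasses import dataclass

    @dataclass
    class InfoCube:
        name: str
        first_year: int
        last_year: int

    # Illustrative layout: one sales InfoCube per year, as in the manufacturer data set.
    CUBES = [InfoCube(f"SALES_{year}", year, year) for year in range(1998, 2008)]

    def cubes_for_range(cubes, from_year, to_year):
        """Return the InfoCubes whose year range overlaps the query predicate."""
        return [c for c in cubes if c.first_year <= to_year and c.last_year >= from_year]

    # A report covering 2005 through 2007 touches three yearly cubes.
    print([c.name for c in cubes_for_range(CUBES, 2005, 2007)])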
3.3 TEST PROTOCOLS
HP's LoadRunner, a widely used commercial test driver package, controlled test execution.
Tests were initiated from the query driver servers. Query concurrency was varied from a single test
stream to 800 concurrent query streams. Query streams cycled continuously through the thousands of
query instances with a two-second inter-query delay. For multi-user tests, each query stream executed
the sequence of queries starting from a different offset to ensure that concurrent in-flight queries were
processing different data. Each test involved data from all InfoCubes.
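The protocol just described (N concurrent streams cycling through the same query sequence from different offsets, with a two-second inter-query delay) was driven by HP's LoadRunner. For readers who want the pattern spelled out, a minimal Python sketch of an equivalent driver follows; execute_query is a placeholder standing in for the actual report submission, which this paper does not describe at the API level.

    # Minimal sketch of the multi-stream protocol: each stream starts the shared
    # query sequence at a different offset and pauses two seconds between queries.
    import threading
    import time

    QUERIES = [f"report_variant_{i}" for i in range(1000)]   # placeholder query instances
    INTER_QUERY_DELAY = 2.0                                   # seconds, as in the tests

    def execute_query(query: str) -> None:
        """Placeholder for submitting one report to the BI system."""
        time.sleep(0.01)

    def query_stream(stream_id: int, num_streams: int, duration_s: float) -> None:
        offset = (stream_id * len(QUERIES)) // num_streams    # start point differs per stream
        i, deadline = offset, time.time() + duration_s
        while time.time() < deadline:
            execute_query(QUERIES[i % len(QUERIES)])
            i += 1
            time.sleep(INTER_QUERY_DELAY)

    def run_test(num_streams: int, duration_s: float) -> None:
        threads = [threading.Thread(target=query_stream, args=(s, num_streams, duration_s))
                   for s in range(num_streams)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        run_test(num_streams=8, duration_s=10)   # small local example; the tests used 100 to 800 streams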
WinterCorp reviewed and analyzed all test results and monitored test execution for the last week of
the test period.
4 Project Jupiter Test Results
To demonstrate SAP NetWeaver BI Accelerator performance and resource utilization at very large
scale, Project Jupiter executed separate tests of load performance, single-user query performance, and
multi-user query performance. In addition, resource utilization across the parallel BI Accelerator was
measured to assess the ability of the system to balance the workload across the parallel system as it was
expanded from two to ten blade chassis. The results of each test are analyzed below.
4.1 LOAD TESTS
The load performance tests measured the rate at which existing data in a SAP NetWeaver BI database
can be loaded and indexed into an attached parallel BI Accelerator system. Given rapidly growing data
volumes and shrinking batch windows, load performance is a critical element of successful use of the
BI Accelerator.
For load testing, the time to load and index the entire NetWeaver BI database, including all 78 InfoCubes,
at each of the three scale points was measured. Load times ranged from 6 hours and 40 minutes at the 5
TB scale to just over 15 hours at the 25 TB scale (Table 4). BI Accelerator load rates peaked in excess of 1 TB per hour,
at 1.28 TB per hour at the 25 TB scale (Figure 3).
Table 4: BI Accelerator Load Times
Test Scale    Data Partitions    Load Concurrency    Elapsed Time
5 TB          27                 5                   6:40:30
15 TB         40                 10                  11:07:07
25 TB         40                 10                  15:04:29
The difference in load rates of roughly two times, between the 5 TB scale and the larger scale points,
appears to be due in large part to the different levels of concurrency used during the respective loads.
Using five concurrent load streams, rather than 10, while keeping the BI Server load parameters the same
at all three scales, resulted in slower load rates at the 5 TB scale compared to the two larger scale points.
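Table 4 shows the load running with 5 or 10 concurrent streams over the 78 InfoCubes. A hypothetical sketch of that kind of bounded-concurrency load is shown below; load_infocube is an invented stand-in for the actual extract, compress, and index step, which the paper does not expose.

    # Hypothetical sketch: loading 78 InfoCubes with a bounded number of
    # concurrent load streams, as in Table 4 (5 streams at 5 TB, 10 at 15/25 TB).
    from concurrent.futures import ThreadPoolExecutor
    import time

    INFOCUBES = [f"infocube_{i:02d}" for i in range(78)]   # placeholder names

    def load_infocube(name: str) -> str:
        """Stand-in for extracting, compressing and indexing one InfoCube."""
        time.sleep(0.05)
        return name

    def load_all(load_concurrency: int) -> float:
        start = time.time()
        with ThreadPoolExecutor(max_workers=load_concurrency) as pool:
            list(pool.map(load_infocube, INFOCUBES))
        return time.time() - start

    if __name__ == "__main__":
        for streams in (5, 10):
            print(f"{streams} streams: {load_all(streams):.2f} s (toy workload)")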
On the other hand, the BI Accelerator load
process did not demonstrate linear scalability.
Given that the scale of the BI Accelerator
system grew in proportion to the growth in
the data volume, the question arises of why
the elapsed time to load the BI Accelerator
increases at all as the data volume increases.
The answer appears to be that the work
performed by the BI Server during BI
Accelerator load has a significant impact on
the scalability of the load process. For all
Project Jupiter tests, the NetWeaver BI Server
was a 64-processor IBM p595 SMP server. As
the BI Server approached saturation, it limited
load throughput, despite the growth of the
BI Accelerator system (Figure 4). Reducing
the role that the BI Server plays in the load
process may be instrumental in improving
future BI Accelerator load scalability.
Nonetheless, load rates above 1 TB per hour
are more than 50 times greater than rates
observed in earlier load testing on a much
smaller, 4-processor BI Server system, and are
likely to be more than sufficient for periodic
updates to very large BI Accelerator databases
for the next several years.
Figure 3: BI Accelerator Load Rates
Figure 4: BI Accelerator Load: Processor Utilization

4.2 SINGLE USER TESTS
Project Jupiter tested basic query scalability of the NetWeaver BI Accelerator through single-user tests
of each of the 14 query templates as the system scaled between 5 TB and 25 TB. At each scale point, all
variations of each query template were executed consecutively through a single query stream, and the
query response time was measured. The average response time for each query was calculated (Figure 5).
Table 5: Single User Test Results
Test Scale    Avg. Records Processed    Avg. Response Time
5 TB          4.7 M                     0.77 sec.
15 TB         15.8 M                    0.90 sec.
25 TB         25.6 M                    0.92 sec.
Analysis of the single user tests demonstrated very good responsiveness across all scales, with average
response times of less than 1 second overall, and below 2 seconds for nearly all of the queries, even
at the largest scale. Note that, on average,
each query processed millions of InfoCube
records, ranging from an average of 4.7
million records at the 5 TB scale, to 25.6
million at the 25 TB scale point. This shows
the power of the parallel BI Accelerator
engine.
Figure 5 also shows response time increasing
as the scale increased. At first look, this
appears to indicate less than linear scalability
across the three scale points tested. This
appearance is deceptive however, as the
volume of data processed by each query
grew more than linearly as the test scale
increased. When adjusted for this super-linear growth in the volume of data processed by each query,
the ability of the SAP NetWeaver BI Accelerator to scale linearly across the tested scale points becomes
clear (Figure 6). In Figure 6, linear growth in data volume and response time would be shown by a relative
growth rate of 1.0. As the chart shows, data
volume grew at a faster than linear rate as the
test scale increased, while the response time
growth rate stayed relatively flat.
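The adjustment can be reproduced from Table 5. Assuming Figure 6 normalizes each metric's growth factor by the growth in test scale (our reading of the chart, not a statement in the paper), data volume per query grows slightly faster than the scale while response time grows far more slowly, as the arithmetic below shows.

    # Relative growth rates derived from Table 5, using the 5 TB run as the baseline.
    # Normalizing by the growth in test scale is an assumption about Figure 6.
    baseline_scale, baseline_records, baseline_rt = 5, 4.7e6, 0.77
    runs = [(15, 15.8e6, 0.90), (25, 25.6e6, 0.92)]   # (scale TB, records/query, resp. time s)

    for scale, records, rt in runs:
        scale_growth = scale / baseline_scale
        print(f"{scale} TB: data growth {records / baseline_records / scale_growth:.2f}x scale, "
              f"response-time growth {rt / baseline_rt / scale_growth:.2f}x scale")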
Figure 5: Average Response Time by Query
Figure 6: Query Scalability

4.3 MULTI-USER TESTS
In order to test the scalability of the SAP NetWeaver BI Accelerator under heavy load, a series of
multiuser tests was executed at each of the three test scale points. The number of concurrent query
streams was varied between 100 and 800, with each stream running the query sequence from a different
start point, so that each in-flight query was processing different data. (At the 5 TB scale, tests were only
executed through 400 concurrent streams.) A short inter-query delay of two seconds was used for all
multiuser tests, simulating the behavior of thousands of concurrent users. To assess multiuser scalability,
both query throughput, measured as queries per hour, and query response time were measured.
At each tested scale, multiuser query throughput
peaked at more than 100,000 queries per hour at
200 concurrent query streams and then plateaued
at higher levels of concurrency (Figure 7). Beyond
200 concurrent query streams the NetWeaver
BI system reached saturation on processor
resources, first on the BI Server, and then on the
BI Accelerator at a somewhat later point (Figure
8). For this reason, analysis of BI Accelerator
scalability relied on the 200 concurrent query
streams test points exclusively.
Overall, the Project Jupiter tests demonstrated
linear scalability, for both query throughput and response time, through the 25 TB test scale (Table 6),
achieving in excess of 100,000 requests per hour and a flat response time of roughly 4 seconds. This is
especially significant since, similarly to the single user test results, the average volume of data processed
per query rose faster than linearly.
Table 6: Multiuser Test Results
Test Scale    Records Processed    Throughput      Avg. Response Time
5 TB          5.9 M                100,404 Qph     4.5 sec.
15 TB         22.2 M               100,940 Qph     4.2 sec.
25 TB         36.8 M               100,712 Qph     4.2 sec.
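As a rough cross-check on Table 6, the peak throughput is close to what a simple closed-form estimate predicts for 200 streams that each spend roughly four seconds per query plus the two-second inter-query delay. The estimate below ignores queuing and driver overhead, which is why it comes out somewhat above the measured rate of about 100,000 queries per hour.

    # Back-of-the-envelope throughput check against Table 6 (200 concurrent streams).
    STREAMS = 200
    INTER_QUERY_DELAY = 2.0                      # seconds between queries in each stream

    for scale, avg_response in (("5 TB", 4.5), ("15 TB", 4.2), ("25 TB", 4.2)):
        cycle = avg_response + INTER_QUERY_DELAY            # seconds per query per stream
        est_qph = STREAMS * 3600 / cycle
        print(f"{scale}: ~{est_qph:,.0f} queries/hour estimated (measured ~100,000)")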
In addition, when viewed on a per query
basis, average response time per query also
demonstrated linear scalability (Figure 9), with
most queries averaging a response time of less
than 4 seconds.
The high throughput, low response time, high
concurrency, and linear scalability demonstrated
by the Project Jupiter multiuser tests make a
strong case for the ability of SAP NetWeaver BI
Accelerator to meet multi-dimensional business
intelligence requirements at large scale.
Figure 7: Multiuser Throughput
Figure 8: Multiuser Processor Utilization
Figure 9: Multiuser Response Time

4.4 SYSTEM UTILIZATION
Optimal performance in a parallel system depends on the balanced distribution of work among all of the
cooperating nodes. Otherwise, overall system throughput is constrained by the performance of the
slowest, and usually busiest, node. In prior testing on a single blade chassis, the NetWeaver BI
Accelerator demonstrated a high degree of balance across up to 14 nodes.
In the current Project Jupiter tests, at
scales up to 135 nodes, or 10 times that of
the 2006 tests, processor utilization was
monitored to assess large-scale system
balance in the BI Accelerator. Overall,
the BI Accelerator system showed good
workload balance across all nodes as the
test system was scaled from 27 to 135
nodes (Figure 10).
The best balance was achieved at the 5 TB
scale, and deteriorated somewhat at the
larger scales as shown by the increase in
the standard deviation of average processor
utilization across nodes (Table 7). This
may be due to the difference between the
level of data partitioning and the degree
of parallelism in the larger scale test
systems, which increases the difficulty of
distributing work evenly across all of the
nodes, and to the increasing saturation of
the BI server as the workload increased at
higher test scales.
Table 7: BI Accelerator System Balance
Test Scale    Avg. Processor Utilization (%)    Standard Deviation
5 TB          71.20                             3.93
15 TB         66.65                             5.82
25 TB         59.89                             5.61
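The balance metric in Table 7 is simply the mean and standard deviation of average processor utilization taken across the BI Accelerator nodes. The sketch below shows that computation over a small set of invented per-node utilization samples; the numbers are not measurements from the test.

    # Minimal sketch of the balance metric in Table 7: mean and standard deviation
    # of average processor utilization across nodes (the samples here are invented).
    from statistics import mean, stdev

    per_node_utilization = [72.1, 68.4, 70.9, 74.0, 69.8, 71.5, 73.2, 70.6]  # % busy, hypothetical

    print(f"Average utilization: {mean(per_node_utilization):.2f}%")
    print(f"Standard deviation:  {stdev(per_node_utilization):.2f}")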
Figure 10: Processor Utilization by Blade

4.5 SUMMARY OF TEST RESULTS
In summary, the Project Jupiter test results highlight several important performance and scalability
characteristics of the SAP NetWeaver BI Accelerator.
The test system demonstrated the capability to load data into the BI Accelerator from an existing SAP
NetWeaver database, deployed on IBM DB2, at rates in excess of one terabyte per hour;
Single user query performance scaled quite well as the volume of data processed grew;
Multi-user tests demonstrated linear scalability in both throughput and response time across the
full range of data tested (5 TB to 25 TB);
Testing at all scales demonstrated consistently high throughput at a rate of more than 100,000 user
reports per hour;
Multi-user tests also showed the ability to support an active user population numbering in the
thousands, with concurrency ranging from 100 to 800 query streams and optimal results at
200 concurrent query streams;
In all tests, the system demonstrated effective parallelization of query activity across up to 135 nodes,
using up to 10 IBM blade chassis.
These results demonstrate the scalability of the BI Accelerator running on an IBM infrastructure
to efficiently manage large databases up to 25 TB, and to effectively exploit very large parallel BI
Accelerator environments.
5 Conclusion
To meet the widespread demand for better business intelligence, SAP NetWeaver BI customers need to
scale their NetWeaver BI systems to unprecedented levels. This will entail rapid growth in data volume,
user concurrency, and diversity and complexity of the workload. The workload dimension is especially
dynamic, as the demand for data exploration and analysis increasingly tilts the workload mix away from
standard reporting and toward unpredictable, ad hoc query processing.
The Project Jupiter test program, by testing query and load performance of SAP NetWeaver BI Accelerator
on IBM systems at scales up to 25 TB of user data, with hundreds of concurrent query streams against
a variable workload, demonstrates an ability to meet the need for more users to process much more
data in increasingly unpredictable ways, quickly and efficiently. These test results also provide strong
evidence of the capability of the BI Accelerator hosted on IBM systems to efficiently exploit hardware
resources well beyond a single blade chassis to deliver the class of scalable performance required to
meet these demanding business challenges.
A leading center of expertise in very large databases,
WinterCorp provides services in
consulting, research, architecture and engineering.
We help users and vendors understand their opportunities;
select their database and data warehouse platforms;
define and measure the value of their strategies, architectures and products;
plan, architect and design their implementations;
and manage their scalability, performance and availability issues.
Our focus is databases near, at and beyond the frontier of database scalability.
© 2008 Winter Corporation, Waltham, MA. All rights reserved.
Duplication only as authorized in writing by Winter Corporation.
411 WAVERLEY OAKS ROAD, SUITE 328
WALTHAM, MA 02452
781-642-0300
Visit us at www.wintercorp.com