
Layered Scalable Architecture @ SAP 2.0

Document Owner: Martin Gembalzyk
Owner's department: IT Predictive Insight & Analytics
Version: 1.1
Last Update: 09.05.2014
Governed by (Accountable): IT Predictive Insight & Analytics

Contents

1 LAYERED SCALABLE ARCHITECTURE (LSA) 2.0
1.1 Quick Reference
1.2 Introduction
1.3 Adjustments and Enhancements of LSA 1.0
1.4 Concept
1.5 Layers at SAP
1.6 Information and Data Content Areas
1.6.1 Data Content Area
1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)
1.6.1.2 Twin Propagator Data Flow
1.6.1.3 Minimum required EDL data flow for master data in Data Content Area
1.6.1.4 Data Content Area Layer Descriptions
1.6.2 Information Content Area
1.6.2.1 Business Transformation and Integration Layer
1.6.2.2 Data Mart Layer
1.6.2.3 Operational Data Store
1.6.2.4 Exchange Zone
1.6.2.5 Virtual Reporting Layer
1.7 Transformations and Lookups
1.7.1 Transformations and Lookups in the Transactional Data Flow
1.7.1.1 Allowed Lookups
1.7.1.2 Remarks
1.7.2 Transformations and Lookups in the Master Data Flow
1.8 Allowed Source and Target for Transformations
1.9 Data flows between BWP, IWP and OPP
1.10 Real Time and Near Real Time Reporting
1.11 Master Data Handling
1.12 Planning
2 SCHEDULING AND PROCESS CHAINS

1 LAYERED SCALABLE ARCHITECTURE (LSA) 2.0

1.1 Quick Reference

Quick Reference LSA

Propagation Scenarios

1.2 Introduction

As LSA 2.0 is derived from the corresponding SAP BW Reference Architecture, we highly recommend becoming familiar with it. Very good presentations can be found here: LSA on SAP BW HANA 2011 + LSA on SAP BW HANA 2012.
LSA 2.0 is also closely related to the existing Solution Architecture Reference Guide, Developer Guidelines, Authorizations and Content Strategy, which can all be accessed via the BI Guidelines page in the corporate portal. It is mandatory to become familiar with all of these guidelines in parallel.
The guidelines on Naming Conventions, ABAP Programming Guidelines, Data Replication and Information Lifecycle Management are especially relevant.

1.3 Adjustments and Enhancements of LSA 1.0

- Streamlined terminology (Application Data Layer, Enterprise DataWarehouse Layer, etc.)
- Mandatory Corporate Memory (CM) Layer for all data flows (replaces the decision tree of LSA 1.0)
- New Cross Business Transformation Layer (xBTL) in the Enterprise DataWarehouse Layer (EDL)
- Adjusted naming conventions (e.g. Quality and Harmonization Layer)
- Twin Propagator approach in the EDL Propagation Layer
- Multiple System Support (adaptation to and preparation for the NewBI platform strategy and roadmap)
- Detailed matrix of the allowed data flows, lookup capabilities and DataStore Object (DSO) types

1.4 Concept

LSA describes the design of service-level oriented, scalable, best-practice SAP BW architectures founded on accepted Enterprise DataWarehouse principles as introduced in Bill Inmon's Corporate Information Factory (CIF), which has been adopted by SAP BW (see the SAP BW Reference Architecture presentations in the Introduction above; both pictures below are taken from these presentations).
This is a SEVEN-layered architecture. Every layer of this pattern has been designed to serve a particular purpose.

Below is an example of an architecture used for a large, enterprise-level BW implementation. This architecture helps boost the overall performance of the system and keeps the implementation flexible enough to adapt to future enhancements.

1.5 Layers at SAP

LSA is built during the NewBI transformation projects to ensure the right governance and implementation technology.

The NewBI systems/platform currently consists of BWP, HCA and HCP. BWP (SAP BW) is clustered into the Enterprise DataWarehouse Layer (EDL) and the Application Data Layer (ADL) with common Master Data Objects.
The extraction of data can be done with minimum effect on the Information Content Area (Application Data Layer and Solution Layer). The purpose of having an LSA is to create:
- A logical single point of entry - single point of the fact - single point of distribution
- Easier access to data
- Information assets easily governed by business users and IT, making IT a strategic asset that drives strategy and execution
- Everybody querying information based on the same set of data
- Getting data out of the source systems
- Storing all data for later use and reuse
- Cleaning, harmonizing and integrating data to be used for creating information
The LSA at SAP is separated into different layers:

1.6 Information and Data Content Areas

The layers are separated into Information and Data Content Areas (ICA and DCA).
Data and Information Content Areas are logical groupings; they encompass ownership and development authorization and are the main building blocks of the BW environment at SAP.
Data Content Areas are linked to the DataSources and focus on data: the extraction, cleaning and harmonization of data.
Information Content Areas are linked to the top side of the BW system and focus on the transformation of data into information.

The Data Content Area is physically implemented by the Enterprise DataWarehouse Layer (EDL), the Information Content Area by the Application Data Layer (ADL) and the Solution Layer.
Layers of the Information Content Area:
- Virtualization Layer (M)
- Data Mart Layer (P)
- Business Transformation and Integration Layer (O)
- Operational Data Store (H)
- Exchange Zone (X)
Layers of the Data Content Area:
- Cross Business Transformation Layer (C)
- Propagation Layer (D)
- Quality and Harmonization Layer (Q)
- Corporate Memory (T)
- Acquisition Layer (A)
Attention: The authorization Data and Information Owner Profiles are related to the Information Content Area. The Data Content Area has a specific profile. No reporting and no SID enablement are allowed for this layer.
The Data and Information Content Areas provide the functional part of the scalability by splitting the data and information into groups of data flows that can be processed in parallel.

1.6.1 Data Content Area

The Data Content Area is centered on data:

Propagators are organized in Data Content Areas

- Getting data out of the source systems
- Storing all data for later use and reuse
- Cleaning, harmonizing and integrating the data to be used for creating information
- Special InfoObjects (/EDL/ namespace) without master data flags (attributes, text and hierarchy) are used (InfoFields)
- One DCA is linked to one Propagator (one or more DSOs) and identifies one or more DataSources that are used to populate one Propagator (more only if it cannot be avoided)
- If the EDL should also provide corporate business logic, an xBTL (Cross Business Transformation Layer) object can optionally be provided after the Propagation Layer, thus supplying Information Content Areas with data that have already been validated and carry corporate business logic.
- The DCA uses the naming convention prefix "/EDL/"
- Only InfoObjects in the /EDL/ namespace are allowed in DataStore Objects (! No exception !) - references in InfoObjects to non-/EDL/ InfoObjects are forbidden
- All DataStore Objects (DSOs) in the DCA should be created as Write-Optimized or Standard DSOs (if a Standard DSO is required it must not be HANA-optimized). See also the specific layer descriptions below for detailed guidance on this. As the DataStore Objects are not intended to be used for reporting, the flag "SID Generation" in the DSO settings has to be set to "Never Create SIDs".
- Partitioning: if more than 100 Mio. records are expected in a DataStore Object, partitioning is mandatory. This could be implemented as a Semantic Partitioned Object (SPO).
- Technical fields: Acquisition Layer, Corporate Memory and Propagation Layer contain all fields of the DataSource plus technical fields that have to be enriched (like source system, request ID and load timestamp; please see the Acquisition Layer description below for a detailed explanation)
The next table describes in detail which types of DataStore Objects are allowed to be used in each layer:

Layer | Layer Implementation | DataStoreObject Implementation | Default DataStoreObject Type | Exceptional DataStoreObject Type
Acquisition Layer | Mandatory | Optional | Standard | No exception
Corporate Memory | Mandatory (Transactional Data: always / Master Data: see details in the Layer Description) | Mandatory | Write-Optimized (No semantical key!) | No exception
Quality and Harmonization Layer | Optional | Mandatory | Write-Optimized or Standard | No exception
Propagation Layer | Mandatory | Mandatory | Write-Optimized (No semantical key!) | No exception
xBTL | Optional | Mandatory | Standard | No exception


1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)

Single System Support means that Data and Information Content Areas reside in the same system.
Here the minimum required data flow is described *without* usage of the Cross Business Transformation Layer:

Here the minimum required data flow is described *with* usage of the Cross Business Transformation Layer:


Here we can see that certain rules have to be applied:

- Persistency objects (DSOs) have to be shielded by Inbound and Outbound InfoSources
- Acquisition, Corporate Memory and Propagation Layer are mandatory (Corporate Memory with mandatory persistency)
- Quality/Harmonization and xBTL Layer are optional (if implemented, with mandatory persistency)
- Propagation Layer implementation as a Twin Propagator (Delta Propagator as Write-Optimized DSO with the latest requests, Full Propagator as Write-Optimized DSO filled with data only when requested by an application)
1.6.1.2 Twin Propagator Data Flow

The Delta Propagator is usually used to deliver the periodic deltas to the connected application(s).
The Full Propagator is used for the initialization of a new application or the re-initialization of an existing application, on explicit request, from the Corporate Memory.


The next picture describes the data flow for both cases (periodic delta and historical data reload on request):

Minimum required EDL data flow in Data Content Area (Multiple System Support)

Multiple System Support means that Data and Information Content Areas are deployed in separate SAP BW instances. The Export DataSource and the shielding InfoSource in the target system can also be considered part of the Data Content Area (EDL).
Here the minimum required data flow is described *without* usage of the Cross Business Transformation Layer:

Here the minimum required data flow is described *with* usage of the Cross Business Transformation Layer:

Here we can see that it basically follows the same rules as in the Single System Support scenario.
1.6.1.3 Minimum required EDL data flow for master data in Data Content Area

The data flow for master data differs partially from the transactional data flow. The following picture describes the minimum required data flow:

Here we can see that certain rules have to be applied:

- Acquisition and Propagation Layer are mandatory
- Quality/Harmonization Layer is optional (if implemented, with mandatory persistency)
- Corporate Memory is optional (if implemented, with mandatory persistency and InfoSource shielding)
- Propagation Layer implementation as a Master Data Propagator (InfoSource shielding can be bypassed under given circumstances, see the Propagation Layer description below)
1.6.1.4 Data Content Area Layer Descriptions

In the Data Content Area, the data model focuses on getting data from the DataSource to the Propagators (or xBTL). During this flow the data should be cleaned and harmonized.
In the DCAs the following layers exist:
- Acquisition Layer (Prefix A)
  The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to do the mapping between fields from the DataSource to InfoObjects in an Outbound InfoSource, plus adding technical information. This layer is mandatory.
- Corporate Memory (Prefix T)
  Stores all requests from all DataSources - the "life insurance" of the EDW. This layer is mandatory.
- Quality and Harmonization Layer (Prefix Q)
  Alignment of data to common standards and corporate rules. This layer is optional.
- Propagation Layer (Prefix D)
  Supplies digestible and unflavored data to create information applications in the Information Content Area. This layer is mandatory.
- Cross Business Transformation Layer (Prefix C)
  Supplies digestible and unflavored data with central corporate business logic to create information applications in the Information Content Area. This layer is optional.
1.6.1.4.1 Acquisition Layer

The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to do the mapping between fields from the DataSource to InfoObjects in an Outbound InfoSource, plus adding technical information.
- It serves as a fast inbound layer accepting data 1:1 for temporary storage
- All fields of the DataSource must be mapped to a corresponding (naked) InfoObject in the Acquisition Layer Outbound InfoSource
- No transformation of data in the Acquisition Layer is allowed - only the routines needed to add the technical fields and the mapping between field names and InfoObjects. If the DataSource is of questionable quality, use fields of type CHAR in the DataSource and make the quality check in the Quality and Harmonization Layer:
  o If you are expecting questionable dates in your source data, the check of the dates should be done in the Quality and Harmonization Layer. Make sure that your InfoObject does not reference 0DATE, as this would cause a dump.
  o Upper/lower case: if your mapping in the Acquisition Layer can have both upper- and lower-case characters, flag the InfoObjects as lower-case enabled - no SIDs are generated!
- The "no transformation" of data rule also means that you may not flag any keys in the definition of the Outbound InfoSource. This is the "No keys in the InfoSource" rule!
- The main rule is to have only the Outbound InfoSource placed in the Acquisition Layer.


Special Case: Alternatively, the Outbound InfoSource can be replaced by a Standard DSO shielded by an Outbound InfoSource. A DSO should only be used if the extractor delivers only full data loads. If a corresponding full load delivers > 1 Mio. records, the usage of a Standard DSO shielded by an Outbound InfoSource is mandatory (please implement consistent ILM in your process chains, see the Information Lifecycle Management Guidelines). In this case the Standard DSO is leveraged to calculate the deltas. Other use cases, as well as the utilization of a Write-Optimized DSO, have to be aligned with Architecture. The transformation into the DSO should still only be 1:1 with the addition of technical information.
As an Outbound InfoSource has to be placed in the data flow after the DataSource (or DSO), the created InfoSource has to add the following technical information in the outbound transformation of the DataSource (or DSO):
- Original Source (technical name of DataSource and Source System + Source System ID) -> provides unique determination of the DataSource and Source System
  o /EDL/CS01DATS - Origin: DataSource
  o /EDL/CS02SSYS - Origin: Source System
  o /EDL/CS03SSID - Origin: Source System ID
- Original DTP Request timestamp, date and time (entering the BW system) -> provides technical uniqueness and explicit identification of source data
  o /EDL/CS04LDAT - Origin: DTP Request Load Date
  o /EDL/CS05LTIM - Origin: DTP Request Load Time
  o /EDL/CS08TMSP - Origin: DTP Request Load Timestamp (short)
- Original PSA Request (entering the BW system) -> provides technical uniqueness and explicit identification of source data
  o /EDL/CS06LREQ - Origin: PSA/ODS Source Request (GUID)
  o /EDL/CS07LRNO - Origin: PSA/ODS Source Request (SID)
- Original DTP Request, Data Package and Record number (entering the BW system) -> provides technical uniqueness and explicit identification of source data
  o /EDL/CS09DPID - Origin: DTP Request Data Package Number
  o /EDL/CS10RECN - Origin: DTP Request Data Package Record Number
  o /EDL/CS11DTPG - Origin: DTP Request (GUID)
  o /EDL/CS12DTPS - Origin: DTP Request (SID)
Adding this information HAS TO BE DONE in the outbound transformation of the DataSource (or DSO); a hedged routine sketch follows at the end of this subsection.
Routing this information to the Acquisition, Q&H, Propagation and Corporate Memory Layer is MANDATORY.
Routing this information to the xBTL and ADL IS NOT NEEDED.
Main purpose:
- Identification of requests that need to be reloaded from the Corporate Memory into the Propagator
- Supporting any other kind of reloading activities from the EDL
Adding the Data Package and Record Number is crucial if the upper data flow contains an SPO in the Corporate Memory Layer. By constructing a semantical key - which is MANDATORY in this case - from the request, data package and record number, undesired aggregations in transformations are avoided. It is recommended to use the PSA Request (the DTP Request is also possible; the advantage of using the PSA Request is an end-to-end identification).
- Special Case: "DataStore in Acquisition Layer to calculate deltas" - extractors only provide full data loads:
  o Here, map in the transformation from the DataSource to the Acquisition Layer DataStoreObject only the "Original Source" fields and derive the values with the help of the Data Acquisition Layer Routine Library methods. The other fields need to be calculated in the outbound transformation of the Acquisition Layer DataStore Object.
  o "Original Source" fields need to have a 1:1 mapping.
  o "PSA Request" fields: in this case they contain the "DataStoreObject Activation Request" GUID/SID.
  o Only in this case are delta requests calculated in the Acquisition Layer DataStoreObject.

Please note: no (other) logic is allowed between the Acquisition Layer Outbound InfoSource and the Corporate Memory and back.
See the Tool Box for detailed instructions on obtaining the mandatory technical information (Data Acquisition Layer Routine Library). Please read the guideline carefully; do not just stick to existing examples in the system. Additional information on the special case "DataStore in Acquisition Layer to calculate deltas" in combination with HANA InMemory-Optimized DataStoreObjects can be found here: Change Log Compression.
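The following is a minimal sketch of how such an end routine in the outbound transformation might look. It assumes BW 7.3-style generated routine classes; the helper class /edl/cl_acq_lib and its method get_origin( ) are hypothetical stand-ins for the actual Data Acquisition Layer Routine Library documented in the Tool Box, and the generated type name _ty_s_TG_1 may differ in your transformation.

* Hedged end routine sketch for the outbound transformation of the
* DataSource. ASSUMPTION: /edl/cl_acq_lib=>get_origin( ) is a
* hypothetical wrapper; use the real Routine Library methods instead.
DATA: lv_recno TYPE i.

FIELD-SYMBOLS: <ls_result> TYPE _ty_s_tg_1.  " generated target structure

LOOP AT result_package ASSIGNING <ls_result>.
  " Origin: DataSource, source system and source system ID
  /edl/cl_acq_lib=>get_origin(
    EXPORTING i_request    = request            " DTP request of this load
    IMPORTING e_datasource = <ls_result>-/edl/cs01dats
              e_srcsystem  = <ls_result>-/edl/cs02ssys
              e_srcsysid   = <ls_result>-/edl/cs03ssid ).
  " DTP request load date and time (entering the BW system)
  <ls_result>-/edl/cs04ldat = sy-datum.
  <ls_result>-/edl/cs05ltim = sy-uzeit.
  " Data package and record number form the unique technical key
  <ls_result>-/edl/cs09dpid = datapakid.
  ADD 1 TO lv_recno.
  <ls_result>-/edl/cs10recn = lv_recno.
ENDLOOP.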
1.6.1.4.2 Corporate Memory

Corporate Memory requires the DataSource(s) to be mapped to a unique DSO in the Corporate Memory Layer, and all requests must be stored in this DSO (only Write-Optimized DSOs are allowed).
No transformation of data is allowed when loading data into the Corporate Memory Layer from the Acquisition Layer Outbound InfoSource. To create the data flow into the Corporate Memory Layer, as well as the flow back into the Acquisition Layer Outbound InfoSource, a Corporate Memory Inbound InfoSource as well as a Corporate Memory Outbound InfoSource have to be used (shielding of the DSO).
Also for master data flows, usage of the Corporate Memory is mandatory if source data can be deleted (e.g. FlatFile DataSources).
1.6.1.4.3 Quality and Harmonization Layer

In this layer data is checked for quality and harmonized according to corporate standards.
This optional layer has to be implemented by the adoption of a Standard DSO (optional: shielding by Inbound and Outbound InfoSources). All deviations (e.g. usage of a Write-Optimized DataStore Object) have to be aligned with Architecture.
The sources are the Data Acquisition Layer and the Corporate Memory Layer.
Flavours of the Quality and Harmonization Layer (a hedged routine sketch follows this list):
- Technical harmonization (format, length, etc.)
- Simple format checks (text fields, dates, etc.)
- Upper case
- Master data referential integrity
- Master data integration into one single model (compounding, concatenation, etc.)
- Best record
- Common transformations, adding non-application-specific information, etc.
- Amounts in different currencies
- Quantities in different units
- Common master data derivation
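As an illustration of the "simple format check" flavour, below is a minimal field routine sketch for the date check recommended in the Acquisition Layer description above. DATE_CHECK_PLAUSIBILITY is a standard function module; the source field /edl/zdocdate (a CHAR-8 field loaded 1:1 through the Acquisition Layer) and the default value are hypothetical illustrations.

* Field routine sketch for a date plausibility check in the Q&H Layer.
* ASSUMPTION: /edl/zdocdate is a hypothetical CHAR-8 source field.
DATA: lv_date TYPE sy-datum.

lv_date = source_fields-/edl/zdocdate.

CALL FUNCTION 'DATE_CHECK_PLAUSIBILITY'
  EXPORTING
    date                      = lv_date
  EXCEPTIONS
    plausibility_check_failed = 1
    OTHERS                    = 2.

IF sy-subrc = 0.
  result = lv_date.       " validated date is passed on
ELSE.
  result = '19000101'.    " default value; or raise an error per the error handling guideline
ENDIF.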


1.6.1.4.3.1 Special Case: Multiple time-dependent sources into time-dependent DSOs

This scenario is mainly used in HR content.
NOTE: When multiple DataSources are to be connected to the same DSO, time-dependent staging is required if one OR more of the DataSources is time-dependent.
This scenario was specifically developed for HR but can also be used in other cases with the following setup (implementation requires usage of Standard DSOs):
- DSO #1: time-dependent data relating to a key
- DSO #2: time-dependent data relating to a key
- DSO #3: NOT time-dependent data relating to a key
and you would like to merge these into ONE time-dependent DSO.
A solution (as this is a general approach, you might be able to find a more appropriate solution in specific cases) is as follows:
- The key in the source DSOs corresponds to that of the target DSO
- The InfoObjects 0DATETO and 0DATEFROM are used as the intervals; 0DATETO is part of the key
- All data fields of the source DSOs will be used in the target DSO
- Only full loads are permitted from the source DSOs to the target DSO
- All rows of a key member must be in the same data package (use semantic grouping in the DTPs)
- Create an InfoSource that matches the result DSO 1:1 but without any key. Map all DSOs into this InfoSource
- You cannot map from one InfoObject to another. All mappings must go 1:1. Only fields present in the source DSOs are mapped
- If you have a non-time-dependent DSO you must map the fields for DATETO and DATEFROM to constant values of 19000101 and 99991231
In the transformation between the InfoSource and the target DSO you must place the following code:

CALL METHOD /edl/cl_core_time_merge=>do_s_time_merge
  EXPORTING
    ir_request        = p_r_request
  CHANGING
    ct_source_package = SOURCE_PACKAGE.

This code will look up the current content for that specific key in the target DSO and merge the content of the package data with the existing data from the target DSO, removing keys based on 0DATETO (RECORDMODE = D) that are not recreated.
When loading, you must load one source DSO to the target DSO and activate the data. Then load the next DSO, activate the data, and continue until no more data is to be updated in the target DSO.
1.6.1.4.4 Propagation Layer

Supplies digestible, unflavored data, etc. as one possible source for Information Content Areas (the other possible source is the optional xBTL). In this layer, data from different DataSources with identical information is integrated into one Propagator.


- Digestible
  o Ready to consume
- Unflavored
  o No application-specific transformations
  o Data should give the possibility to compare and verify with the source system
- Integrated
  o Common semantics
  o Common values
  o Clean
  o All sets of data should be disjunct - no intersections between Data Content Areas should exist
- Harmonized data
  o Smoothed data
  o Technically unified values (e.g. compounding)
- Trimmed to fit DataSources and data persistencies to reduce data complexity for applications by
  o Extending data by looking up information which applications frequently ask for
  o Merging different but highly related DataSources and storing the data in a single Propagator, if applications always or frequently request them together
  o Collecting data from the same (or similar) DataSource but from different source systems into fewer or a single source-system-independent Propagator
For using the Propagation Layer consider the following:
Transactional Data Flow:
- Consists of Write-Optimized DSOs shielded by Inbound and Outbound InfoSources, which gives a unified data transfer behavior
- The Twin Propagator approach is mandatory for transactional data. The Delta Propagator contains only requests which have not yet been updated to all connected applications; the Full Propagator is filled only upon request when an application needs a reload from Corporate Memory.
- Data must be stored at the level of granularity given by the DataSource(s). No information originally delivered by the DataSource must be lost on its way to the Propagators (this implies the necessity of Write-Optimized DSOs without any semantical key fields).
- Data is integrated; Company Code "SAP AG" in Propagator #1 and Company Code "SAP AG" in Propagator #2 is in both cases identified as 0001
Master Data Flow:
- Consists of Standard DSOs, which gives a unified data transfer behavior
- For master data entities that are to be considered a relevant business object (e.g. material master, employee master, profit center) or for high-volume master data (> 1 Mio. records), shielding by Inbound and Outbound InfoSources is mandatory
- For master data entities that are to be considered text, organizational or control master data (e.g. company code, industry code) with a data volume between 1000 and 1 Mio. records, shielding by Inbound and Outbound InfoSources is optional
- For master data entities < 1000 records, shielding by Inbound and Outbound InfoSources is not reasonable
- Data must be stored at the level of granularity given by the DataSource(s)


1.6.1.4.5 Cross Business Transformation Layer

This layer can be applied if central business logic should be provided to areas of the Information Content. This is an optional layer that has to be realized by a Standard DSO shielded by Inbound and Outbound InfoSources. In the data flow it follows the Propagation Layer and becomes the interface to the Business Transformation/Integration Layer of the Information Content Area consumers. With LSA 1.0 it was not allowed to apply business logic in the EDL; with the xBTL it is possible. It is part of the content strategy to mainly deploy corporate standards, as this is a prerequisite for consistent reporting across all solutions on corporate KPIs.
The implementation of an xBTL for purely technical reasons is not allowed.

1.6.2 Information Content Area

The following layers exist in the Information Content Area:
- Business Transformation and Integration Layer (Prefix O)
  Data from multiple Data Content Areas is combined to create information. No reporting is done directly here.
- Data Mart Layer (Prefix P)
  Reporting-specific objects making the information from the Business Transformation/Integration Layer available for reporting. No reporting is done directly here.
- Operational Data Store (Prefix H)
  Operational data store, also including Real Time and Near Real Time objects. No reporting is done directly here.
- Exchange Zone (Prefix X)
  Reserved for data export to external systems (not to be used for data export to the operational and corporate SAP BW systems that are already part of the SAP BW platform).
- Virtual Reporting Layer (Prefix M)
  Objects from the Data Mart Layer and Operational Data Store are combined and made available for reporting. It is allowed to combine information from multiple Information Content Areas of the ADL (Application Data Layer) in the Information Content Areas of the Virtual Reporting Layer (Solution Layer).
Transformation of data into information is done in an Information Content Area. You may access all Propagators (xBTL) of all Data Content Areas. No data flow between Information Content Areas is allowed.
When creating an Information Content Area you should reuse any existing Propagators (or xBTL):
- If there is a need for more fields in the Propagator, the Data Content Area has to be altered.
- If the Data Mart DataSource from the leading system is already used and no staging is done, you must do the staging, and changes must be made so that the Information Content Area can consume the Data Content Area.
- If the data is not already extracted, you should create a new Data Content Area for it.
- Always use the "Cube Qualifier" if the ADL object gets included into a MultiProvider/CompositeProvider.


Partitioning:
Implementation as a Semantic Partitioned Object (SPO) or an own partitioning strategy is mandatory for a data volume > 100 Mio. records (for ILM purposes, query performance, etc.)
1.6.2.1 Business Transformation and Integration Layer

This is where data is turned into information.

Data from multiple Data Content Areas is combined. Data gets transformed into information following the guidelines given by the business. Whether or not to use the Business Integration Layer (optional) is decided by the reporting needs and the complexity of the transformation of data from the Propagation Layer or xBTL. DSOs of the Business Integration Layer have to be created as Standard DSOs with shielding Inbound and Outbound InfoSources. As the DataStore Objects are not intended to be used for reporting, the flag "SID Generation" in the DSO settings has to be set to "Never Create SIDs". Between the Propagation Layer (or xBTL) and the Business Integration Layer, a Business Transformation Layer is mandatory; it is virtual (an InfoSource). The Business Transformation Layer is always mandatory and only virtual (InfoSource), even if no Business Integration Layer is used. So the data supplier of the Business Integration Layer is always the Propagation Layer or xBTL (not the Corporate Memory Layer).
1.6.2.2 Data Mart Layer

The Data Mart (along with the objects in the Virtual Reporting Layer) is built with an eye on the reporting needs. All reporting requirements must be met in the modeling of the Data Marts.
In modeling the Data Mart you must consider:
- KPIs to be reported
- Granularity of the information
Performance also has to be considered. Please consider the pros and cons of using InfoCubes or DSOs (see this article here for InMemory HANA-Optimized InfoCubes and DataStore Objects). As only Standard DataStore Objects are allowed and reporting usage is possible, consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation" (in case of using the setting "During Activation", please consider the corresponding guidelines on DataStore Object Batch Settings). InfoSets are technically possible but not recommended; CompositeProvider technology should be used instead.
The Data Mart Layer can be connected in the data flow either to an InfoSource of the Business Transformation Layer or to a shielding Outbound InfoSource of the persistent Business Integration Layer.
InfoCubes are not allowed as a source in the data flow.
No query creation is allowed on this layer. Persistency objects in the Data Mart Layer can also be shielded by InfoSources.
Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.
1.6.2.3 Operational Data Store

Contains Real Time, Near Real Time and operational data. For operational data, Local Providers in a SAP BW Workspace and Standard/Direct Update DSOs are possible. For Standard DSOs consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation"; for Direct Update DSOs use "During Reporting".
For Real Time and Near Real Time, VirtualProviders and HybridProviders are possible.
No query creation is allowed on this layer; InfoSource shielding is possible.
Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.
1.6.2.4 Exchange Zone

Contains data for export to external systems (systems that are not part of the NewBI system landscape).
This layer may only include Open Hub Destinations; no InfoSource shielding is relevant here.
No reporting is allowed on this layer, and no inclusion into a MultiProvider or CompositeProvider is allowed.
Data supply is only allowed from the Business Transformation/Integration Layer or Data Mart Layer.
1.6.2.5 Virtual Reporting Layer

Reporting (query creation) is only allowed on MultiProviders or CompositeProviders. MultiProviders and CompositeProviders are the only objects that are allowed in the Virtual Reporting Layer. The Virtual Reporting Layer is also considered the Solution Layer. The MultiProviders and CompositeProviders with their dependent reporting elements can be considered solutions.

1.7 Transformations and Lookups

- General rules for lookups:
  o No lookup allowed to objects of the Acquisition and Quality & Harmonization Layer
  o No lookup allowed between BIG Areas of the ADL
  o No lookup allowed to Master Data (InfoObjects) of the ADL
  o Lookups to the xBTL should be preferred over lookups to the Business Integration/Transformation Layer
- Lookup Implementation Guideline (see the sketch after this list):
  o Lookup operations have to be wrapped into ABAP OO methods
  o It is mandatory to document any lookup operation in the Technical Specification, COM Document and all other mandatory documentation
  o Within the method, own logic can be implemented or methods from the Toolbox can be re-used
- No "cross" loads between content areas in the EDL and ADL are allowed. Data loads can only happen "bottom-up" in the same content area of the EDL and ADL (the exception is, of course, a transformation from the Propagation Layer/xBTL in the EDL to the "O" Layer in the ADL).
- Do error handling (see the related guideline)
- Document and explain transformations that are not 1:1 or simple lookups
- No code snippets above 20 lines of code; use ABAP OO methods
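A minimal sketch of such a wrapper method is shown below. The class, method, field and table names are hypothetical illustrations; a real implementation would read from the active table of the relevant Full Propagator or Master Data Propagator DSO and follow the error handling guideline.

CLASS zcl_edl_lookup_example DEFINITION.
  " Hypothetical wrapper class for a Propagator lookup (names illustrative).
  PUBLIC SECTION.
    TYPES: ty_customer TYPE c LENGTH 10,
           ty_compcode TYPE c LENGTH 4.
    CLASS-METHODS get_company_code
      IMPORTING iv_customer        TYPE ty_customer
      RETURNING VALUE(rv_compcode) TYPE ty_compcode.
ENDCLASS.

CLASS zcl_edl_lookup_example IMPLEMENTATION.
  METHOD get_company_code.
    " ASSUMPTION: /edl/ad_sales00 stands for the active table of a
    " hypothetical Full Propagator DSO; replace with the real table name.
    SELECT SINGLE /edl/compc FROM /edl/ad_sales00
      INTO rv_compcode
      WHERE /edl/custno = iv_customer.
    IF sy-subrc <> 0.
      CLEAR rv_compcode.  " handle misses per the error handling guideline
    ENDIF.
  ENDMETHOD.
ENDCLASS.

In a transformation routine the call then stays well below the 20-line limit, e.g. result = zcl_edl_lookup_example=>get_company_code( source_fields-/edl/custno ).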

1.7.1 Transformations and Lookups in the Transactional Data Flow

1.7.1.1 Allowed Lookups

1. Lookup on Master Data (1)
   Source: Master Data Propagator (no Twin Propagator)
   - within Transformations into the Business Integration/Transformation Layer
   - within Transformations into the xBTL
   - within Transformations into the Propagation Layer

2. Lookup on Transactional Data (1)
   Source: Full Propagator (of the Twin Propagator)
   - within Transformations into the Business Integration/Transformation Layer
   - within Transformations into the xBTL
   - within Transformations into the Propagation Layer
   Source: xBTL (2)
   - within Transformations into the Business Integration/Transformation Layer
1.7.1.2 Remarks

(1) = Lookups across EDL areas are allowed
(2) = If not implemented, the Propagation Layer can be used instead

The following picture displays an example data flow with the possible derivations:

1.7.2 Transformations and Lookups in the Master Data Flow

In general, no lookups are allowed in the master data flow (materialized as an access to an external DSO in the transformation). Consideration of other master data has to be implemented within the scope of a master data integration and harmonization within the Quality & Harmonization Layer. Exceptions have to be aligned with Architecture (reason: avoidance of loading dependencies that cannot be modelled within the data flow of the Sub Process Chain).


1.8 Allowed Source and Target for Transformations

The following matrix shows the allowed data flows in the LSA from a Transformation perspective. This is also reflected by the related authorization profiles for developing content (profile BI:EDL:TEAM + Data Owner and Information Owner profiles).
Data flows that already exist and have been created following the LSA 1.0 guidelines can still be changed, but a full adaptation to LSA 2.0 should nevertheless be carried out.
The allowed sources and targets for Transformations and DTPs can be derived from this matrix.

Transactional Data Matrix

Master Data Matrix

The XLS version can be found here: Allowed Data Flow 1.0

1.9 Data flows between BWP, IWP and OPP

The overall BW architecture concept sees BWP as the central staging system for IWP and OPP:
1. IWP and OPP get their data only from BWP. A direct connection from other source systems (like ISP or ICP) to IWP and OPP is not allowed. All data loaded into IWP and OPP must be staged through BWP's EDL.
2. Exceptions can be made for real-time/virtual accesses to source system data, but those scenarios have to be discussed with the Architecture Team in advance. The general recommendation is to build real-time scenarios in BWP.
3. Data flows from IWP or OPP back to BWP are strictly forbidden. Applications that provide data for other applications (through the EDL) have to be built in BWP. Exceptions are one-time loads of historical data if an application is migrated from IWP/OPP to BWP.
Rules 1 and 2 apply only to new applications in IWP and OPP. Existing applications and contents may continue to load from source systems other than BWP as long as no migration is planned.
Temporary exceptions to rules 1 and 2 have been approved by Jürgen Habermeier for the IWP Transformation Program (IWP sunset preparations). The following IWP DataSources may be connected to BWP; data loads have to follow BWP's architecture guidelines (usage of the EDL):
IWP DataSource | Description | Limited until | Solution Architect
8DPAPAEA | Headcount (Adjustments) | MD7.2014 | Nicolas Cottin
8DPAPAEN | Headcount (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAE1 | Headcount (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAE2 | Headcount (Open Months) | MD7.2014 | Nicolas Cottin
8DPAPACA | Personnel Actions (Adjustments) | MD7.2014 | Nicolas Cottin
8DPAPACN | Personnel Actions (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAC1 | Personnel Actions (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAC2 | Personnel Actions (Open Months) | MD7.2014 | Nicolas Cottin
8DPAPAVN | Positions and Vacancies (Closed Months Interim) | MD7.2014 | Nicolas Cottin
8DPAPAV1 | Positions and Vacancies (Closed Months) | MD7.2014 | Nicolas Cottin
8DPAPAV2 | Positions and Vacancies (Open Months) | MD7.2014 | Nicolas Cottin
8PPCA_P01 | PCA: Plan Transaction Data | MD8.2014 | Andreas Weisenberger
8PROLFCS01 | Rolling Forecast@SAP: CoS | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFFI01 | Rolling Forecast@SAP: Finance | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFHR01 | Rolling Forecast@SAP: HR | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFIV01 | Rolling Forecast@SAP: Investment | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFMK01 | Rolling Forecast@SAP: Markets | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFMA01 | Rolling Forecast@SAP: Material | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFSW01 | Rolling Forecast@SAP: Sales | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8PROLFSW02 | Rolling Forecast@SAP: Sales (extended) | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFCS01 | PROLFCS01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFFI01 | PROLFFI01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFHR01 | PROLFHR01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFIV01 | PROLFIV01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMA01 | PROLFMA01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMK01 | PROLFMK01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW01 | PROLFSW01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW02 | PROLFSW02 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_ATTR | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_TEXT | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8DYBPRA01 | YCRM_BP Attributes DL | MD10.2014 | Iulia Maurer, Chandramouli Reddy
8PMKISMA01 | Planning data Budget | 2015 | tbd
8PMKISKPIP | Planning data Forecast | 2015 | tbd
80CRM_MKTELMH | CRM Marketing Element | 2015 | tbd

Any further exceptions have to be aligned with Jürgen Habermeier, Bert Glaser and the BI Platform Team.


1.10 Real Time and Near Real Time Reporting

Real Time (RT) and Near Real Time (NRT) reporting implementations strongly depend on the available DataSource(s) and system environment. Therefore no explicit guideline can be given here. Each scenario has to be aligned with Architecture. The following matrix gives an overview of the available technologies and BW* support.
Technology | Implementation Examples in BWP | Supported by BW* Landscape
Real Time Data Acquisition (RDA) | Under Development: PCA Realtime; Planned: CRM Opportunities (ADRM) | Can be implemented in both cases (Single System Support or Multiple System Support), as open requests are bound to one system.
HybridProvider with Direct Access | Under Development: COPA Realtime | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible.
VirtualProvider based on DTP | Under Development: COPA Realtime | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible.
VirtualProvider based on Function Module | Outdated: CRM Opportunities (ADRM) | Should be modeled in the Reporting System (Multiple System Support). Single System Support for Finance possible. VirtualProvider with access to HANA SbS Systems is not recommended.
2nd Database Schema (available as of SAP BW on HANA 7.3 SP8) | - | Currently not supported, evaluation ongoing.
Other | - | Alignment with Architecture absolutely mandatory.

1.11 Master Data Handling

1. There are special Information Content Areas for Corporate Master Data (InfoArea under root node 'MD'). You are not allowed to create your own local, solution-specific Master Data Objects in the ICA if a Master Data Object is available in this area. If none exists, it has to be evaluated whether a Corporate Master Data Object should be implemented.
As all InfoObjects have cross usage, the master data data flow cannot be put into the same InfoArea hierarchy as the transactional data flow. The MD InfoArea for Master Data has to be used. Both the InfoObject as DataProvider and the corresponding data flow must be placed below this node AND not within the transactional InfoArea (BIG areas for the Application Data and Solution Layer) hierarchy.
2. The Master Data Propagators are used to load the InfoObjects in the special Information Content Areas for Corporate Master Data (InfoArea under root node 'MD'), see 1. above. The Master Data Propagators, like any other Content Area, are unique.


1.12 Planning

1. Planning applications are created in the SAP BW systems and data is created in the Architected Data Mart Layer. The general rule for data flows also applies here, e.g. data created in the planning applications cannot flow down into the Business Transformation/Integration Layer. There is no separate Planning Layer. All planning solutions (DSOs and InfoCubes) have to be developed in the O and P Layer.
2. Planning data is corporate data and needs to follow the general flow of data; thus it must enter the Acquisition Layer and follow the rules for creating content (if not planned directly in SAP BW).
3. Corporate querying of planning data is done using the designed data flow in SAP BW. You may only use the Planning InfoCube (via the Virtual Layer) or an Aggregation Level for querying as part of the planning application. The Virtualization Layer can include data from several applications, which includes planning applications.
4. If there are providers in the Business Integration Layer (O) or Data Mart Layer (P) which hold data required for planning in the required granularity etc., this data can be used in planning models and be read with planning functionality.
5. If a planning solution requires data with another granularity or additional business logic which is currently not available, the team should realign with the responsible product/solution architect in order to check whether existing content can be enhanced or new content has to be created for this planning solution.
6. If a content requires planning figures from another content, these planning figures have to be provided through the EDL Layer as defined in this guideline. For this, an Open Hub Destination has to be used to create a table which can then be consumed in the EDL Acquisition Layer.
2 SCHEDULING AND PROCESS CHAINS

Details can be found in the Data Replication Guideline.
