Document Owner: Martin Gembalzyk
Owner's department: IT Predictive Insight & Analytics
Version: 1.1
Last Update: 09.05.2014
Governed by (Accountable):
Contents

1.2 Introduction
1.4 Concept
1.5 Layers at SAP
1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)
1.6.1.2 Twin Propagator Data Flow
1.6.1.3 Minimum required EDL data flow for master data in Data Content Area
1.6.1.4 Data Content Area Layer Descriptions
1.6.2.4 Exchange Zone
1.12 Planning
1.2 Introduction
As LSA 2.0 is derived from the corresponding SAP BW Reference Architecture, we highly recommend getting familiar with it. Very good presentations can be found here: LSA on SAP BW HANA 2011 + LSA on SAP BW HANA 2012.
LSA 2.0 is also closely related to the existing Solution Architecture Reference Guide, Developer Guidelines, Authorizations and Content Strategy, which can all be accessed via the BI Guidelines page in the corporate portal. It is mandatory to get familiar with all of these guidelines in parallel.
The guidelines on Naming Conventions, ABAP Programming Guidelines, Data Replication and Information Lifecycle Management are especially relevant.
1.4 Concept
LSA describes the design of service-level-oriented, scalable, best-practice SAP BW architectures founded on accepted Enterprise Data Warehouse principles as introduced in Bill Inmon's Corporate Information Factory (CIF), which has been adopted by SAP BW (see SAP BW Reference Architecture in the Introduction above; both pictures below are taken from these presentations).
This is a seven-layered architecture. Every layer of this pattern has been designed to serve a particular purpose.
Below is an example of an architecture used for a large BW implementation at enterprise level. This architecture helps boost the overall performance of the system and makes the implementation flexible enough to accommodate future enhancements.
1.5 Layers at SAP
LSA is built during the NewBI transformation projects to ensure proper governance and implementation technology.
The NewBI systems/platform currently consists of BWP, HCA and HCP. BWP (SAP BW) is clustered into the Enterprise Data Warehouse Layer (EDL) and Application Data Layer (ADL) with common Master Data Objects.
The extraction of data can be done with minimum effect on the Information Content Area (Application Data Layer and Solution Layer). The purpose of having an LSA is to create:
a logical single point of entry, single point of the fact, and single point of distribution
easier access to data
information assets easily governed by business users and IT, making IT a strategic asset that drives strategy and execution
queries by everybody based on the same set of data
a means of getting data out of the source systems
storage of all data for later use and reuse
clean, harmonized and integrated data to be used for creating information
The LSA at SAP is separated into different layers:
1.6 Information and Data Content Areas
The layers are separated into Information and Data Content Areas (ICA and DCA).
Data and Information Content Areas are logical groupings; they encompass ownership and development authorization, and are the main building blocks of the BW environment at SAP.
Data Content Areas are linked to the DataSources and focus on data: the extraction, cleaning and harmonization of data.
Information Content Areas are linked to the top side of the BW system and focus on the transformation of data into information.
The Data Content Area is physically implemented by the Enterprise Data Warehouse Layer (EDL), the Information Content Area by the Application Data Layer (ADL) and Solution Layer.
Layers of the Information Content Area:
Virtualization Layer (M)
Data Mart Layer (P)
Business Transformation and Integration Layer (O)
Operational Data Store (H)
Exchange Zone (X)
Layers of the Data Content Area:
Cross Business Transformation Layer (C)
Propagation Layer (D)
Quality and Harmonization Layer (Q)
Corporate Memory (T)
Acquisition Layer (A)
Attention: The authorization Data and Information Owner Profiles are related to the Information Content Area. The Data Content Area has a specific profile. No reporting and no SID enablement for this layer.
The Data and Information Content Areas provide the functional part of scalability by splitting the data and information into groups of data flows that can be processed in parallel.
Layer | Layer Implementation | DataStoreObject Implementation | Default DataStoreObject Type | Exceptional DataStoreObject Type
Acquisition Layer | Mandatory | Optional | Standard | No exception
Corporate Memory | Mandatory (Transactional Data: always / Master Data: see details in the Layer Description) | Mandatory | Write-Optimized (no semantic key!) | No exception
Quality and Harmonization Layer | Optional | Mandatory | Write-Optimized or Standard | No exception
Propagation Layer | Mandatory | Mandatory | Write-Optimized (no semantic key!) | No exception
xBTL | Optional | Mandatory | Standard | No exception
1.6.1.1 Minimum required EDL data flow for transactional data in Data Content Area (Single System Support)
Single System Support means that Data and Information Content Areas are residing in the same system.
Here the minimum required data flow is described *w/o* usage of the Cross Business
Transformation Layer:
Here the minimum required data flow is described *with* usage of the Cross Business
Transformation Layer:
1.6.1.2 Twin Propagator Data Flow
The Delta Propagator is usually used to deliver the periodic deltas to the connected application(s). The Full Propagator is used for the initialization of a new application, or the re-initialization of an existing application, on explicit request from the Corporate Memory.
The next picture describes the data flow for both cases (periodic delta and historical data reload on
request):
Minimum required EDL data flow in Data Content Area (Multiple System Support)
Multiple System Support means that Data and Information Content Areas are deployed in separate
SAP BW instances. The Export DataSource and the shielding InfoSource in the target system can also
be considered as part of the Data Content Area (EDL).
Here the minimum required data flow is described *w/o* usage of the Cross Business
Transformation Layer:
Here the minimum required data flow is described *with* usage of the Cross Business
Transformation Layer:
Here we can see that it basically follows the same rules as in the Single System Support scenario.
1.6.1.3 Minimum required EDL data flow for master data in Data Content Area
The data flow for master data differs partially from the transactional data flow. The following picture describes the minimum required data flow:
1.6.1.4 Data Content Area Layer Descriptions
In the Data Content Area, the data model focuses on getting data from the DataSource to the Propagators (or xBTL). During this flow the data should be cleaned and harmonized.
In the DCAs the following layers exist:
Acquisition Layer (Prefix A)
The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to do the mapping between fields from the DataSource to InfoObjects in an Outbound InfoSource, plus adding technical information. This layer is mandatory.
Corporate Memory (Prefix T)
Stores all requests from all DataSources for their entire lifetime. This layer is mandatory.
Quality and Harmonization Layer (Prefix Q)
Alignment of data to common standards and corporate rules. This layer is optional.
Propagation Layer (Prefix D)
Supplies digestible and unflavored data to create information applications in the Information Content Area. This layer is mandatory.
Cross Business Transformation Layer (Prefix C)
Supplies digestible and unflavored data with central corporate business logic to create information applications in the Information Content Area. This layer is optional.
1.6.1.4.1 Acquisition Layer
The inbound part of the Acquisition Layer corresponds to the PSA objects from the source system. The purpose of this layer is to do the mapping between fields from the DataSource to InfoObjects in an Outbound InfoSource, plus adding technical information.
It serves as a fast inbound layer accepting data 1:1 for temporary storage.
All fields of the DataSource must be mapped to a corresponding (naked) InfoObject in the Acquisition Layer Outbound InfoSource.
No transformation of data in the Acquisition Layer is allowed, only the routines needed to add the technical fields and the mapping between field names and InfoObjects. If the DataSource fields are of questionable quality, use fields of type CHAR in the DataSource and make the quality check in the Quality and Harmonization Layer:
o If you are expecting questionable dates in your source data, the check of the dates should be done in the Quality and Harmonization Layer. Make sure that your InfoObject is not referencing 0DATE, as this would cause a dump.
o Upper/lower case: if your mapping in the Acquisition Layer can receive both upper and lower case characters, flag the InfoObjects as lowercase (no SIDs are generated!).
The "no transformation of data" rule also means that you may not flag any keys in the definition of the Outbound InfoSource. This is the "no keys in the InfoSource" rule!
The main rule is to have only the Outbound InfoSource placed in the Acquisition Layer.
Special Case: Alternatively, the Outbound InfoSource can be replaced by a Standard DSO shielded by an Outbound InfoSource. A DSO should only be used if the extractor delivers only full data loads. If a corresponding full load delivers > 1 million records, the usage of a Standard DSO shielded by an Outbound InfoSource is mandatory (please implement consistent ILM in your process chains; see Information Lifecycle Management Guidelines). In this case the Standard DSO is leveraged to calculate the deltas. Other use cases, as well as the utilization of a Write-Optimized DSO, have to be aligned with Architecture. The transformation into the DSO should still only be 1:1, with the addition of technical information.
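The delta derivation described above relies on standard DSO activation behavior, not on custom code. Purely as an illustration of the idea, the following Python sketch compares a full load against the active table to find new or changed records; all names are hypothetical:

```python
# Hypothetical sketch: a Standard DSO derives deltas from full loads by
# comparing each incoming record against the active table (new or
# changed records end up in the change log; this sketch returns them).
def calculate_delta(active_table, full_load, key):
    delta, new_active = [], {}
    for rec in full_load:
        k = tuple(rec[f] for f in key)
        new_active[k] = rec
        if active_table.get(k) != rec:
            delta.append(rec)        # new or changed record
    # deletions (keys missing from the full load) are omitted for brevity
    return delta, new_active
```

In the real flow, BW performs this comparison during DSO activation; the sketch only shows why a Standard DSO can turn repeated full loads into delta requests.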
As an Outbound InfoSource has to be placed in the data flow after the DataSource (or DSO), the created InfoSource has to add the following technical information in the Outbound Transformation of the DataSource (or DSO):
Original Source: technical name of DataSource and Source System + Source System ID -> provides unique determination of the DataSource and Source System
o /EDL/CS01DATS -- Origin: DataSource
o /EDL/CS02SSYS -- Origin: Source System
o /EDL/CS03SSID -- Origin: Source System ID
Original DTP Request timestamp, date and time (entering the BW system) -> provides technical uniqueness and explicit identification of source data
o /EDL/CS04LDAT -- Origin: DTP Request Load Date
o /EDL/CS05LTIM -- Origin: DTP Request Load Time
o /EDL/CS08TMSP -- Origin: DTP Request Load Timestamp (short)
Original PSA Request (entering the BW system) -> provides technical uniqueness and explicit identification of source data
o /EDL/CS06LREQ -- Origin: PSA/ODS Source Request (GUID)
o /EDL/CS07LRNO -- Origin: PSA/ODS Source Request (SID)
Original DTP Request, Data Package and Record Number (entering the BW system) -> provides technical uniqueness and explicit identification of source data
o /EDL/CS09DPID -- Origin: DTP Request Data Package Number
o /EDL/CS10RECN -- Origin: DTP Request Data Package Record Number
o /EDL/CS11DTPG -- Origin: DTP Request (GUID)
o /EDL/CS12DTPS -- Origin: DTP Request (SID)
Adding this information HAS TO BE DONE in the outbound transformation of the DataSource (or DSO).
Routing this information to the Acquisition, Q&H, Propagation and Corporate Memory Layers is MANDATORY.
Routing this information to the xBTL and ADL IS NOT NEEDED.
Main purpose:
- Identification of requests that need to be reloaded from the Corporate Memory into the Propagator
- Supporting any other kind of reloading activities from the EDL
Adding the Data Package and Record Number is crucial if the upper data flow contains an SPO in the Corporate Memory Layer. By constructing a semantic key (which is MANDATORY in this case) from the request, data package and record number, undesired aggregations in transformations are avoided. It is recommended to use the PSA Request (the DTP Request is also possible; the advantage of using the PSA Request is end-to-end identification).
Special Case: "DataStore in Acquisition Layer to calculate deltas" (extractors only provide full data loads):
o In the transformation from the DataSource to the Acquisition Layer DataStoreObject, map only the "Original Source" fields and derive the values with the help of the Data Acquisition Layer Routine Library methods. The other fields need to be calculated in the outbound transformation of the Acquisition Layer DataStoreObject.
o "Original Source" fields need to have a 1:1 mapping.
o "PSA Request" fields: in this case they contain the "DataStoreObject Activation Request" GUID/SID.
o Only in this case are delta requests calculated in the Acquisition Layer DataStoreObject.
Please note: no (other) logic between the Acquisition Layer outbound InfoSource and the Corporate Memory and back.
See the Tool Box for detailed instructions on adding the mandatory technical information (Data Acquisition Layer Routine Library). Please read the guideline carefully; do not just stick to existing examples in the system. Additional information on the special case "DataStore in Acquisition Layer to calculate deltas" in combination with HANA In-Memory-Optimized DataStoreObjects can be found here: Change Log Compression.
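To illustrate the technical fields listed above, the following Python sketch mimics what the (ABAP-based) Data Acquisition Layer Routine Library provides. The /EDL/ field names are taken from this guideline; the function names and parameters are hypothetical illustration only:

```python
from datetime import datetime, timezone

# Hypothetical sketch: the real routines live in the ABAP-based Data
# Acquisition Layer Routine Library. Only the /EDL/ InfoObject names
# are from the guideline; everything else is illustrative.
def derive_technical_fields(datasource, source_system, source_system_id,
                            psa_request_guid, psa_request_sid,
                            dtp_request_guid, dtp_request_sid,
                            data_package, record_number, load_ts=None):
    """Return the technical fields added in the outbound transformation."""
    ts = load_ts or datetime.now(timezone.utc)
    return {
        "/EDL/CS01DATS": datasource,                    # Origin: DataSource
        "/EDL/CS02SSYS": source_system,                 # Origin: Source System
        "/EDL/CS03SSID": source_system_id,              # Origin: Source System ID
        "/EDL/CS04LDAT": ts.strftime("%Y%m%d"),         # DTP Request Load Date
        "/EDL/CS05LTIM": ts.strftime("%H%M%S"),         # DTP Request Load Time
        "/EDL/CS06LREQ": psa_request_guid,              # PSA/ODS Source Request (GUID)
        "/EDL/CS07LRNO": psa_request_sid,               # PSA/ODS Source Request (SID)
        "/EDL/CS08TMSP": ts.strftime("%Y%m%d%H%M%S"),   # Load Timestamp (short)
        "/EDL/CS09DPID": data_package,                  # Data Package Number
        "/EDL/CS10RECN": record_number,                 # Data Package Record Number
        "/EDL/CS11DTPG": dtp_request_guid,              # DTP Request (GUID)
        "/EDL/CS12DTPS": dtp_request_sid,               # DTP Request (SID)
    }

def semantic_key(fields):
    # Semantic key recommended for SPO scenarios: request + data package
    # + record number avoids undesired aggregation in transformations.
    return (fields["/EDL/CS06LREQ"], fields["/EDL/CS09DPID"], fields["/EDL/CS10RECN"])
```

The `semantic_key` helper reflects the rule above: the PSA request is preferred for end-to-end identification, with data package and record number guaranteeing record-level uniqueness.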
1.6.1.4.2 Corporate Memory
Corporate Memory requires the DataSource(s) to be mapped to a unique DSO in the Corporate Memory Layer, and all requests must be stored in this DSO (only Write-Optimized DSOs are allowed).
No transformation of data is allowed when loading data into the Corporate Memory Layer from the Acquisition Layer Outbound InfoSource. To create the data flow into the Corporate Memory Layer, as well as the flow back into the Acquisition Layer Outbound InfoSource, a Corporate Memory Inbound InfoSource and a Corporate Memory Outbound InfoSource have to be used (shielding of the DSO).
For master data flows, usage of the Corporate Memory is also mandatory if source data can be deleted (e.g. FlatFile DataSources).
1.6.1.4.3 Quality and Harmonization Layer
In this layer, data is checked for quality and harmonized according to corporate standards. This optional layer has to be implemented by the adoption of a Standard DSO (optional: shielding by Inbound and Outbound InfoSources). All deviations (e.g. usage of a Write-Optimized DataStore Object) have to be aligned with Architecture.
The sources are the Data Acquisition Layer and the Corporate Memory Layer.
Flavours of the Quality and Harmonization Layer:
Technical harmonization
Format, length, etc.
Simple format checks; text fields, dates, etc.
Upper case
Master data referential integrity
Master data integration into one single model
Compounding, concatenation, etc.
Best record
Common transformations, adding non-application-specific information, etc.
Amounts in different currencies
Quantities in different units
When several source DSOs are merged into one result DSO (best record):
o Create an InfoSource that matches the result DSO 1:1 but without any key. Map all source DSOs into this InfoSource.
o You cannot map from one InfoObject to another; all mappings must go 1:1. Only fields present in the source DSOs are mapped.
o If you have non-time-dependent DSOs, you must map the fields for DATETO and DATEFROM to the constant values 99991231 and 19000101.
o In the transformation between the InfoSource and the target DSO you must place the corresponding code.
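As an illustration of typical Quality and Harmonization Layer checks (case unification, date validation, constant DATETO/DATEFROM values), here is a hypothetical Python sketch. The real checks are implemented in BW transformations; field names such as COMP_CODE and DOC_DATE are examples only:

```python
from datetime import datetime

# Hypothetical sketch of typical Q&H checks; field names are examples,
# the real logic lives in BW transformation routines.
def _valid_date(d):
    try:
        datetime.strptime(d, "%Y%m%d")
        return True
    except (ValueError, TypeError):
        return False

def harmonize(record, time_dependent=False):
    out = dict(record)
    if "COMP_CODE" in out:                       # technical harmonization: unify case
        out["COMP_CODE"] = out["COMP_CODE"].upper()
    if "DOC_DATE" in out and not _valid_date(out["DOC_DATE"]):
        out["DOC_DATE"] = "00000000"             # initial value instead of a dump on 0DATE
    if not time_dependent:                       # constant validity interval
        out["DATEFROM"] = "19000101"
        out["DATETO"] = "99991231"
    return out
```

This matches the Acquisition Layer rule above: questionable values pass through as CHAR and are only validated and corrected here, never earlier.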
1.6.1.4.4 Propagation Layer
The Propagation Layer supplies data that is:
Digestible
o Ready to consume
Unflavored
o No application-specific transformations
o Data should give the possibility to compare and verify with the source system
Integrated
o Common semantics
o Common values
Clean
o All sets of data should be disjunct; no intersections between Data Content Areas should exist
Harmonized
o Smoothed data
o Technically unified values (e.g. compounding)
Trimmed to fit DataSources and data persistencies, to reduce data complexity for applications, by
o Extending data by looking up information which applications frequently ask for
o Merging different but highly related DataSources and storing the data in a single propagator, if applications always or frequently request them together
o Collecting data from the same (or similar) DataSource but from different source systems into fewer, or a single, source-system-independent propagator
For using the Propagation Layer, consider the following:
Transactional Data Flow:
It consists of Write-Optimized DSOs shielded by Inbound and Outbound InfoSources, which gives a unified data transfer behavior.
The Twin Propagator approach is mandatory for transactional data. The Delta Propagator contains only requests which have not yet been updated to all connected applications; the Full Propagator is filled only upon request, when an application needs a reload from Corporate Memory.
Data must be stored at the level of granularity given by the DataSource(s). No information originally delivered by the DataSource may be lost on its way to the Propagators (this implies the necessity of Write-Optimized DSOs without any semantic key fields).
Data is integrated; Company Code "SAP AG" in propagator #1 and Company Code "SAP AG" in propagator #2 are in both cases identified as 0001.
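The Twin Propagator behavior described above can be sketched as follows. This is a conceptual Python illustration, not BW code; the class and all names are hypothetical:

```python
# Hypothetical sketch of the Twin Propagator idea: the Delta Propagator
# holds only requests not yet consumed by every connected application;
# the Full Propagator is filled from Corporate Memory on explicit request.
class TwinPropagator:
    def __init__(self, applications):
        self.applications = set(applications)
        self.delta = {}                  # request_id -> (records, apps still pending)

    def load_request(self, request_id, records):
        self.delta[request_id] = (records, set(self.applications))

    def consume(self, app):
        """A connected application pulls all delta requests still pending for it."""
        delivered = []
        for req_id in list(self.delta):
            records, pending = self.delta[req_id]
            if app in pending:
                delivered.extend(records)
                pending.discard(app)
            if not pending:
                del self.delta[req_id]   # housekeeping: all connected apps updated
        return delivered

def fill_full_propagator(corporate_memory, datasource):
    """Reload history from Corporate Memory on explicit request."""
    return [r for r in corporate_memory if r["/EDL/CS01DATS"] == datasource]
```

Note how a request only disappears from the delta store once every connected application has consumed it, which is exactly the housekeeping rule stated above.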
Master Data Flow:
It consists of Standard DSOs, which gives a unified data transfer behavior.
For master data entities that are to be considered a relevant business object (e.g. material master, employee master, profit center), or for high-volume master data (> 1 million records), shielding by Inbound and Outbound InfoSources is mandatory.
For master data entities that are to be considered text, organizational or control master data (e.g. company code, industry code), with a data volume between 1,000 and 1 million records, shielding by Inbound and Outbound InfoSources is optional.
For master data entities with < 1,000 records, shielding by Inbound and Outbound InfoSources is not reasonable.
Data must be stored at the level of granularity given by the DataSource(s).
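The shielding rules above can be summarized in a small decision helper. This is a hypothetical Python sketch; the thresholds (1,000 and 1 million records) are taken from the guideline text:

```python
# Hypothetical sketch encoding the master data shielding rules above.
def infosource_shielding(record_count, business_object=False):
    """Return whether Inbound/Outbound InfoSource shielding is required
    for a master data DSO in the Propagation Layer."""
    if business_object or record_count > 1_000_000:
        return "mandatory"       # relevant business object or high volume
    if record_count >= 1_000:
        return "optional"        # text, organizational or control master data
    return "not reasonable"      # small entities
```

For example, a company code list of a few hundred entries needs no shielding, while an employee master is always shielded regardless of volume.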
Partitioning:
Implementation as a Semantically Partitioned Object (SPO), or an own partitioning strategy, is mandatory for a data volume > 100 million records (for ILM purposes, query performance, etc.).
1.6.2.1 Data Mart Layer
The Data Mart (along with the objects in the Virtual Reporting Layer) is built with an eye on the reporting needs. All reporting requirements must be met in the modeling of the Data Marts.
In modeling the Data Mart you must consider:
* KPIs to be reported
* Granularity of the information
Performance also has to be considered. Please also consider the pros and cons of using InfoCubes or DSOs (see this article here for In-Memory HANA-Optimized InfoCubes and DataStore Objects). As only Standard DataStore Objects are allowed and reporting usage is possible, consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation" (in case of using the setting "During Activation", please consider the corresponding guidelines on DataStore Object Batch Settings). InfoSets are technically possible but not recommended; CompositeProvider technology should be used instead.
The Data Mart Layer can either be connected in the data flow to an InfoSource of the Business Transformation Layer or to a shielding Outbound InfoSource of the persistent Business Integration Layer.
InfoCubes are not allowed as a source in the data flow.
No query creation is allowed on this layer. Persistency objects in the Data Mart Layer can also be shielded by InfoSources.
Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.
1.6.2.3 Operational Data Store
Contains Real Time, Near Real Time and Operational Data. For operational data, Local Providers in a SAP BW Workspace and Standard/Direct Update DSOs are possible. For Standard DSOs, consider setting the flag "Create SIDs" either to "During Reporting" or "During Activation"; for Direct Update DSOs, "During Reporting".
For Real Time and Near Real Time, VirtualProviders and HybridProviders are possible.
No query creation is allowed on this layer; InfoSource shielding is possible.
Objects from this layer have to be included into a MultiProvider or CompositeProvider for reporting.
1.6.2.4 Exchange Zone
Contains data for export to external systems (systems that are not part of the NewBI system landscape).
This layer may only include Open Hub Destinations; no InfoSource shielding is relevant here.
No reporting is allowed on this layer; inclusion into a MultiProvider or CompositeProvider is also not allowed.
Data supply is only allowed from the Business Transformation, Business Integration or Data Mart Layer.
1.6.2.5
1.7 Allowed Lookups
1.8
The following matrix shows the allowed data flows in the LSA from a Transformation perspective. This is also reflected by the related authorization profiles for developing content (profile BI:EDL:TEAM + Data Owner and Information Owner profiles).
Data flows that already exist and were created following the LSA 1.0 guidelines can still be changed, but a full adoption of LSA 2.0 should nevertheless be executed.
The allowed sources and targets for Transformations and DTPs can be derived from this matrix.
Transactional Data Matrix
The XLS version can be found here: Allowed Data Flow 1.0
1.9
The overall BW architecture concept sees BWP as the central staging system for IWP and OPP:
1. IWP and OPP get their data only from BWP. A direct connection from other source systems (like ISP or ICP) to IWP and OPP is not allowed. All data loaded into IWP and OPP must be staged through BWP's EDL.
2. Exceptions can be made for real-time/virtual access to source system data, but those scenarios have to be discussed with the Architecture Team in advance. The general recommendation is to build real-time scenarios in BWP.
3. Data flows from IWP or OPP back to BWP are strictly forbidden. Applications that provide data for other applications (through the EDL) have to be built in BWP. Exceptions are one-time loads of historical data if an application is migrated from IWP/OPP to BWP.
Rules 1 and 2 apply only to new applications in IWP and OPP. Existing applications and content may continue to load from source systems other than BWP as long as no migration is planned.
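Rules 1-3 can be expressed as a simple connection check. This is a hypothetical Python sketch covering only the basic cases; the documented exceptions still require alignment with the Architecture Team:

```python
# Hypothetical sketch of the central staging rules: IWP/OPP load only
# from BWP (rule 1, new applications only), and flows from IWP/OPP
# back into BWP are forbidden (rule 3).
def connection_allowed(source, target, new_application=True):
    if target in ("IWP", "OPP"):
        # rule 1 applies only to new applications; existing loads may remain
        return source == "BWP" or not new_application
    if target == "BWP" and source in ("IWP", "OPP"):
        return False  # rule 3: strictly forbidden (one-time migration loads excepted)
    return True
```

So a new ISP-to-IWP connection is rejected, while an existing one may keep loading until a migration is planned.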
Temporary exceptions to rules 1 and 2 have been approved by Jürgen Habermeier for the IWP Transformation Program (IWP sunset preparations). The following IWP DataSources may be connected to BWP; data loads have to follow BWP's architecture guidelines (usage of the EDL):
IWP DataSource | Description | Limited until | Solution Architect
8DPAPAEA, 8DPAPAEN, 8DPAPAE1, 8DPAPAE2 | Headcount (Adjustments / Closed Months Interim / Closed Months / Open Months) | MD7.2014 | Nicolas Cottin
8DPAPACA, 8DPAPACN, 8DPAPAC1, 8DPAPAC2 | Personnel Actions (Adjustments / Closed Months Interim / Closed Months / Open Months) | MD7.2014 | Nicolas Cottin
8DPAPAVN, 8DPAPAV1, 8DPAPAV2 | Positions and Vacancies (Closed Months Interim / Closed Months / Open Months) | MD7.2014 | Nicolas Cottin
8PPCA_P01 | PCA: Plan Transaction Data | MD8.2014 | Andreas Weisenberger
8PROLFCS01, 8PROLFFI01, 8PROLFHR01, 8PROLFIV01, 8PROLFMK01, 8PROLFMA01, 8PROLFSW01, 8PROLFSW02 | Rolling Forecast@SAP (Finance / HR / Investment / Markets / Material / Sales) | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFCS01 | PROLFCS01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFFI01 | PROLFFI01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFHR01 | PROLFHR01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFIV01 | PROLFIV01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMA01 | PROLFMA01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFMK01 | PROLFMK01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW01 | PROLFSW01 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_PROLFSW02 | PROLFSW02 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_ATTR | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
ZROLF_EXT_ROLFADM31_TEXT | ROLFADM31 RDA | MD9.2014 | Sead Pozderac, Tarkan Citirgi
8DYBPRA01 | YCRM_BP Attributes DL | MD10.2014 | Iulia Maurer, Chandramouli Reddy
8PMKISMA01 | | 2015 | tbd
8PMKISKPIP | | 2015 | tbd
80CRM_MKTELMH | | 2015 | tbd
Any further exceptions have to be aligned with Jürgen Habermeier, Bert Glaser and the BI Platform Team.
Implementation | Examples in BWP
VirtualProvider based on DTP |
VirtualProvider based on Function Module | Outdated: CRM Opportunities (ADRM)
Other |
1.12 Planning
1. Planning applications are created in the SAP BW systems, and data is created in the Architected Data Mart Layer. The general rule for data flow also applies here, e.g. data created in the planning applications cannot flow down into the Business Transformation/Integration Layer. There is no separate Planning Layer; all planning solutions (DSOs and InfoCubes) have to be developed in the O and P Layers.
2. Planning data is corporate data and needs to follow the general flow of data; thus it must enter the Acquisition Layer and follow the rules for creating content (if not planned directly in SAP BW).
3. Corporate querying of planning data is done using the designed data flow in SAP BW. You may only use the Planning InfoCube (via the Virtualization Layer) or an Aggregation Level for querying as part of the planning application. The Virtualization Layer can include data from several applications, which includes planning applications.
4. If there are providers in the Business Integration Layer (O) or Data Mart Layer (P) which have data required for planning in the required granularity, this data can be used in planning models and be read with planning functionality.
5. If a planning solution requires data with another granularity, or additional business logic which is currently not available, realign with the responsible product/solution architect in order to check whether existing content can be enhanced or new content has to be created for this planning solution.
6. If a content requires planning figures from another content, these planning figures have to be provided through the EDL as defined in this guideline. Therefore an Open Hub has to be used to create a table, which can then be used in the EDL Acquisition Layer.
2