SAP BW
LIS Extraction
Contents: Introduction, Steps Involved in LIS Extraction, Internal Architecture for LIS Extraction, Conclusion, Questions and Answers
[Diagram: the information structure S260 is filled via communication structures (MCVBAK, MCVBAP, ...) from the sales document tables VBAK, VBAP, VBEP, VBUK and VBUP]
Steps to create an information structure:
1. Create the field catalogs (MC18) - similar to an InfoObject catalog.
2. Create the information structure (MC21) - similar to an InfoCube.
3. Set up the update rules (MC24) - similar to update rules in BW.
(SAP-defined information structure)

Step 1: Go to transaction LBW0.
Step 2: Enter the name of the information structure.
Step 3: Review the current settings for the information structure.
Step 4: Generate the DataSource (if it has not been generated already).
Step 5: Deactivate the delta update in transaction LBW1.
Step 6: Run the statistical setup to fill the information structure with historical data.
Version maintenance: productive data normally resides in version 000, so load the setup data into version &(A and then copy it from that version to 000.

Enter a date of termination for the setup run, then click Execute.
Copy the data from version &(A to version 000 using transaction LBW2.
Step 7: Transport the DataSource using RSA6.

Step 8: Replicate the DataSource in SAP BW, assign it to an InfoSource, and activate it.
Step 9: In SAP R/3, go to LBW1, set the updating to 'No updating', and save.
Step 10: Go back to SAP BW, create the InfoPackage, and run the initial load.

Step 11: Once the delta initialization is successful, delta updating must be enabled again in SAP R/3 before running delta loads.
Step 12: Once the delta update is activated in SAP R/3, delta loads can be run in SAP BW.
[Diagram: delta flow for S260 - the sales order transactions VA01, VA02 and VA03 update the communication structures MCVBAK and MCVBAP; the V1/V2 updates write delta records alternately into the tables S260BIW1 and S260BIW2, from which the delta load to BW is served]
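The alternating-table delta shown in the diagram can be modeled in a few lines. This is a minimal illustrative sketch, not SAP code: the class, method names and record layout are invented, and the real switch between the two delta tables is maintained in LIS Customizing; only the table names S260BIW1/S260BIW2 come from the slide above.

```python
# Minimal sketch of the LIS two-table delta mechanism (illustrative only).
class LisDelta:
    def __init__(self):
        self.tables = {"S260BIW1": [], "S260BIW2": []}
        self.active = "S260BIW1"   # table currently filled by the V2 update

    def v2_update(self, record):
        # Document postings write their delta records to the active table.
        self.tables[self.active].append(record)

    def delta_load(self):
        # The switch flips first, so new postings go to the other table;
        # the load then reads and clears the previously active table.
        to_read, self.active = self.active, (
            "S260BIW2" if self.active == "S260BIW1" else "S260BIW1"
        )
        records, self.tables[to_read] = self.tables[to_read], []
        return records

q = LisDelta()
q.v2_update({"vbeln": "0001", "netwr": 100.0})
q.v2_update({"vbeln": "0002", "netwr": 250.0})
print(len(q.delta_load()))  # 2
print(q.active)             # S260BIW2
```

Because one table is always free for new postings while the other is being extracted, a delta load never blocks ongoing document entry.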
Conclusion
Disadvantages of LIS
LO Extraction
Contents: Introduction, Steps Involved in LO Extraction, Internal Architecture for LO Extraction, Conclusion, Questions and Answers
Step 1: Activate the DataSource from the Business Content using transaction LBWE.
Step 2: Customize the extract structure in LBWE.

Step 3: Generate the DataSource.

Step 4: Once the DataSource is generated, make the necessary settings (Selection, Hide, Inversion, Field Only Known in Exit) and save the DataSource.
Step 5:

Step 7: Replicate the DataSource in SAP BW, assign it to an InfoSource, and activate it.

Step 8: Run the statistical setup to fill the data into the setup tables.
Step 9: Go back to SAP BW, create the InfoPackage, and run the initial load.

Step 10: Once the delta initialization is successful, set up the V3 job in SAP R/3 using LBWE before running delta loads.
Step 11: Once the delta update is activated in SAP R/3, delta loads can be run in SAP BW.
Conclusion
Delta update modes (V3), advantages of LO extraction, and a comparison of LIS with LO extraction.
Delta Extraction with the V3 Update (II): Data Flow Schematic for Logistics Extraction with the V3 Update
[Diagram: documents 1..n are posted to the document tables by the V1 update; a V3 collective run moves the records from the update tables to the delta queue (stopped qRFC), from which a delta request transfers them to BW (PSA, ODS, cube) over time]
Update Methods
1) Serialized V3 update: since the V3 update does not itself recognize serialized processing of update data, the Serialized V3 Update function was created through several corrections in SAP Basis, in order to be able to serialize in step A as well.
The following problems continue to occur in conjunction with the V3 update in the logistics extraction of transaction data:
The serialized V3 update can only ensure the correct sequence of extraction data for a document if:

1. The document is not changed more than once within the span of a second.
2. The clocks of all application server instances in the system are permanently and exactly synchronized, because the creation time of the update record (taken from the local time of the application server) is used for sorting the update data.
3. The document previously had no errors in the V2 update, because the V3 update only processes update data that was successfully processed by the V2 update.

Independently of the serialization, update errors that occur in the V2 update of a transaction and cannot be reposted mean that the still-open V3 updates for that transaction can never be processed. This can lead to inconsistent data in the BW system.
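The first limitation, same-second changes, can be made concrete with a short sketch; the record layout and values below are invented for illustration. Two changes to the same document created within the same second carry identical timestamps, so sorting by creation time cannot tell which came first:

```python
from datetime import datetime

# Hypothetical update records for one document; the creation time has
# one-second granularity, and the later change happens to be stored first.
updates = [
    {"doc": "4711", "change": "qty 12 -> 15", "ts": datetime(2024, 1, 5, 10, 0, 1)},
    {"doc": "4711", "change": "qty 10 -> 12", "ts": datetime(2024, 1, 5, 10, 0, 1)},
]

# The serialized V3 update sorts by creation time. Both records share the
# same sort key, so sorting leaves their stored order unchanged and the
# later change is extracted first.
updates.sort(key=lambda r: r["ts"])
print(updates[0]["change"])  # qty 12 -> 15
```

With sub-second or sequence-number keys this ambiguity would disappear, which is exactly what the timestamp-based sort cannot provide.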
[Diagram: the serialized V3 update reads the update header table VBHDR once per language key (:EN:, :DE:, ...), and each step is a full table scan; the update records stem from transactions such as VA01 and VA02]
[Diagram: direct delta - each document is posted to the document tables by the V1 update and written directly to the delta queue (stopped qRFC); a delta request transfers the data to BW (PSA, ODS, cube) over time]
[Diagram: queued delta - each document is posted to the document tables by the V1 update and written to the extraction queue; a collective run moves the data to the delta queue (stopped qRFC), from which a delta request transfers it to BW (PSA, ODS, cube) over time]
CO-PA Extraction
CO-PA collects all the OLTP data needed to calculate contribution margins (sales, cost of sales, overhead costs). CO-PA also has powerful reporting tools and planning functions; however, its reporting facility is limited, because the integrated cross-application reporting concept is not as differentiated as it is in BW. In addition, the OLTP system is optimized for transaction processing, and a high reporting workload has a negative impact on the overall performance of the system.
Flow of actual values: the production variances calculated for cost objects (for example production orders), i.e. the differences between the actual cost of goods manufactured and the standard costs, are divided into variance categories and settled to profitability segments. "What are the top products and customers in our different divisions?" is a typical question that can be answered with the CO-PA module.
Basic Concepts
Characteristics are the fields in an operating concern according to which data can be differentiated in Profitability Analysis. Each characteristic in an operating concern has a series of valid characteristic values. A profitability segment is a fixed combination of valid characteristic values.
Characteristics
Some characteristics, such as Material, Customer and Company Code, are predefined in the operating concern. In addition to these fixed characteristics, we can define up to 50 characteristics of our own. In most cases, between 10 and 20 characteristics will satisfy the profitability analysis requirements.
Value fields: key figures such as revenue, cost of goods sold and overhead costs are stored in value fields.
Organizational Structure
The value fields and characteristics required for detailed analysis vary from industry to industry and between individual customers. In CO-PA we can therefore configure the structure of one or more operating concerns in each individual installation. An operating concern is an organizational structure that groups controlling areas together, in the same way that a controlling area groups companies together.
Database structures in CO-PA: actual line item table CE1xxxx, plan line item table CE2xxxx. Line items contain information at document level that in most cases is too detailed for analysis, for example the CO-PA document number, the sales document number and the posting date. CO-PA therefore maintains summarizations of the data, which are used by all CO-PA functions such as reporting, planning, assessments, settlements and so on.
Database structures in CO-PA: segment table CE4xxxx. The characteristics that describe the market are first separated from the rest of the line items. Each combination of characteristic values is stored under a profitability segment number; the link between the profitability segment number and the characteristic values is maintained in the segment table.
Database structures in CO-PA: segment level CE3xxxx. The value fields are summarized at profitability segment and period level and stored in table CE3xxxx. This table contains the total values per period for each profitability segment number.
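The split between CE4xxxx and CE3xxxx can be sketched as a two-step lookup. The table contents, characteristic names and value fields below are invented for illustration; only the table roles come from the slides above:

```python
# CE4xxxx (segment table): characteristic combination -> segment number.
ce4 = {
    ("DE", "CUST1", "PROD9"): 1001,
}

# CE3xxxx (segment level): (segment number, period) -> summarized value fields.
ce3 = {
    (1001, "2024001"): {"revenue": 5000.0, "cogs": 3200.0},
}

def read_segment_level(country, customer, product, period):
    # First resolve the profitability segment number from the
    # characteristics, then read the totals for that segment and period.
    seg = ce4[(country, customer, product)]
    return ce3[(seg, period)]

print(read_segment_level("DE", "CUST1", "PROD9", "2024001")["revenue"])  # 5000.0
```

The shape mirrors the InfoCube comparison drawn in the next slide: CE4xxxx plays the role of the dimension tables, CE3xxxx that of the fact table.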
Storage Procedure
We can compare an operating concern, consisting of segment table and segment level, to an InfoCube: the InfoCube comprises dimension tables (segment table) and a fact table (segment level). Unlike the fact table key, the segment-level key contains further keys, such as the processing type, in addition to the key field from the segment table. Characteristics in the operating concern correspond to characteristics (or attributes) in the InfoCube, and value fields can be regarded as key figures.
Summarization levels in an operating concern have the same function as aggregates for an InfoCube; the difference is that aggregates are managed together with the InfoCube itself, while summarization levels are updated at regular intervals, usually daily. Line items in CO-PA are comparable with an operational data store (ODS).
Data staging overview: to provide DataSources for BW, all CO-PA DataSources must be generated in the source system. A DataSource is defined at operating concern and client level and contains the following information: the name of the operating concern, the client, a subset of the characteristics, a subset of the value fields, and a time stamp indicating which data has already been loaded into BW.
Display IMG
Creating the DataSource: since a DataSource is always defined at operating concern and client level, a standard name is generated that starts with 1_CO_PA_<%CL>_<%ERK> (client and operating concern). We can change this name if necessary; however, the prefix 1_CO_PA is mandatory.
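As a sketch, the generated default name can be assembled as follows; client 100 and operating concern IDEA are example values only, and the helper function is invented for illustration:

```python
def copa_datasource_name(client: str, operating_concern: str) -> str:
    """Build the generated default name for a CO-PA DataSource.

    The prefix 1_CO_PA is mandatory; the remainder of the name may be
    changed afterwards if necessary.
    """
    return f"1_CO_PA_{client}_{operating_concern}"

print(copa_datasource_name("100", "IDEA"))  # 1_CO_PA_100_IDEA
```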
Header Information
The characteristics from the segment table are those maintained in transaction KEQ3. By default, all characteristics are selected.
DataSource, accounting-based: when generating a CO-PA DataSource, select the accounting-based option. The fields KOKRS, BUKRS and KSTAR are compulsory. No characteristics from the line items are available, because accounting-based DataSources do not contain characteristics, and no value fields or calculated key figures are available. KOKRS and PERIO must be selected as selection fields.
SAP AG 2002
FI-SL Extraction
Contents
FI-SL: Positioning and Overview, Data Structures in FI-SL, Generating the DataSource Extractor, Extracting Application Data
FI-SL: positioning. FI-SL is a system in which data (planned and actual) from different levels of OLTP applications, e.g. FI-GL and CO-CCA, can be combined. In R/3, FI-SL enhances the functions and usability of these applications and includes planning functions and reporting tools. FI-SL reporting is, however, restricted: cross-application reporting is not as diverse as SAP BW reporting, and since the OLTP system is optimized for transaction processing, a high reporting workload would have a negative impact on overall performance.
FI-SL Overview
In the FI-SL application we can define our own ledgers for reporting purposes, and we can run these ledgers with account assignment objects from various applications (account, cost center, business area, profit center). The modules available in SL offer several options for manipulating data that has been transferred into SL from other SAP applications and external systems, e.g. collecting information, combining information, forming totals and modifying totals.
SL is a receiver system in which data from various other applications can be stored. Flexible data structures: the addition of an extra field to a financial accounting document is an example; this field can be filled either in the application when the document is posted or through a data upload. A ledger is updated either at company code level or at company level.
Adjustment postings (direct data entry) can be made in the FI-SL system, which allows us to post different versions of documents, post documents with a balance not equal to zero, and enter additional currency amounts manually. Fiscal year variant: it determines the number of periods in a fiscal year and makes it possible to create weekly or monthly reports. Validations: they allow us to check or modify the validity of data when it enters the FI-SL system.
Data Flow
In addition to data from FI, CO, MM and SD, external data and directly entered data can also be posted into FI-SL. The update takes place either online or as subsequent processing; with subsequent processing, a predefined number of data records is transferred to the FI-SL database tables at a certain point in time. Various functions are available to update data in FI-SL. Validation: checks the incoming data against freely definable rules before the update. Substitution: replaces the data with modified data before the update.
Field transfer: the fields that are transferred determine which characteristics are used in the FI-SL system. Operations available: the currency translation function translates amounts that have been posted to a ledger in the FI-SL system. Balance carryforward: at the end of the fiscal year, this function transfers actual and planned values from the previous fiscal year into the new fiscal year. We can also create rollup ledgers containing cumulated and summarized data from one or more of the other ledgers to speed up report processing.
Data structures in FI-SL: table group ZZ. When we create an FI-SL table group, up to five tables with fixed naming conventions are created: the summary table (T), the actual line item table (A), the plan line item table (P), object table 1 (object/partner) (O), and the optional object table 2 (movement attributes) (C).
In the object table, an object number is assigned to each combination of characteristics (similar to a dimension table in BW). The key figure values are stored in the summary table together with the resulting object numbers (similar to the fact table in BW).
Summary Table
A particular feature of the summary table data model is the period block: the key field RPMAX (period) specifies the meaning of the period key figures TSL01 to TSL16, HSL01 to HSL16 and KSL01 to KSL16. If RPMAX has the value 016, the key figure values refer to the periods of the fiscal year variant set in Customizing, up to period 016. Handling currencies: the key figures TSL01 to TSL16 are always assigned to the transaction currency specified in the RTCUR field (currency key).
Handling Currencies
The key figures HSL01 to HSL16 are assigned to the second currency specified in Customizing, e.g. the local currency. The key figures KSL01 to KSL16 are assigned to the third currency specified in Customizing, e.g. the company code currency. The key figures MSL01 to MSL16 hold quantity values.
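The period-block layout can be sketched as one record with 16 buckets per currency type. This is a deliberately simplified model (Python lists instead of the actual columns TSL01..TSL16 and so on), and all amounts are invented:

```python
# Simplified sketch of one FI-SL summary record (illustrative layout).
record = {
    "RTCUR": "EUR",         # transaction currency for the TSL buckets
    "TSL": [0.0] * 16,      # TSL01..TSL16: transaction currency amounts
    "HSL": [0.0] * 16,      # HSL01..HSL16: second currency (e.g. local)
    "KSL": [0.0] * 16,      # KSL01..KSL16: third currency
    "MSL": [0.0] * 16,      # MSL01..MSL16: quantities
}

def post(rec, period, tsl, hsl, ksl, qty):
    # Period 003 lands in bucket index 2, i.e. in columns TSL03/HSL03/...
    i = period - 1
    rec["TSL"][i] += tsl
    rec["HSL"][i] += hsl
    rec["KSL"][i] += ksl
    rec["MSL"][i] += qty

post(record, 3, 250.0, 230.0, 275.0, 10.0)
print(record["TSL"][2], record["MSL"][2])  # 250.0 10.0
```

A whole fiscal year thus fits into a single row per object/segment, which is what makes the summary table compact compared with line items.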
Ledgers in FI-SL
A ledger is a closed reporting area that always refers to exactly one summary table; it is a logical part of the summary table, and several ledgers can be defined for one summary table. A ledger is created for a summary table in Customizing. The physical storage of the transaction data always remains in the tables of the table group.
Updating FI-SL Data in BW

A BW DataSource prepares the data that is to be updated in BW. SAP cannot ship FI-SL DataSources with the BW Business Content for non-standard ledgers. To provide an FI-SL DataSource for BW, an extract structure must first be generated for the FI-SL summary table, and finally a DataSource must be defined for the summary table ledger.
Step 1

Step 2