
1. What are the extractor types?

Application Specific:
o BW Content Extractors: FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors: LIS, FI-SL, CO-PA

Cross Application (Generic Extractors):
o DB View, InfoSet, Function Module

2. What are the steps involved in LO Extraction?


The steps are:
o RSA5 Select the DataSources
o LBWE Maintain DataSources and Activate Extract Structures
o LBWG Delete Setup Tables
o OLI*BW Fill the setup tables
o RSA3 Check extraction and the data in Setup tables
o LBWQ Check the extraction queue
o LBWF Log for LO Extract Structures
o RSA7 BW Delta Queue Monitor

3. How to create a connection with LIS InfoStructures?


LBW0 Connecting LIS InfoStructures to BW

4. What is the difference between ODS and InfoCube and MultiProvider?


ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for
drilldown and RRI.
CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
MultiProvider: Does not hold physical data. It allows access to data from different
InfoProviders (Cube, ODS, InfoObject) and is also preferred for reporting.

5. What are Start routines, Transfer routines and Update routines?


Start Routines: The start routine is run for each DataPackage after the data has been written
to the PSA and before the transfer rules have been executed. It allows complex
computations for a key figure or a characteristic. It has no return value. Its purpose is to
execute preliminary calculations and to store them in global DataStructures. This structure
or table can be accessed in the other routines. The entire DataPackage in the transfer
structure format is used as a parameter for the routine.

Transfer / Update Routines: These are defined at the InfoObject level, similar to the start
routine, and are independent of the DataSource. We can use them to define global data and
global checks.
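As an illustration, the start routine in BW 3.x update rules is a generated ABAP FORM that receives the whole data package. The following is only a hedged sketch: the real FORM interface is generated by the system and differs by release, and the communication structure /BIC/CSZSALES and the field /BIC/ZZSTATUS are placeholders, not actual objects.

* Sketch of an update-rule start routine (simplified, generated interface).
FORM startroutine
  TABLES   data_package STRUCTURE /bic/cszsales   " communication structure (placeholder)
  CHANGING abort        LIKE sy-subrc.

* Package-wide preprocessing before the individual rules run,
* e.g. dropping records that must never reach the data target.
  DELETE data_package WHERE /bic/zzstatus = 'D'.

* Results of preliminary calculations can be stored in global data
* (declared in the global part of the routine) and reused in the
* transfer/update routines that run afterwards.

  abort = 0.   " a value <> 0 cancels the update of this data package
ENDFORM.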

6. What is the difference between start routine and update routine, when, how and why are
they called?
The start routine works on the entire DataPackage before the rules are executed, while
update routines are executed per record when updating the data targets.
7. What is the table that is used in start routines?
The table always has the structure of the target ODS or InfoCube; for example, if the target
is an ODS, the structure of its active table is used.

8. Explain how you used Start routines in your project?


Start routines are used for mass processing of records. In the start routine, all the records of
the DataPackage are available for processing, so we can process them together. In one
scenario, we wanted to apply a size percentage split to the forecast data. For example, if
material M1 is forecast at 100 for May, then after applying the size percentages (Small 20%,
Medium 40%, Large 20%, Extra Large 20%) we wanted four records against the single
record coming in, and this was achieved in the start routine, as shown in the sketch below.
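A hedged sketch of how this splitting could look inside the start routine; DATA_PACKAGE is supplied by the generated routine, and the fields /BIC/ZSIZE and /BIC/ZFCQTY are placeholders rather than the actual project fields.

* Start routine body: split each forecast record into four size records.
DATA: ls_in  LIKE data_package,
      ls_out LIKE data_package,
      lt_out LIKE data_package OCCURS 0,
      lv_qty LIKE data_package-/bic/zfcqty.

LOOP AT data_package INTO ls_in.
  lv_qty = ls_in-/bic/zfcqty.             " original forecast quantity

  ls_out = ls_in.
  ls_out-/bic/zsize  = 'S'.               " Small 20%
  ls_out-/bic/zfcqty = lv_qty * 20 / 100.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize  = 'M'.               " Medium 40%
  ls_out-/bic/zfcqty = lv_qty * 40 / 100.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize  = 'L'.               " Large 20%
  ls_out-/bic/zfcqty = lv_qty * 20 / 100.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize  = 'XL'.              " Extra Large 20%
  ls_out-/bic/zfcqty = lv_qty * 20 / 100.
  APPEND ls_out TO lt_out.
ENDLOOP.

* Replace the incoming package with the expanded set of records.
data_package[] = lt_out.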

9. What are Return Tables?


When we want to return multiple records instead of a single value, we use the return table in
the update routine. Example: if we have the total telephone expense for a cost center, using a
return table we can get the expense per employee.
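A hedged sketch of the body of such a key-figure routine with the "Return table" option enabled: RESULT_TABLE, COMM_STRUCTURE and RETURNCODE are supplied by the generated routine, while GT_EMPLOYEES, the cost-center field and /BIC/ZTELEXP are illustrative placeholders.

* Global part of the routine (filled e.g. in the start routine):
DATA: gt_employees TYPE TABLE OF string.   " employee IDs (placeholder)

* Routine body: distribute the cost-center telephone expense per employee.
DATA: ls_result LIKE result_table,
      lv_emp    TYPE string,
      lv_emps   TYPE i.

DESCRIBE TABLE gt_employees LINES lv_emps.
CHECK lv_emps > 0.

LOOP AT gt_employees INTO lv_emp.
  CLEAR ls_result.
  ls_result-costcenter     = comm_structure-costcenter.
  ls_result-/bic/zemployee = lv_emp.
* One incoming record yields several result rows.
  ls_result-/bic/ztelexp   = comm_structure-/bic/ztelexp / lv_emps.
  APPEND ls_result TO result_table.
ENDLOOP.

returncode = 0.   " 0 = result rows are posted to the data target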

10. How do start routine and return table synchronize with each other?
The return table is used in the key-figure update routine, which runs after the start routine; the start routine can prepare global data (for example lookup tables) that the return-table routine then uses to generate multiple records.

11. What is the difference between V1, V2 and V3 updates?


V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same
time as the document update (in the application tables).
V2 Update: It is an Asynchronous update. Statistics update and the Document update take
place as different tasks.
V1 & V2 don't need scheduling.

Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE).
Here, document data is collected in the order it was created and transferred into the BW as
a batch job. The transfer sequence may not be the same as the order in which the data was
created in all scenarios. V3 update only processes the update data that is successfully
processed with the V2 update.

12. What is compression?


Compression collapses the requests in an InfoCube: the request IDs are deleted and the data is moved from the F fact table to the E fact table, which saves space and improves query performance.

13. What is Rollup?


This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have
not performed a rollup then the new InfoCube data will not be available while reporting on
the aggregate.

14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
It is a method of dividing a table to enable faster access. SAP uses fact table partitioning to
improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table
partitioning helps reports run faster because data is read only from the relevant partitions,
and table maintenance also becomes easier. Oracle, Informix and IBM DB2/390 support
table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.

15. How many extra partitions are created and why?


Two extra partitions are created: one for dates before the start of the partitioning range and one for dates after its end, so that records outside the range can still be stored.

16. What are the options available in transfer rule?


InfoObject
Constant
Routine
Formula

17. How would you optimize the dimensions?


We should define as many dimensions as needed, taking care that no single dimension table
exceeds roughly 20% of the fact table size.

18. What are Conversion Routines for units and currencies in the update rule?
Using this option we can write ABAP code for unit/currency conversion. If we enable this
flag, the unit of the key figure appears in the ABAP code as an additional parameter. For
example, we can convert a weight from pounds to kilos, as in the sketch below.
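A hedged sketch of such a conversion in an update routine: the parameters COMM_STRUCTURE, RESULT, UNIT and RETURNCODE follow the usual generated routine in spirit, but their exact names and the weight/unit fields used here are assumptions for illustration.

* Update routine body: convert a weight key figure from pounds to kilograms.
CONSTANTS: c_lb_to_kg TYPE f VALUE '0.45359237'.   " 1 lb = 0.45359237 kg

IF comm_structure-unit_of_wt = 'LB'.
  result = comm_structure-ntgew * c_lb_to_kg.
  unit   = 'KG'.
ELSE.
  result = comm_structure-ntgew.
  unit   = comm_structure-unit_of_wt.
ENDIF.

returncode = 0.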

19. Can an InfoObject be an InfoProvider, how and why?


Yes, when we want to report on Characteristics or Master Data. We have to right click on
the InfoArea and select "Insert characteristic as data target". For example, we can make
0CUSTOMER as an InfoProvider and report on it.

20. What is Open Hub Service?


The Open Hub Service enables us to distribute data from an SAP BW system into external
data marts, analytical applications, and other applications, ensuring controlled distribution
across several systems. The central object for exporting data is the InfoSpoke, in which we
define the source and the target object for the data. In this way, BW becomes a hub of the
enterprise.
21. How do you transform Open Hub Data?
Using a BAdI we can transform Open Hub data according to the destination requirements.

22. What is ODS?


An ODS (Operational Data Store) object is used for detailed storage of data. We can
overwrite data in the ODS, and the data is stored in transparent tables.

23. What are BW Statistics and what is its use?


They are a group of Business Content InfoCubes used to measure performance of queries
and data loads. They also show the usage of aggregates, the OLAP engine and warehouse
management.

24. What are the steps to extract data from R/3?


Replicate DataSources
Assign InfoSources
Maintain Communication Structure and Transfer rules
Create an InfoPackage
Load Data

25. What are the delta options available when you load from flat file?
The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)

SOME QUESTIONS AND ANSWERS:

Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).

Q) WHERE THE PSA DATA IS STORED?


In PSA table.

Q) WHAT IS DATA SIZE?


The volume of data one data target holds (in no. of records)

Q) Different types of INFOCUBES.


Basic and Virtual (RemoteCube, SAP RemoteCube and MultiCube).

A virtual cube is used when the information has to be reported online, for example a railway
reservation system. To design a virtual cube you write a function module that links it to the
source table; the virtual cube is only a structure, and whenever the table is updated the
virtual cube fetches the data from the table and displays the report online. For more
information, go to https://www.sdn.sap.com/sdn/index.sdn and search for "Designing Virtual
Cube"; you will find good material on designing the function module.
Q) INFOSET QUERY.
An InfoSet query can be made up of ODS objects and characteristic InfoObjects with master data.

Q) IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.

Q) ROUTINES?
Routines exist in the InfoObject, as transfer routines, update routines and start routines.

Q) BRIEF SOME STRUCTURES USED IN BEX.


You can create structures in the rows and columns of a query.

Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?


Variables can be used for texts, formulas, hierarchies, hierarchy nodes and characteristic
values.

The variable processing types are:

Manual entry /default value


Replacement path
SAP exit
Customer exit
Authorization

Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?


You can drill down to any level by using Navigational attributes and jump targets.

Q) WHAT ARE INDEXES?


Indexes are database indexes, which help in retrieving data quickly.

Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.


Help! Refer documentation

Q) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?


No.
Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
KPIs indicate the performance of a company; they are key figures.

Q) AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.


After image (correct me if I am wrong)

Q) REPORTING AND RESTRICTIONS.


Help! Refer documentation.

Q) TOOLS USED FOR PERFORMANCE TUNING.


ST22, number range buffering, deleting indexes before loads, etc.

Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW DO YOU SCHEDULE THE DATA LOADS DAILY?
The process chain is scheduled as a daily background job (SM37 jobs).

Q) AUTHORIZATIONS.
Profile generator

Q) WEB REPORTING.
What are you expecting??

Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?


Of course

Q) PROCEDURES OF REPORTING ON MULTICUBES


Refer to help. What are you expecting? A MultiCube works on a union condition.

Q) EXPLAIN TRANSPORTATION OF OBJECTS?

Dev -> Q and Dev -> P

Q) What types of partitioning are there for BW?

There are two partitioning performance aspects for BW (Cube & PSA):
A) Query data retrieval performance improvement:
Partitioning by (say) date range improves data retrieval by making best use of database
[data range] execution plans and indexes (of, say, the Oracle database engine).
B) Transactional load partitioning improvement:
Partitioning based on expected load volumes and data element sizes improves data loading
into the PSA and cubes by InfoPackages (e.g. without timeouts).

Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are
there any standard procedures for checking them or matching the number of records?

A) You can go to R/3 transaction RSA3 and run the extractor. It will give you the number of
records extracted. Then go to the BW monitor to check the number of records in the PSA
and verify that it is the same; also check the monitor header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out extraction problems
in R/3. It is simple to use, but only really tells you if the extractor works. Since records that
get updated into Cubes/ODS structures are controlled by Update Rules, you will not be able
to determine what is in the Cube compared to what is in the R/3 environment. You will need
to compare records on a 1:1 basis against records in R/3 transactions for the
functional area in question. I would recommend enlisting the help of the end user
community to assist since they presumably know the data.

To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will
see the record count, you can also go to display that data. You are not modifying anything
so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you
how many records should be expected in BW for a given load. You have that information in
the monitor RSMO during and after data loads. From RSMO for a given load you can
determine how many records were passed through the transfer rules from R/3, how many
targets were updated, and how many records passed through the Update Rules. It also
gives you error messages from the PSA.

Q) Types of Transfer Rules?

A) Field-to-field mapping (InfoObject), constant, formula and routine.

Q) Types of Update Rules?

A) (Check box), Return table

Q) Transfer Routine?

A) Routines which we write in the transfer rules.

Q) Update Routine?
A) Routines which we write in the update rules.

Q) What is the difference between writing a routine in transfer rules and writing a routine in
update rules?

A) If you are using the same InfoSource to update data in more than one data target, it is
better to write the routine in the transfer rules, because one InfoSource can be assigned to
more than one data target, whereas whatever logic you write in the update rules is specific to
one particular data target.

Q) Routine with Return Table.

A) Update rules generally only have one return value. However, you can create a routine in
the tab strip "key figure calculation" by choosing the checkbox "Return table". The
corresponding key figure routine then no longer has a return value but a return table, and you
can generate as many key figure values as you like from one data record.

Q) Start routines?

A) Start routines can be written in both update rules and transfer rules. For example, if you
want to restrict (delete) some records based on conditions before they are loaded into the
data targets, you can specify this in the start routine of the update rules.

Ex: a DELETE DATA_PACKAGE statement deletes records from the package based on the condition, as in the sketch below.
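A hedged one-line sketch of such a deletion in the start routine; the field /BIC/ZZDOCSTAT and the value 'X' are placeholders for the actual condition.

* Start routine: drop unwanted records before they reach the data target.
DELETE data_package WHERE /bic/zzdocstat = 'X'.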

Q) X & Y Tables?

X-table = A table that links the SIDs of the characteristic (for example, material) with the SIDs of its time-independent navigation attributes.

Y-table = A table that links the SIDs of the characteristic with the SIDs of its time-dependent navigation attributes.

There are four types of SID tables:

X - time-independent navigational attribute SID tables

Y - time-dependent navigational attribute SID tables

H - hierarchy SID tables

I - hierarchy structure SID tables


Q) Filters & Restricted Key figures (real time example)

For an SD cube you can have billed quantity, billing value and number of billing documents
as RKFs.

Q) Line-Item Dimension (give me an real time example)

Line-item dimension: invoice number or document number is a real-time example (a high-cardinality characteristic placed in its own dimension).

Q) What does the number in the 'Total' column in Transaction RSA7 mean?

A) The 'Total' column displays the number of LUWs that were written in the delta queue and
that have not yet been confirmed. The number includes the LUWs of the last delta request
(for repetition of a delta request) and the LUWs for the next delta request. A LUW only
disappears from the RSA7 display when it has been transferred to the BW System and a
new delta request has been received from the BW System.

Q) How do I know which table (in SAP BW) contains the technical name / description and
creation date of a particular report (reports created using BEx Analyzer)?

A) While opening a particular query, press the Properties button and you will see all the
details you want; there is no single table that holds everything.

You will find the information about technical names and descriptions of queries in the
following tables: Directory of all reports (table RSRREPDIR) and Directory of the reporting
component elements (table RSZELTDIR); for workbooks and their connections to queries,
check the where-used list for reports in workbooks (table RSRWORKBOOK) and Titles of
Excel workbooks in the InfoCatalog (table RSRWBINDEXT).

Q) What is a LUW in the delta queue?

A) A LUW from the point of view of the delta queue can be an individual document, a group
of documents from a collective run or a whole data packet of an application extractor.

Q) Why does the number in the 'Total' column in the overview screen of Transaction RSA7
differ from the number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also first
question) that were written to the qRFC queue and that have not yet been confirmed. The
detail screen displays the records contained in the LUWs. Both the records belonging to the
previous delta request and the records that do not meet the selection conditions of the
preceding delta init requests are filtered out. Thus, only the records that are ready for the
next delta request are displayed on the detail screen. In the detail screen of Transaction
RSA7, a possibly existing customer exit is not taken into account.

Q) Why does Transaction RSA7 still display LUWs on the overview screen after successful
delta loading?

A) Only when a new delta has been requested does the source system learn that the
previous delta was successfully loaded to the BW System. Then, the LUWs of the previous
delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a
possible delta request repetition. In particular, the number on the overview screen does not
change when the first delta was loaded to the BW System.

Q) Why are selections not taken into account when the delta queue is filled?

A) Filtering according to selections takes place when the system reads from the delta
queue. This is necessary for reasons of performance.

Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been
loaded successfully?

It is most likely that this is a DataSource that does not send delta data to the BW System via
the delta queue but directly via the extractor (delta for master data using ALE change
pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with
BW 2.0B Support Package 11.

Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading
procedure from the delta queue?

A) The impact is limited. If performance problems are related to the loading process from the
delta queue, then refer to the application-specific notes (for example in the CO-PA area, in
the logistics cockpit area and so on).

Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for
the delta queue as for a full update. Please note, however, that LUWs are not split during
data loading for consistency reasons. This means that when very large LUWs are written to
the DeltaQueue, the actual package size may differ considerably from the MAXSIZE and
MAXLINES parameters.

Q) Why does it take so long to display the data in the delta queue (for example
approximately 2 hours)?

A) With Plug In 2001.1 the display was changed: the user has the option of defining the
amount of data to be displayed, to restrict it, to selectively choose the number of a data
record, to make a distinction between the 'actual' delta data and the data intended for
repetition and so on.

Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What
exactly is deleted?

A) You should act with extreme caution when you use the deletion function in the delta
queue. It is comparable to deleting an InitDelta in the BW System and should preferably be
executed there. You do not only delete all data of this DataSource for the affected BW
System, but also lose the entire information concerning the delta initialization. Then you can
only request new deltas after another delta initialization.

When you delete the data, the LUWs kept in the qRFC queue for the corresponding target
system are confirmed. Physical deletion only takes place in the qRFC outbound queue if
there are no more references to the LUWs.

The deletion function is for example intended for a case where the BW System, from which
the delta initialization was originally executed, no longer exists or can no longer be
accessed.

Q) Why does it take so long to delete from the delta queue (for example half a day)?

A) Import PlugIn 2000.2 patch 3. With this patch the performance during deletion is
considerably improved.

Q) Why is the delta queue not updated when you start the V3 update in the logistics cockpit
area?

A) It is most likely that a delta initialization had not yet run or that the delta initialization was
not successful. A successful delta initialization (the corresponding request must have QM
status 'green' in the BW System) is a prerequisite for the application data being written in the
delta queue.

Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name
must be used for selection on the initial screen of the qRFC monitor. This is made up of the
prefix 'BW', the client and the short name of the DataSource. For DataSources whose names
are 19 characters long or shorter, the short name corresponds to the name of the
DataSource. For DataSources whose names are longer than 19 characters (for delta-capable
DataSources only possible as of PlugIn 2001.1) the short name is assigned in table
ROOSSHORTN.

In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover,
the data of a LUW is displayed in an unstructured manner there.

Q) Why is there data in the delta queue although the V3 update was not started?

A) Data was posted in the background. In that case, the records are updated directly in the
delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS).
There is no duplicate transfer of records to the BW system. See Note 417189.

Q) Why does button 'Repeatable' on the RSA7 data details screen not only show data
loaded into BW during the last delta but also data that were newly added, i.e. 'pure' delta
records?

A) The function was programmed so that a request in repeat mode fetches both the actually
repeatable (old) data and the new data from the source system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?

A) For delta, all selections made via delta inits are summed up. This means, a delta for the
'total' of all delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?

A) With simple selections (intervals without complicated join conditions or single values),
you can make up to about 100 delta inits. It should not be more.

With complicated selection conditions, it should be only up to 10-20 delta inits.

Reason: With many selection conditions that are joined in a complicated way, too many
'where' lines are generated in the generated ABAP source code that may exceed the
memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will happen with my
delta? Should I initialize again after that?

A) Before you copy a source client or source system, make sure that your deltas have been
fetched from the DeltaQueue into BW and that no delta is pending. After the client copy, an
inconsistency might occur between BW delta tables and the OLTP delta tables as described
in Note 405943. After the client copy, Table ROOSPRMSC will probably be empty in the
OLTP since this table is client-independent. After the system copy, the table will contain the
entries with the old logical system name that are no longer useful for further delta loading
from the new logical system. The delta must be initialized in any case since delta depends
on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X'
occurs in BW when editing or creating an InfoPackage, you should expect that the delta
has to be initialized after the copy.

Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?

A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues
only after informing the BW Support or only if this is explicitly requested in a note for
component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Although the delta request is started after completion of the collective run (V3 update),
it does not contain all documents. Only another delta request loads the missing documents
into BW. What is the cause of this "splitting"?

A) The collective run submits the open V2 documents for processing to the task handler,
which processes them in one or several parallel update processes in an asynchronous way.
For this reason, plan a sufficiently large "safety time window" between the end of the
collective run in the source system and the start of the delta request in BW. An alternative
solution where this problem does not occur is described in Note 505700.

Q) Why are LUWs still written into the DeltaQueue despite my deleting the delta init?

A) In general, delta initializations and deletions of delta inits should always be carried out at
a time when no posting takes place. Otherwise, buffer problems may occur: If a user started
the internal mode at a time when the delta initialization was still active, he/she posts data
into the queue even though the initialization had been deleted in the meantime. This is the
case in your system.

Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT,
some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'.
What do these statuses mean? Which values in the field 'Status' mean what and
which values are correct and which are alarming? Are the statuses BW-specific or generally
valid in qRFC?

A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was
read once either in a delta request or in a repetition of the delta request. However, this does
not mean that the record has successfully reached the BW yet. The status READY in the
TRFCQOUT and RECORDED in the ARFCSSTATE means that the record has been written
into the DeltaQueue and will be loaded into the BW with the next delta request or a
repetition of a delta. In any case only the statuses READ, READY and RECORDED in both
tables are considered to be valid. The status EXECUTED in TRFCQOUT can occur
temporarily. It is set before starting a DeltaExtraction for all records with status READ
present at that time. The records with status EXECUTED are usually deleted from the queue
in packages within a delta request directly after setting the status before extracting a
new delta. If you see such records, it means that either a process which is confirming and
deleting records which have been loaded into the BW is successfully running at the moment,
or, if the records remain in the table for a longer period of time with status EXECUTED, it is
likely that there are problems with deleting the records which have already been
successfully loaded into the BW. In this state, no more deltas are loaded into the BW.
Every other status is an indicator for an error or an inconsistency. NOSEND in SMQ1 means
nothing (see note 378903).

The value 'U' in field 'NOSEND' of table TRFCQOUT is of no concern.

Q) The extract structure was changed when the DeltaQueue was empty. Afterwards new
delta records were written to the DeltaQueue. When loading the delta into the PSA, it shows
that some fields were moved. The same result occurs when the contents of the DeltaQueue
are listed via the detail display. Why are the data displayed differently? What can be done?

Make sure that the change of the extract structure is also reflected in the database and that
all servers are synchronized. We recommend resetting the buffers using Transaction $SYNC.
If the extract structure change is not communicated synchronously to the server where delta
records are being created, the records are written with the old structure until the new
structure has been generated. This may have disastrous consequences for the delta.

When the problem occurs, the delta needs to be re-initialized.

Q) How and where can I control whether a repeat delta is requested?

A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next
load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the
request in the monitor to red manually. For the contents of the repeat see Question 14.
Delta requests set to red even though the data has already been updated lead to duplicate
records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.
Q) As of PI 2003.1, the Logistic Cockpit offers various types of update methods. Which
update method is recommended in logistics? According to which criteria should the decision
be made? How can I choose an update method in logistics?

See the recommendation in Note 505700.

Q) Are there particular recommendations regarding the data volume the DeltaQueue may
grow to without facing the danger of a read failure due to memory problems?

A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT
counter in the LUW management table - which is of no practical importance, however - or
the restrictions regarding the volume and number of records in a database table).

When estimating "smooth" limits, both the number of LUWs is important and the average
data volume per LUW. As a rule, we recommend to bundle data (usually documents)
already when writing to the DeltaQueue to keep number of LUWs small (partly this can be
set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW
should not be considerably larger than 10% of the memory available to the work process for
data extraction (in a 32-bit architecture with a memory volume of about 1GByte per work
process, 100 Mbytes per LUW should not be exceeded). That limit is of rather small
practical importance as well since a comparable limit already applies when writing to
the DeltaQueue. If the limit is observed, correct reading is guaranteed in most cases.

If the number of LUWs cannot be reduced by bundling application transactions, you should
at least make sure that the data are fetched from all connected BWs as quickly as possible.
But for other, BW-specific, reasons, the frequency should not be higher than one
DeltaRequest per hour.

To avoid memory problems, a program-internal limit ensures that never more than 1 million
LUWs are read and fetched from the database per DeltaRequest. If this limit is reached
within a request, the DeltaQueue must be emptied by several successive DeltaRequests.
We recommend, however, trying not to reach that limit, but rather triggering the fetching of data
from the connected BWs already when the number of LUWs reaches a five-digit value.
Q) WE USE SBWNN, SBIW1 AND SBIW2 FOR DELTA UPDATE IN LIS; WHAT IS THE
PROCEDURE IN THE LO COCKPIT?
There is no LIS in the LO Cockpit. We have DataSources that can be maintained (fields
appended). Refer to the white paper on LO Cockpit extraction.

Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?

A) Initially we don't delete the setup tables, but we do so when we change the extract
structure. Changing the extract structure means that new fields have been added that were
not there before, so to get the required data and avoid redundancy we delete and then refill
the setup tables.
This also refreshes the statistical data. The extraction setup reads the dataset that you want
to process (for example customer orders, with tables such as VBAK and VBAP) and fills the
relevant communication structure with the data. The data is stored in cluster tables, from
where it is read when the initialization is run. It is important that during the initialization
phase no one creates or modifies application data, at least until the setup tables have been filled.
