Direct Delta:
=========
With direct delta update mode, the extraction data is transferred directly to the BW Delta
Queue with each document posting.
Here each posted document is converted into one LUW.
_______________________________________________________________
Queued Delta:
==========
With queued delta update mode, the extraction data is written to an extraction queue and is
transferred to the BW Delta Queue by an update collective run.
Up to 10,000 document deltas/changes are cumulated into one LUW per DataSource in the
BW Delta Queue.
Unserialized V3 Update:
==================
The extraction data continues to be written to the update tables using a V3 update module
and is then read and processed by a collective update run scheduled through LBWE.
The collective update run reads the data without taking the sequence number into account.
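The LUW grouping in queued delta can be pictured with a small sketch. This is plain Python, not SAP code; the function name and data layout are invented purely to illustrate how queued changes could be cumulated into LUWs of at most 10,000 records per DataSource.

```python
# Illustrative sketch only (not SAP code): cumulate queued document
# changes into LUWs of at most 10,000 records each.
def cumulate_into_luws(changes, max_per_luw=10000):
    """Group a list of document changes into LUW-sized chunks."""
    return [changes[i:i + max_per_luw]
            for i in range(0, len(changes), max_per_luw)]

queue = list(range(25000))        # 25,000 queued document changes
luws = cumulate_into_luws(queue)  # 3 LUWs: 10,000 + 10,000 + 5,000
```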
Generic Extraction
Log on to SAP R/3.
Step 1: Create a table or view for generic extraction in SE11.
Step 2: Go to transaction RSO2.
Step 3: Decide whether to extract transaction data, master data attributes, or texts.
Step 4: If you opt for transaction data, enter a name in the column, e.g. ZTD_M (the
DataSource name).
Step 5: Choose the Create button; this takes you to another screen.
Step 6: Decide which application component you are extracting data from, e.g. SD, MM.
Step 7: Fill in the short, medium, and long descriptions (these are mandatory).
Step 8: Enter the table or view name that you created in SE11.
Step 9: If you want to maintain a generic delta, choose Generic Delta in the top left-hand
corner.
Step 10: In the next screen, specify the field on which the generic delta is based.
Step 11: Specify whether this field is a time stamp, calendar day, or numeric pointer,
depending on your requirement.
Step 12: Specify whether the delta delivers a new status for changed records or an additive
delta.
If you choose additive delta, you can load the data to an InfoCube or ODS object.
If you choose new status for changed records, you can load the data to an ODS object only.
Step 13: Save it.
Step 14: Log on to SAP BW, replicate the DataSource, and proceed as usual.
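The difference between the two delta record modes in step 12 can be sketched as follows. This is plain Python, not SAP/ABAP code, and the function names are invented for illustration: an additive delta sends only the difference, so a summation-based target (an InfoCube) stays correct, while "new status for changed records" sends the full after-image, which must overwrite the old value by key, something only an ODS object's key-based update can do.

```python
# Illustrative sketch only (not SAP code) of the two delta record modes.
def apply_additive(target, delta):
    # InfoCube-style update: key figures are summed up
    for key, diff in delta.items():
        target[key] = target.get(key, 0) + diff

def apply_after_image(target, delta):
    # ODS-style update: the new record overwrites the old one by key
    target.update(delta)

cube = {"DOC1": 100}
apply_additive(cube, {"DOC1": -20})    # additive delta: send the change (-20)
ods = {"DOC1": 100}
apply_after_image(ods, {"DOC1": 80})   # after-image: send the new value (80)
# Both targets end up with DOC1 = 80, but via different delta records.
```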
As a precaution, copy all transformation routine code to separate documents before
migrating, because after the migration all ABAP code is shifted to object-oriented (OO) code.
Make sure that the DataSource is migrated last.
1. First copy the InfoProvider and make another copy with a Z name (e.g. the original
name 0FIGL_O02 becomes ZFIGL_O02). Give it the Z name and also put it in a Z InfoArea.
2. Then migrate the update rules with a Z InfoSource. Keep a separate copy of the
transformation routine; it will be required later on. For example:
PROGRAM UPDATE_ROUTINE.
*$*$ begin of global - insert your declaration only below this line *-*
* TABLES: ...
* DATA: ...
*$*$ end of global - insert your declaration only before this line *-*
FORM compute_data_field
  TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
  USING    COMM_STRUCTURE LIKE /BIC/CS80FIAR_O03
           RECORD_NO LIKE SY-TABIX
           RECORD_ALL LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING RESULT LIKE /BI0/V0FIAR_C03T-NETTAKEN
           RETURNCODE LIKE SY-SUBRC
           ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
*
*$*$ begin of routine - insert your code only below this line *-*
* fill the internal table "MONITOR" to make monitor entries
* result value of the routine
  IF COMM_STRUCTURE-FI_DOCSTAT EQ 'C'.
    RESULT = COMM_STRUCTURE-CLEAR_DATE - COMM_STRUCTURE-NETDUEDATE.
  ELSE.
    RESULT = 0.
  ENDIF.
* if the returncode is not equal zero, the result will not be updated
  RETURNCODE = 0.
* if abort is not equal zero, the update process will be canceled
  ABORT = 0.
*$*$ end of routine - insert your code only before this line *-*
*
ENDFORM.
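What this routine computes can be restated in plain Python (not ABAP) for clarity: the number of days between the net due date and the clearing date, but only for cleared documents (FI_DOCSTAT = 'C'); for all other documents the result is 0. The function name is invented for illustration.

```python
# Plain-Python restatement (not ABAP) of the routine's logic.
from datetime import date

def net_days_taken(doc_status, clear_date, net_due_date):
    if doc_status == "C":                  # cleared document
        return (clear_date - net_due_date).days
    return 0

days = net_days_taken("C", date(2024, 3, 15), date(2024, 3, 1))  # -> 14
```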
Then use the "Copy InfoSource 3.x -> New InfoSource" option to make a new copy of the
InfoSource, and give that InfoSource a Z name.
3. Map and activate (most of the fields are mapped automatically).
4. Right-click the transfer rules and choose Additional Functions -> Create Transformation.
Assign the newly created InfoSource with the "Use available InfoSource" option.
5. Map and activate.
6. Your migration is now complete; just review the routine code.
Some Tips:
Do not use:
DATA: BEGIN OF itab OCCURS n,
        fields...,
      END OF itab.
Replace it with:
TYPES: BEGIN OF line_type,
         fields...,
       END OF line_type.
DATA itab TYPE TABLE OF line_type INITIAL SIZE n.
Internal tables with header lines are not allowed. The header line of an internal table is a
default work area that the system uses implicitly when looping over the table.
Short forms of internal table line operations are not allowed. For example, you cannot use
the syntax INSERT TABLE itab; however, you can use INSERT wa INTO TABLE itab.
Transformations do not permit READ statements in which the system reads values from
header lines. For example, the code READ TABLE itab. is now obsolete, but you can use
READ TABLE itab WITH KEY . . . INTO wa.
Calling external subroutines using the syntax PERFORM FORM(PROG) is not allowed.
In this example, FORM is a subroutine in the program PROG.
Interview Questions
The following are some of the best interview questions for SAP BI job seekers.
How can I compare data in R/3 with data in a BW Cube after the daily delta
loads? Are there any standard procedures for checking them or matching the
number of records?
Go to transaction RSA3 in R/3 and run the extractor. It will give you the number of records
extracted. Then go to the BW monitor to check the number of records in the PSA and see
whether it is the same.
RSA3 is a simple extractor checker program that allows you to rule out extract problems in
R/3. It is simple to use, but only really tells you if the extractor works. Since records that get
updated into Cubes/ODS structures are controlled by Update Rules, you will not be able to
determine what is in the Cube compared to what is in the R/3 environment. You will need to
compare records on a 1:1 basis against records in R/3 transactions for the functional area in
question. I would recommend enlisting the help of the end user community to assist since
they presumably know the data.
To use RSA3, go to the transaction and enter the extractor, e.g. 2LIS_02_HDR. Click Execute
and you will see the record count; you can also display the data. You are not modifying
anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you
how many records should be expected in BW for a given load. You have that information in
the monitor RSMO during and after data loads. From RSMO for a given load you can
determine how many records were passed through the transfer rules from R/3, how many
targets were updated, and how many records passed through the Update Rules. It also gives
you error messages from the PSA.
The extractor collects the data from R/3 and stores it within the delta queue in R/3 until the
next delta extraction. The DataSource reads the delta queue data and pushes it to BW.
How do we see the data that is collected and stored in the delta queue?
Go to transaction RSA7 in R/3 to view the contents of the delta queue. Note that only
extractors that have been successfully initialized will show up in this queue.
What is compression?
It is the process of aggregating InfoCube data across requests and deleting the request IDs,
which saves space.
What is Rollup?
This is used to load new data packages (requests) into the InfoCube aggregates. If we have
not performed a rollup, the new InfoCube data will not be available while reporting on
the aggregate.
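Why compression saves space can be sketched with a toy model. This is plain Python, not SAP code, and the table layout is invented for illustration: compression aggregates the request-wise rows of the F fact table into the E fact table with the request ID dropped, so rows that differ only in their request ID collapse into one.

```python
# Illustrative sketch only (not SAP code) of InfoCube compression.
from collections import defaultdict

f_table = [
    {"request": "REQU_1", "material": "M1", "qty": 10},
    {"request": "REQU_2", "material": "M1", "qty": 5},
    {"request": "REQU_2", "material": "M2", "qty": 7},
]

def compress(f_rows):
    e_table = defaultdict(int)
    for row in f_rows:
        e_table[row["material"]] += row["qty"]   # request ID dropped
    return dict(e_table)

e_table = compress(f_table)
# The two M1 rows collapse into one: {"M1": 15, "M2": 7}
```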
What is the difference between display attributes and navigational attributes?
A display attribute is one which is used only for display purposes in the report, whereas a
navigational attribute is used for drilling down in the report. We don't need to maintain a
navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.
Process Chains
1.) Transaction RSPC
RSPC is the central transaction for all your process chain maintenance. Here you find on the
left the existing process chains sorted by “application components”. The default mode is the
planning view. There are two other views available: the check view and the protocol view.
2.) Create a new process chain
To create a new process chain, press the “Create” icon in the planning view.
In the following pop-up window you have to enter a technical name and a description of
your new process chain.
The technical name can be up to 20 characters long. Usually it starts with a Z or Y; see
your project's internal naming conventions.
3.) Define a start process
After entering a process chain name and description, a new window pops up. You are asked
to define a start variant.
That's the first step in your process chain! Every process chain has one and only one
starting step. A new step of type “Start process” will be added. To be able to define a unique
start process for your chain you have to create a start variant. You have to do the same for
each of the subsequent steps: first drag a process type onto the design window, then
define a variant for this type to create a process step. The formula is:
Process Type + Process Variant = Process Step!
If you save your chain, the process chain name is saved into table RSPCCHAIN. The
process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a
modified version. So press the “Create” button, and a new pop-up appears:
Here you define a technical name for the start variant and a description. In the next step
you define when the process chain will start. You can choose between direct scheduling and
starting using a meta chain or API. With direct scheduling you can define either to start
immediately upon activating and scheduling, or at a defined point in time, as you know it
from job scheduling in any SAP system. With “start using meta chain or API” you are able to
start this chain as a subchain or from an external application via the function module
RSPC_API_CHAIN_START. Press Enter, choose an existing transport request or
create a new one, and you have successfully created the first step of your chain.
4.) Add a loading step
Once you have defined the starting point for your chain, you can now add a loading step for
loading master data or transaction data. For all of this data choose “Execute infopackage”
from all available process types. See the picture below:
You can easily move this step with drag & drop from the left side into your design window.
A new pop-up window appears, where you can choose which infopackage you want to use.
You can't create a new one here. Press F4 help and a new window will pop up with all
available infopackages sorted by use. At the top are the infopackages used in this
process chain, followed by all other available infopackages not used in the process chain.
Choose one and confirm. This step will now be added to your process chain. Your chain
should now look like this:
How do you connect these two steps? One way is to right-click the first step and choose
Connect with -> Load Data, and then the infopackage you want to be the successor.
Another possibility is to select the starting point and keep the left mouse button pressed.
Then move the mouse down to your target step; an arrow should follow your movement.
Release the mouse button and a new connection is created. From the start process to every
second step it's a black line.
5.) Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see
above) choose “Data Transfer Process” and drag & drop it onto the design window. You
will be asked for a variant for this step. Again, as with infopackages, press F4 help and
choose from the list of available DTPs the one you want to execute. Confirm your choice and
a new step for the DTP is added to your chain. Now you have to connect this step with one of
its possible predecessors. As described above, choose the context menu and Connect with ->
Data Transfer Process. This time a new pop-up window appears.
Here you can choose whether this successor step shall be executed only if the predecessor
was successful, only if it ended with errors, or always, regardless of the outcome. With this
connection type you can control the behaviour of your chain in case of errors.
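The three connection types behave like a simple condition on the predecessor's outcome. The sketch below is plain Python, not SAP code, with invented names, purely to make the decision rule concrete.

```python
# Illustrative sketch only (not SAP code) of process chain link conditions:
# run the successor only on success, only on errors, or always.
def successor_runs(link_condition, predecessor_status):
    if link_condition == "always":
        return True
    return link_condition == predecessor_status

# After a failed predecessor, only "errors" and "always" links fire:
runs = [successor_runs(c, "errors")
        for c in ("successful", "errors", "always")]
# -> [False, True, True]
```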
Whether a step ends successfully or with errors is defined in the process type itself. To see
the settings for each step you can go to Settings -> Maintain Process Types in the menu. In
this window you see all defined (standard and custom) process types. Choose Data Transfer
Process and display its details in the menu. In the new window you can see:
A DTP can have the possible events "Process ends successful" or "Process ends incorrect",
has the ID @VK@ (which stands for its icon), and appears under category 10, which is "Load
process and post-processing". Your process chain can now look like this:
You can now add all the other steps necessary. By default, the process chain itself suggests
successors and predecessors for each step. For loading transaction data with an infopackage
it usually adds steps for deleting and creating the indexes on a cube. You can switch off this
behaviour in the menu under “Settings -> Default Chains”: in the pop-up choose “Do not
suggest Process” and confirm.
On the left side you see when the chain was executed and how it ended. On the right side
you see for every step whether it ended successfully or not. As you can see, the first two
steps were successful and the step “Load Data” of an infopackage failed. You can now check
the reason via the context menu entries “Display messages” or “Process monitor”. “Display
messages” displays the job log of the background job and the messages created by the
request monitor. With “Process monitor” you get to the request monitor and see detailed
information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and
RSPCPROCESSLOG.
To expose and fill a new field in the DataSource:
Go to RSA6, change the DataSource, uncheck the Hide flag for the new field, and save it.
Next go to CMOD, create a project, and then double-click on ZXRSAU02; here you can write
the ABAP code that fills the field.