
Delta Differences / Advantages & Disadvantages

Direct Delta:
=========
With each document posting, the extraction data is transferred directly to the BW Delta
Queue.
Each document posted is converted into one LUW.

Direct Delta Advantages:


We don't need to schedule a job at regular intervals (LBWE – Job Control) in order to
transfer the data to the BW Delta Queue.
Serialization of the documents is ensured by using the enqueue concept.

Direct Delta Disadvantages:


Different document changes are not summarized into one LUW in the BW Delta Queue.
Hence the number of LUWs per datasource in the BW delta queue increases significantly.

Direct Delta Recommended For:


Customers with a low occurrence of documents (at most 10,000 document changes between
two delta extractions). The reason for this is that a large number of LUWs can cause a dump
during the extraction process.

_______________________________________________________________

Queued Delta:
==========
With queued delta update mode, the extraction data is written to an extraction queue and is
transferred to the BW Delta Queue by a collective update run.
Up to 10,000 document deltas/changes are cumulated into one LUW per DataSource in the
BW Delta Queue.

Queued Delta - Advantages:


Serialization is ensured using the enqueue concept.
Works well when the occurrence of documents is high.

Queued Delta - Disadvantages:


In this case we would need to schedule a job (LBWE – Job Control) to regularly transfer the
data to the BW Delta Queue.
In addition, monitoring of the extraction queue is required.

Queued Delta Recommended For:


Customers with a high occurrence of documents (more than 10,000 document changes
between delta extractions).
_______________________________________________________________

Unserialized V3 Update:
==================
The extraction data continues to be written to the update tables using a V3 update module
and then is read and processed by a collective update run through LBWE.
Data is read in the update collective run without taking the sequence number into account.

Unserialized V3 Update - Advantages:


Works well when the occurrence of documents is high.

Unserialized V3 Update - Disadvantages:


Serialization is not ensured.
In this case we would need to schedule a job (LBWE – Job Control) to regularly transfer the
data.
In addition, monitoring of the extraction queue is required.

Unserialized V3 Update Recommended For:


Only if it is irrelevant whether or not the extraction data is transferred to BW in exactly the
same sequence (serialization) in which it was generated in R/3. Basically, the functional
data flow must not require a correct temporal sequence.


Generic Extraction
Log on to SAP R/3.
Step 1: Create a table or view for the generic extraction in SE11.
Step 2: Go to transaction RSO2.
Step 3: Decide whether you want to extract transaction data, master data attributes,
or texts.
Step 4: If you opt for transaction data, enter a name in the corresponding column,
e.g. ZTD_M (the DataSource name).
Step 5: Press the Create button; this takes you to another screen.
Step 6: Decide from which application component you are extracting data, e.g. SD, MM, ...
Step 7: On this screen, fill in the short, medium, and long descriptions (these are mandatory).
Step 8: Enter the table name or view name that you created in SE11.
Step 9: If you want to maintain a generic delta, select Generic Delta in the top left-hand
corner.
Step 10: On the next screen, specify the field on which the delta is based.
Step 11: Specify whether the delta uses a time stamp, a calendar day, or a numeric pointer,
depending on your requirement.
Step 12: Specify either "new status for changed records" or "additive delta".
If you choose additive delta, you can load the data to an InfoCube or an ODS object.
If you choose new status for changed records, you can load the data to an ODS object only.
Step 13: Save it.
Step 14: Log on to SAP BW, replicate the DataSource, and proceed as usual.


Migration of BI 3.5 Modeling to BI 7.x


This post gives you a step-by-step guide to migrating 3.5 modeling to the new BI 7.x. We
have to migrate all the modeling to BI 7.x because the Business Content provides the 3.x
modeling with transfer rules, update rules, InfoSources, and the old (3.5) DataSources.

The following are the prerequisites for the migration.

Copy the InfoProvider to a Z InfoProvider.

Copy all the transformation routine code to other documents to be on the safe side, because
after the migration all the ABAP code is shifted to object-oriented code.

Please make sure that the DataSource is migrated last.

The step-by-step guide follows.

1. First copy the InfoProvider and make another copy with a Z name (e.g. the original
name 0FIGL_O02 becomes ZFIGL_O02).

Give it the Z name and also put it in the Z InfoArea.

2. Then migrate the update rules with a Z InfoSource first. Make a separate copy of the
transformation routine; it will be required later on. For example:

Interest Calculation Numerator Days 1 (Agreed) KF

PROGRAM UPDATE_ROUTINE.
*$*$ begin of global - insert your declaration only below this line *-*
* TABLES: ...
* DATA: ...
*$*$ end of global - insert your declaration only before this line *-*
FORM compute_data_field
  TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
  USING    COMM_STRUCTURE LIKE /BIC/CS80FIAR_O03
           RECORD_NO LIKE SY-TABIX
           RECORD_ALL LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING RESULT LIKE /BI0/V0FIAR_C03T-NETTAKEN
           RETURNCODE LIKE SY-SUBRC
           ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
*
*$*$ begin of routine - insert your code only below this line *-*
* fill the internal table "MONITOR" to make monitor entries
* result value of the routine
  IF COMM_STRUCTURE-FI_DOCSTAT EQ 'C'.
    RESULT = COMM_STRUCTURE-CLEAR_DATE - COMM_STRUCTURE-NETDUEDATE.
  ELSE.
    RESULT = 0.
  ENDIF.
* if the return code is not equal to zero, the result will not be updated
  RETURNCODE = 0.
* if abort is not equal to zero, the update process will be cancelled
  ABORT = 0.
*$*$ end of routine - insert your code only before this line *-*
*
ENDFORM.

Then right-click the update rules and choose Additional Functions -> Create Transformation.

Then use the "Copy InfoSource 3.x to New InfoSource" option to make a new copy of the
InfoSource.

Then give a Z name for that InfoSource; this will help you make the new copy of the
InfoSource.

3. Then map and activate (most of the fields are mapped automatically).

4. Then right-click the transfer rules and choose Additional Functions -> Create
Transformation.

5. Assign the newly created InfoSource with the "Use available InfoSource" option.
6. Then map and activate.

7. Then right-click the DataSource, click Migrate, and click With Export.

Please select only "With Export".

8. Your migration is now complete; just review the routine code (see the sketch below).
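
For comparison, here is a rough sketch of how the body of the routine above typically looks
after migration to a BI 7.x rule routine. All names below are illustrative, not
system-generated: the transformation generates its own class and method, with
SOURCE_FIELDS and RESULT as the usual routine parameters, so compare against your
generated code rather than copying this verbatim.

* Rule routine body inside the generated transformation class
* (illustrative sketch; the 3.x logic above carries over 1:1).
METHOD compute_interest_days.
  IF source_fields-fi_docstat = 'C'.
    result = source_fields-clear_date - source_fields-netduedate.
  ELSE.
    result = 0.
  ENDIF.
ENDMETHOD.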

Some tips:

Do not use:
DATA: BEGIN OF itab OCCURS n,
        fields...,
      END OF itab.

Replace it with:
TYPES: BEGIN OF line_type,
         fields...,
       END OF line_type.
DATA itab TYPE TABLE OF line_type INITIAL SIZE n.

Internal tables with header lines are not allowed. The header line of an internal table is a
default work area that the system uses when looping through the internal table.

Short forms of internal table line operations are not allowed. For example, you cannot use
the syntax INSERT TABLE itab; however, you can use INSERT wa INTO TABLE itab.

Transformations do not permit READ itab statements in which the system reads values into
the header line. For example, READ TABLE itab. is now obsolete, but you can use
READ TABLE itab WITH KEY ... INTO wa.

Calling external subroutines using the syntax PERFORM FORM(PROG) is not allowed.
In this example, FORM is a subroutine in the program PROG.
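
Putting these rules together, here is a minimal, header-line-free sketch; the table and field
names are made up for illustration:

* Work-area based internal table handling, as required in transformations.
TYPES: BEGIN OF line_type,
         docno  TYPE i,
         amount TYPE p DECIMALS 2,
       END OF line_type.

DATA: itab TYPE STANDARD TABLE OF line_type,  " no header line
      wa   TYPE line_type.

wa-docno  = 1.
wa-amount = '100.00'.
INSERT wa INTO TABLE itab.                   " instead of INSERT TABLE itab

READ TABLE itab INTO wa WITH KEY docno = 1.  " instead of READ TABLE itab
IF sy-subrc = 0.
  wa-amount = wa-amount + 10.
  MODIFY itab FROM wa INDEX sy-tabix.        " update the row just read
ENDIF.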


Interview Questions
The following are some of the best interview questions for SAP BI job seekers.
How can I compare data in R/3 with data in a BW Cube after the daily delta
loads? Are there any standard procedures for checking them or matching the
number of records?

Go to transaction RSA3 in R/3 and run the extractor. It will give you the number of records
extracted. Then go to the BW monitor and check the number of records in the PSA to see
whether it is the same.
RSA3 is a simple extractor checker program that allows you to rule out extract problems in
R/3. It is simple to use, but only really tells you if the extractor works. Since records that get
updated into Cubes/ODS structures are controlled by Update Rules, you will not be able to
determine what is in the Cube compared to what is in the R/3 environment. You will need to
compare records on a 1:1 basis against records in R/3 transactions for the functional area in
question. I would recommend enlisting the help of the end user community to assist since
they presumably know the data.

To use RSA3, go to the transaction and enter the extractor, e.g. 2LIS_02_HDR. Click Execute
and you will see the record count; you can also display the data. You are not modifying anything, so
what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you
how many records should be expected in BW for a given load. You have that information in
the monitor RSMO during and after data loads. From RSMO for a given load you can
determine how many records were passed through the transfer rules from R/3, how many
targets were updated, and how many records passed through the Update Rules. It also gives
you error messages from the PSA.

How can I copy queries from one InfoCube to another InfoCube in Business Explorer?

By using transaction RSZC in the BW system.

What is the function of the delta queue?

It collects the data from R/3 and stores it within the delta queue in R/3 until the next delta
extraction. The DataSource reads the delta queue data and pushes it to BW.

How do we see the data that is collected and stored in the delta queue?

Go to transaction RSA7 in R/3 to view the contents of the delta queue. Note that only
extractors that have been successfully initialized will show up in this queue.

What is the difference between V1, V2 and V3 updates?


>> V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the
same time as the document update (in the application tables).
>> V2 Update: It is an Asynchronous update. Statistics update and the Document update
take place as different tasks.
V1 & V2 don’t need scheduling.
>> Serialized V3 Update: The V3 collective update must be scheduled as a job (via
LBWE). Here, document data is collected in the order it was created and transferred to BW
as a batch job. However, the transfer sequence may not be the same as the order in which
the data was created in all scenarios. The V3 update only processes the update data that
was successfully processed with the V2 update.

What is compression?

It is the process of collapsing the requests in an InfoCube: the request IDs are deleted and
the data is moved from the F fact table to the E fact table, which saves space and improves
query performance.

What is Rollup?

This is used to load new data packages (requests) into the InfoCube aggregates. If we have
not performed a rollup, the new InfoCube data will not be available for reporting on the
aggregate.

Difference between display attributes and navigational attributes?

A display attribute is used only for display purposes in a report, whereas a navigational
attribute is used for drilling down in a report. We do not need to maintain the navigational
attribute in the cube as a characteristic in order to drill down (that is the advantage).


Process chain creation


I want to continue my series for beginners new to SAP BI. In this blog I write down the
necessary steps to create a process chain that loads data with an InfoPackage and a DTP,
and to activate and schedule this chain.

1.) Call transaction RSPC

RSPC is the central transaction for all your process chain maintenance. Here you find, on
the left, the existing process chains sorted by "application components". The default mode is
the planning view. Two other views are available: the check view and the protocol (log) view.
2.) Create a new process chain
To create a new process chain, press the "Create" icon in the planning view.
In the following pop-up window you have to enter a technical name and a description of
your new process chain.

The technical name can be up to 20 characters long. Usually it starts with a Z or Y; see
your project's internal naming conventions.
3.) Define a start process
After entering a process chain name and description, a new window pops up. You are asked
to define a start variant.

That's the first step in your process chain! Every process chain has exactly one start step. A
new step of type "Start process" will be added. To define a unique start process for your
chain, you have to create a start variant. You follow the same pattern for every subsequent
step: first drag a process type onto the design window, then define a variant for this type,
and you have created a process step. The formula is:
Process Type + Process Variant = Process Step!
If you save your chain, the process chain name is saved in table RSPCCHAIN. The process
chain definition with its steps is stored in table RSPCPROCESSCHAIN as a modified
version. So press the "Create" button and a new pop-up appears:

Here you define a technical name for the start variant and a description. In the next step
you define when the process chain will start. You can choose between direct scheduling and
starting via a meta chain or API. With direct scheduling you can define either to start
immediately upon activating and scheduling or at a defined point in time, as you know it
from job scheduling in any SAP system. With "start using meta chain or API" you are able
to start this chain as a subchain or from an external application via the function module
RSPC_API_CHAIN_START. Press Enter, choose an existing transport request or create a
new one, and you have successfully created the first step of your chain.
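
As a small illustration, the call below starts a chain from an ABAP report via this function
module. The chain name ZMY_CHAIN is made up, and the parameter names beyond
I_CHAIN are assumptions, so check the interface of RSPC_API_CHAIN_START in SE37
before using it:

REPORT zstart_chain.

* Log ID returned by the chain start (type name assumed; check SE11).
DATA lv_logid TYPE rspc_logid.

* Start the hypothetical process chain ZMY_CHAIN via the API.
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZMY_CHAIN'
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Chain started, log ID:', lv_logid.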
4.) Add a loading step
If you have defined the starting point for your chain, you can now add a loading step for
loading master data or transaction data. For both, choose "Execute InfoPackage" from the
available process types. See the picture below:

You can easily move this step with drag & drop from the left side into your design window.
A new pop-up window appears where you can choose which InfoPackage you want to use.
You can't create a new one here. Press F4 help and a new window pops up with all available
InfoPackages sorted by use: at the top are the InfoPackages used in this process chain,
followed by all other available InfoPackages not used in the process chain. Choose one and
confirm. This step will now be added to your process chain. Your chain should now look
like this:

How do you connect these two steps? One way is to right-click the first step and choose
Connect with -> Load Data, and then the InfoPackage you want to be the successor.

Another possibility is to select the starting point and keep the left mouse button pressed.
Then move the mouse down to your target step; an arrow will follow your movement.
Release the mouse button and a new connection is created. From the start process to every
second step it is a black line.
5.) Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see
above) choose "Data Transfer Process" and drag & drop it onto the design window. You will
be asked for a variant for this step. Again, as with InfoPackages, press F4 help and choose
from the list of available DTPs the one you want to execute. Confirm your choice and a new
step for the DTP is added to your chain. Now you have to connect this step with one of its
possible predecessors. As described above, use the context menu and Connect with ->
Data Transfer Process. This time a new pop-up window appears.

Here you can choose whether this successor step shall be executed only if the predecessor
was successful, only if it ended with errors, or always, regardless of the outcome. With this
connection type you can control the behaviour of your chain in case of errors.
Whether a step ends successfully or with errors is defined in the process step itself. To see
the settings for each step, go to Settings -> Maintain Process Types in the menu. In this
window you see all defined (standard and custom) process types. Choose Data Transfer
Process and display the details via the menu. In the new window you can see the following:

The DTP has the possible events "Process ends successful" and "Process ends incorrect",
has the ID @VK@ (which refers to its icon), and appears under category 10, which is "Load
process and post-processing". Your process chain can now look like this:
You can now add all the other necessary steps. By default, the process chain itself suggests
successors and predecessors for each step. For loading transaction data with an
InfoPackage, it usually adds steps for deleting and creating the indexes on a cube. You can
switch off this behaviour in the menu under "Settings -> Default Chains": in the pop-up
choose "Do not suggest process" and confirm.

Then you have to add all necessary steps yourself.


6.) Check chain
Now you can check your chain via the menu "Goto -> Checking View" or by pressing the
"Check" button. The chain will be checked to ensure that all steps are connected and have
at least one predecessor. Logical errors are not detected; that is your responsibility. If the
check returns warnings or is OK, you can activate the chain. If the check returns errors,
you have to remove them first.

7.) Activate chain


After a successful check you can activate your process chain. In this step the entries in
table RSPCPROCESSCHAIN are converted into an active version. You can activate your
chain via the menu "Process chain -> Activate" or by pressing the activation button in the
toolbar. You will find your new chain under the application component "Not assigned". To
assign it to another application component you have to change it: choose the "application
component" button in the change mode of the chain, save, and reactivate it. Then refresh
the application component hierarchy. Your process chain will now appear under the new
application component.

8.) Schedule chain


After successful activation you can now schedule your chain. Press the "Schedule" button or
use the menu "Execution -> Schedule". The chain will be scheduled as a background job.
You can see it in SM37, where you will find a job named "BI_PROCESS_TRIGGER".
Unfortunately, every process chain is scheduled with a job of this name; the job variant tells
you which process chain will be executed. During execution the steps defined in
RSPCPROCESSCHAIN are executed one after another. The execution of the next step is
triggered by events defined in the table. You can watch SM37 for newly started jobs
beginning with "BI_" or look at the protocol view of the chain.

9.) Check protocol for errors


You can check the chain execution for errors in the protocol, or process chain log. Choose
"Goto -> Log View" in the menu. You will be asked for the time interval for which you want
to check the chain execution. Possible options are today, yesterday and today, one week
ago, this month and last month, or a free date. For us the option "today" is sufficient.
Here is an example of another chain that ended with errors:

On the left side you see when the chain was executed and how it ended. On the right side
you see for every step whether it ended successfully or not. As you can see, the first two
steps were successful and the step "Load Data" of an InfoPackage failed. You can now check
the reason via the context menu entries "Display messages" or "Process monitor". "Display
messages" shows the job log of the background job and the messages created by the request
monitor. With "Process monitor" you get to the request monitor and see detailed
information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and
RSPCPROCESSLOG.


How to Append structure


Go to transaction RSA6.
Select your master data DataSource and choose Change.
Double-click the extract structure.

Click "Append Structure" in the menu bar.

Enter the fields you want to append.

Activate and go back to RSA6.

Change the DataSource, uncheck the "Hide" flag for the new field, and save it.

Next go to CMOD and give the project name.

Select the "Enhancement assignments" radio button and choose Create.

Enter RSAP0001, then click on "Components" in the menu bar.

Double-click EXIT_SAPLRSAP_002 for master data.

Then double-click the include ZXRSAU02; here you can write the ABAP code (a sketch
follows below).
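
As an illustration, here is a minimal sketch of exit code in ZXRSAU02 that fills an appended
field. Everything in it is hypothetical: the characteristic ZMATERIAL and the field
ZZREGION are made up, and the parameter names i_chabasnm and i_t_data are
assumptions about the exit interface, so check the actual signature of
EXIT_SAPLRSAP_002 in SE37.

* Include ZXRSAU02 - user exit for master data attributes.
* i_chabasnm: name of the characteristic being extracted (assumed name).
* i_t_data:   table of extracted records (assumed name).
FIELD-SYMBOLS: <l_s_data> TYPE any,
               <l_region> TYPE any.

CASE i_chabasnm.
  WHEN 'ZMATERIAL'.                  " made-up characteristic
    LOOP AT i_t_data ASSIGNING <l_s_data>.
      ASSIGN COMPONENT 'ZZREGION' OF STRUCTURE <l_s_data> TO <l_region>.
      IF sy-subrc = 0.
        <l_region> = 'EMEA'.         " derive the appended field here
      ENDIF.
    ENDLOOP.
ENDCASE.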
