
Update Rules

Definition
Update rules specify how the data (key figures, time characteristics, characteristics) is updated from the communication structure of an InfoSource to data targets. An update rule therefore connects an InfoSource with a data target.

Use
An update rule must be specified for each key figure and the corresponding characteristics of the InfoCube. For an ODS object, it must be specified for the data and key fields, and for an InfoObject for the attributes and key fields. The following update types exist:
1. No update
2. Addition, minimum, or maximum
3. Overwriting (with ODS objects and InfoObjects only)

A data target can be supplied with data from several InfoSources. A set of update rules must be maintained for each of these InfoSources, describing how the data is written from the communication structure belonging to the InfoSource into the data target. The InfoObjects of the communication structure belonging to the InfoSource are described in the update rules as source InfoObjects (that is, source key figures or source characteristics). The InfoObjects of the InfoCube, on the other hand, are described as target InfoObjects (that is, target key figures or target characteristics). Similarly, the InfoObjects of an ODS object or InfoObject are called target InfoObjects (in this case, target data fields or target key fields).

Structure
There is an update rule for each key figure of the InfoSource. It comprises the rule for the key figure itself and the current rules for the characteristics, time characteristics, and units assigned to it.

Creating Update Rules: Key Figure Calculation


Requirements
1. Select Administrator Workbench → InfoCubes.
2. Select the required InfoCube and choose Create update rules, or select the desired update rules and choose Change update rules.
3. Enter the name of the InfoSource whose update rules you wish to maintain. Select Next screen.

If the corresponding update rules have already been maintained, they are displayed. If you create new update rules, the system makes suggestions here.

4. Highlight a key figure and select the symbol for the detail display, or call up the detail screen by double-clicking on the key figure.

Procedure
1. Determine the update type for the displayed key figure. With the update type, you control whether a key figure is updated in the InfoCube.

a. Depending on the aggregation type you entered in the key figure maintenance for this key figure, you are given the option Addition, Maximum, or Minimum. If you choose one of these options, new values are updated in the InfoCube. The aggregation type (addition, minimum, or maximum) determines how key figures are updated when the primary keys are the same: either the total, the minimum, or the maximum of these values is formed for new values. If you entered one of these aggregation types in the key figure definition, it is transferred to the update. Otherwise, the aggregation type addition is selected automatically.

b. Release 2.0B only: If you update the data in an ODS object, you have the option of overwriting the data.

In this example, the status and value of the orders change after they have been loaded into BW. With the second load process the data is overwritten, since it has the same primary key.

First Load Process

  Date    Order number   Status   Value
  12.00   10             x        100
  12.00   11             x        200
  11.00   10                      300

Second Load Process

  Date    Order number   Status   Value
  12.00   10                      110
  12.00   11                      250
  11.00   10                      300

c. If you choose No update, the key figures are not updated in the InfoCube: no data records are written to the InfoCube with the first data transfer, and data records that already exist remain unchanged with subsequent transfers.

2. Select an update method (source key figure or routine).

a. Source key figure: If the system does not make a suggestion for a source key figure, you can assign a source key figure of the same type (amount, number, integer, quantity, float, time) or create a routine. If you assign a source key figure of the same type that has a different currency to the target key figure, you must translate the source currency into the target currency using a currency translation.

b. Routine: Select Routine if you want to fill a target key figure from an update routine. Update rules generally have only one return value. If you select Return table, the corresponding key figure routine no longer has a return value but a return table, and you can generate as many key figures as you like from one data record. If you fill the target key figure from an update routine, the currency translation has to be carried out within the update routine. If you choose a routine, you can also set the indicator Routine with unit calculation. The routine then also receives the return parameter UNIT, in which you can store the required unit of the key figure, such as DEM or ST. You can use this option, for example, to convert the unit KG in the communication structure into tons in the InfoCube, as sketched below.
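A minimal sketch of such a key figure routine with unit calculation follows. The source field WEIGHT, the unit field UNIT, and the fixed factor of 1,000 are assumptions for illustration only, not part of the generated template:

* Key figure routine with unit calculation (sketch only).
* Assumption: the communication structure carries a weight in KG
* that should be updated to the InfoCube in tons (TO).
  IF COMM_STRUCTURE-UNIT = 'KG'.
    RESULT = COMM_STRUCTURE-WEIGHT / 1000.
    UNIT   = 'TO'.                 "unit returned via the UNIT parameter
  ELSE.
    RESULT = COMM_STRUCTURE-WEIGHT.
    UNIT   = COMM_STRUCTURE-UNIT.
  ENDIF.
  RETURNCODE = 0.                  "result will be updated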

Result
You have determined the rule for the key figure. Switch to Creating Update Rules: Characteristic Calculation.

Creating Update Rules: Characteristic Calculation


Requirements
You have already determined the rule for the corresponding key figure.

Procedure

1. Determine an update method for each characteristic. With the calculation type, you control whether and how a characteristic is updated in the InfoCube. There are several options open to you:

- Source characteristic: The characteristic is filled directly from the chosen characteristic of the communication structure.
- Constant: The characteristic is not filled from the communication structure, but directly with the value entered.
- Master data attribute of: The characteristic is filled by reading the master data of another characteristic of the communication structure on demand.
- Routine: The characteristic is filled by an update routine that you have written (see the sketch after this procedure). The system provides a selection option that lets you decide whether the routine should be valid for all of the key figures belonging to this characteristic or only for the key figure displayed. If you create different rules for different key figures for the same characteristic, a separate data record can be created from one data record of the InfoSource for each key figure. If the rule is to be valid for all key figures, you do not need to maintain the rules for this characteristic again for the remaining key figures of the InfoCube.
- Initial value: The characteristic is not filled. It remains empty.

2. If you have fixed the rules for all characteristics, switch to the time reference.
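A characteristic routine receives the communication structure and returns a single value in RESULT. A minimal sketch under assumed names (a source field PLANT and a derived region; neither comes from the original text):

* Characteristic routine (sketch only): derive a region from the
* first two characters of an assumed source field PLANT.
  IF COMM_STRUCTURE-PLANT(2) = 'DE'.
    RESULT = 'EMEA'.
  ELSE.
    RESULT = 'OTHER'.
  ENDIF.
  RETURNCODE = 0.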

Creating Update Rules: Time Reference


Requirements
You have already determined the rules for the corresponding key figure and the characteristics that belong to it.

Procedure
1. Choose the tabstrip Time Reference.

2. Determine an update method for each time characteristic. With the calculation type, you control whether and how a time characteristic is updated in the InfoCube. There are several options open to you:

- Source characteristic: If you wish to select a source time characteristic, the system offers you the entry option automatic time conversion. Only those source time characteristics that are equal, or for which an automatic time conversion exists, are displayed here.
- Constant: The characteristic is not filled from the communication structure, but directly with the value entered.
- Master data attribute of: The characteristic is filled by reading the master data of another characteristic of the communication structure on demand.
- Routine: The characteristic is filled by an update routine that you have written (a minimal sketch follows this procedure). The system offers a selection option that lets you decide whether the routine should be valid for all of the key figures that belong to this time characteristic or only for the key figure displayed. If you create different rules for different key figures for the same time characteristic, a separate data record can be created from one data record of the InfoSource for each key figure. If the rule is to be valid for all key figures, you do not need to maintain the rules for this time characteristic again for the remaining key figures of the InfoCube.
- Initial value: The characteristic is not filled. It remains empty.

3. If you have determined the rules for all of the time characteristics, choose Transfer or switch to the next key figure.
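Where the automatic time conversion does not cover a case, a routine can derive the time characteristic itself. A minimal sketch, assuming the communication structure contains the calendar day 0CALDAY (field CALDAY, format YYYYMMDD) and the target is the calendar month (YYYYMM):

* Time characteristic routine (sketch only): derive the calendar
* month from the calendar day of the communication structure.
  RESULT = COMM_STRUCTURE-CALDAY(6).   "YYYYMMDD -> YYYYMM
  RETURNCODE = 0.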


Creating Update Routines


Requirements
1. Select Administrator Workbench → InfoCubes.
2. Select the required InfoCube and then choose Create update rules or Change update rules.
3. Enter the name of the InfoSource whose update rules you wish to maintain. Select Next screen.
4. Highlight a key figure and select the symbol for the detail display, or call up the detail screen by double-clicking on the key figure.
5. Choose the InfoObject (characteristic, time characteristic, key figure) for which you want to create an ABAP routine.

Procedure
1. Choose Routine.

2. Choose Create routine. The ABAP editor appears with the following source code:

 1. PROGRAM UPDATE_ROUTINE.
 2. *
 3. *$*$ begin of global - insert your declaration only below this line *-*
 4. * TABLES: ...
 5. * DATA: ...
 6. *$*$ end of global - insert your declaration only before this line *-*
 7. *
 8. FORM compute_characteristics
 9.   TABLES MONITOR STRUCTURE RSMONITOR "user defined monitoring
10.   USING COMM_STRUCTURE LIKE /BIC/CS<infosource>
11.     RECORD_NO LIKE SY-TABIX
12.     RECORD_ALL LIKE SY-TABIX
13.     SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
14.   CHANGING RESULT LIKE /BIC/V<infocube>T-<infoobject>
15.     RETURNCODE LIKE SY-SUBRC
16.     ABORT LIKE SY-SUBRC.
17. *$*$ begin of routine - insert your code only below this line *-*
18. * fill the internal table "MONITOR" to make monitor entries
19. *
20. * result value of the routine
21.   RESULT = .
22. *
23. * if the returncode is not equal zero, the result will not be updated
24.   RETURNCODE = 0.
25. *
26. * if abort is not equal zero, the update process will be canceled
27.   ABORT = 0.
28. *
29. *$*$ end of routine - insert your code only before this line *-*
30. *
31. ENDFORM.

The parts marked <> are automatically replaced with the relevant object names.

3. Insert your code below *$*$ begin of routine - insert your code only below this line *-* (row 17). In the field RESULT = (row 21), you set the result of your update routine.

- With COMM_STRUCTURE-<field name>, you can access the fields of your communication structure (for example, COMM_STRUCTURE-calday+1).
- With MONITOR-<field name>, you can fill the fields of the monitor table for user-defined messages.
- With the ABORT parameter, you can determine whether the loading process should be canceled when a certain event occurs. In the following example, the loading process is canceled if the value for the InfoObject AMOUNT is less than 0. In this case, message 100 of type ERROR (E) with message class ZJH appears. Messages of type ERROR also appear in the monitor when the status is displayed for the IDoc.

* if abort is not equal zero, the update process will be canceled
  if comm_structure-amount < 0.
    monitor-msgid = 'ZJH'.
    monitor-msgty = 'E'.
    monitor-msgno = '100'.
    monitor-msgv1 = comm_structure-amount.
    append monitor.
    abort = 1.
    exit.
  else.
    abort = 0.
  endif.
* result value of the routine
  result = comm_structure-amount.

You also have the option of entering global declarations in rows 4 and 5 (TABLES or DATA), which can then be used in all further routines that you create. One possible use is sketched below.
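For example, a table declaration and a global internal buffer placed in the global part are visible in every routine of these update rules. A minimal sketch; the exchange rate table TCURR is used only as an illustration, and the variable names are assumptions:

*$*$ begin of global - insert your declaration only below this line *-*
* TABLES: DDIC tables referenced by the routines
TABLES: TCURR.
* DATA: global buffer, visible in all routines of these update rules
DATA: GT_RATES LIKE TCURR OCCURS 0 WITH HEADER LINE.
*$*$ end of global - insert your declaration only before this line *-*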

What do you mean by Queued Delta Update Method?


"Queued delta" update method: With queued delta update mode, the extraction data (for the relevant application) is written in an extraction queue (instead of in the update data as in V3) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update. After activating this method, up to 10000 document delta/changes to one LUW are cumulated per datasource in the BW delta queues. If you use this method, it will be necessary to schedule a job to regularly transfer the data to the BW delta queues (by means of so-called "update collective run") by using the same delivered reports as before (RMBWV3); instead, report RSM13005 will not be provided any more since it only processes V3 update entries. As always, the simplest way to perform scheduling is via the "Job control" function in LBWE. SAP recommends to schedule this job hourly during normal operation after successful delta initialization, but there is no fixed rule: it depends from peculiarity of every specific situation (business volume, reporting needs and so on). BENEFITS, BUT When you need to perform a delta initialization in the OLTP, thanks to the logic of this method, the document postings (relevant for the involved application) can be opened again as soon as the execution of the recompilation run (or runs, if several and running in parallel) ends, that is when setup tables are filled, and a delta init request is posted in BW, because the system is able to collect new document data during the delta init uploading too (with a deeply felt recommendation: remember to avoid update collective run before all delta init requests have been successfully updated in your BW!). By writing in the extraction queue within the V1 update process (that is more burdened than by using V3), the serialization is ensured by using the enqueue concept, but collective run clearly performs better than the serialized V3 and especially slowing-down due to documents posted in multiple languages does not apply in this method. On the contrary of direct delta, this process is especially recommended for customers with a high occurrence of documents (more than 10,000 document changes - creation, change or deletion performed each day for the application in question. In contrast to the V3 collective run (see OSS Note 409239 Automatically trigger BW loads upon end of V3 updates in which this scenario is described), an event handling is possible here, because a definite end for the collective run is identifiable: in fact, when the collective run for an application ends, an event (&MCEX_nn, where nn is the number of the application) is automatically triggered and, thus, it can be used to start a subsequent job.

Besides, don't forget that queued delta extraction is independent of the success of the V2 update.

REMEMBER TO EMPTY THE QUEUE. The queued delta is a good friend, but some care is required to avoid trouble. First of all, if you want to take a look at the data of all extract structure queues in the Logistics Cockpit, use transaction LBWQ or the "Log queue overview" function in LBWE (although the latter shows only the queues currently containing extraction data).

In the posting-free phase before a new init run in the OLTP, you should always execute the update collective run once (as with the old V3) to make sure the extraction queue is emptied of any old delta records (especially if you are already using the extractor), which could otherwise cause serious inconsistencies in your data.

Then, if you want to make changes (through LBWE or RSA6) to the extract structures of an application for which you selected this update method, you have to be absolutely sure that no data is left in the extraction queue before executing these changes in the affected systems (and especially before importing these changes into the production environment!). To perform this check when the V3 update is already in use, you can run the check report RMCSBWCC in the target system.

The extraction queues should never contain any data immediately before you:

- perform an R/3 or plug-in upgrade
- import an R/3 or plug-in support package

Serialized V3 update: if you set this as your delta mode, the data is written to the update tables and transferred from there by the collective run.


Types of Update Methods - LO Extraction


LO Extraction - Update queue - Update methods
---------------------------------------------
1. Direct Delta
2. Queued Delta
3. Unserialized V3 Update

1. Direct Delta: When a document is posted, it is first saved to the application table and also written directly to RSA7 (the delta queue); from there it is moved to BW. So for the delta flow, the R/3 delta queue is the exit point.

2. Queued Delta: When a document is posted, it is saved to the application table and also to the extraction queue (this is the difference from direct delta); you have to schedule a V3 job to move the data periodically to the delta queue, and from there it is moved to BW.

3. Unserialized V3 Update: This method is largely identical to the serialized V3 update. The difference is that the sequence of document data in the BW delta queue does not have to agree with the posting sequence. It is recommended only when the sequence in which data is transferred into BW does not matter (due to the design of the data targets in BW). You can use it for Inventory Management, because once a material document is created, it is not edited. The sequence of records matters when a document can be edited multiple times. But again, if you are using an ODS in your inventory design, you should switch to the serialized V3 update.

Loading Hierarchies

Prerequisites
In the InfoObject maintenance, the indicator with hierarchies is set for the characteristic, to show that the characteristic is allowed to have hierarchies. A source system must be assigned to the InfoSource for master data.

If you want to load hierarchies from external systems, you must also maintain the metadata.

Procedure
1. Select the InfoSource tree from the Administrator Workbench.

2. Select the master data InfoSource for the InfoObject to which you want to load the hierarchy.
3. Choose Assign DataSource from the context menu, and select the source system from which you want to load the hierarchy.
4. Choose Create InfoPackage from the context menu and give the InfoPackage a description.
5. Choose the pushbutton Hierarchies from the possible data types. You get a list of all the hierarchies that are available in the source system for this InfoObject.
6. On the tabstrip Hierarchy Selection, select the hierarchy that you want to load into the Business Information Warehouse.
7. It is possible to create a hierarchy as a sub-tree, provided there is already a hierarchy under the specified technical key, and this hierarchy contains the root node of the hierarchy that you want to load.

Note: Use the other tabstrips to determine the update parameters, and schedule the InfoPackage.

Result
The hierarchy structure and the node texts/intervals are loaded. The structure information and the hierarchy texts are stored in the Business Information Warehouse. You are now able to edit the hierarchy. You must activate it so that it can be used in reporting.

NOTE: When you upload hierarchies, the system carries out a consistency check, making sure that the hierarchy structure is correct. Error messages are logged in the Monitor.

Creating Hierarchies

Prerequisites
In the InfoObject maintenance, the indicator with hierarchies is set for the relevant characteristic, meaning that the characteristic can have hierarchies.

Procedure

Step 1: Select the InfoObject tree in the Administrator Workbench. If you have assigned the characteristic to an InfoObject catalog, select the corresponding InfoObject catalog for an InfoArea. If the characteristic does not belong to an InfoObject catalog, select the Not Assigned Nodes InfoArea and the Not Assigned Characteristics InfoObject catalog.

Step 2: Select the characteristic for which you want to create a hierarchy, open the context menu with the right mouse button, and choose Create hierarchy.

Step 3: In the dialog box, enter the technical hierarchy name, the hierarchy version and the time reference if applicable, and at least one short description of the hierarchy. Confirm your entries. You reach the screen Maintain hierarchies, where you can now define your hierarchy.

Creating Nodes and Leaves

Step 4: Select the root and choose Edit → Create nodes. Give the node a technical name and at least one short description. Select a node that does not represent a value interval and create a child node for it, or choose Edit → Insert. You now see a list of the characteristic values and can select one single value or several single values. These are postable nodes, symbolized by the green InfoObject icons.

If you want to include external characteristic nodes in the hierarchy, you can do this using right mouse button → Insert nodes for characteristic, or Characteristic in the menu toolbar. These are non-postable nodes. If the hierarchy is allowed to contain intervals, choose right mouse button → Create intervals, or Interval in the menu toolbar, and specify (for example, with the input help) the value interval.

You can create intervals anywhere, even under postable nodes.

A hierarchy node, which represents a value interval, is an end node and cannot have child nodes.

Step 5: Repeat these steps until you have created all the nodes and the leaves. To display, change, or delete nodes or leaves from the hierarchy, place the cursor on the node/leaf concerned, and choose Edit → Change, display, delete nodes.

Saving and Activating Hierarchies

Save the hierarchy. With the right mouse button, choose Activate hierarchy.

Difference Between Info Cube and ODS

InfoCubes have a multidimensional structure with dimension tables (a maximum of 16, of which 13 are custom) and one fact table. They are meant for summarized records. ODS objects store data at a more granular level; they have flat structures, like a table in R/3, and they have a unique "overwrite" feature which is absent in cubes. You can use an ODS to load a cube further on.

One major difference is the manner of data storage. In an ODS, data is stored in flat tables - by flat we mean ordinary transparent tables - whereas a cube is composed of multiple tables arranged in a star schema joined by SIDs. The purpose is to do multi-dimensional reporting.

Another difference: in an ODS, you can update an existing record given the key. In cubes, there is no such thing; they accept duplicate records, and during reporting the key figures are summed up. There is no editing of previous record contents, just adding. With an ODS, the procedure is UPDATE IF EXISTING (based on the table key), otherwise ADD RECORD.

ODS:
- Stores line-item-level detail; more granular
- Aggregates cannot be created on an ODS
- Based on flat tables
- Only two-dimensional reporting possible
- Overwrite feature available while loading records

InfoCube:
- Stores summarized data; less granular
- Aggregates can be created on top of InfoCubes for better query performance
- Multi-dimensional reporting possible
- No overwrite feature while loading records

InfoCubes are multidimensional objects where fact and dimension tables are available, whereas an ODS is not a multidimensional object: there are no fact or dimension tables, just flat transparent tables. In InfoCubes there are characteristics and key figures; in an ODS, key fields and data fields. We can keep non-key characteristics in the data fields. When we need detailed reports, we can get them through an ODS.

ODS objects are used to store data in a granular form, i.e. the level of detail is higher; the data in the InfoCube is in aggregated form. From a reporting point of view, the ODS is used for operational reporting, whereas InfoCubes are used for multidimensional reporting. ODS objects can be used to merge data from one or more InfoSources, a facility InfoCubes do not have. The default update type for an ODS object is overwrite; for an InfoCube it is addition. ODS objects are also used to implement deltas in BW: data is loaded into the ODS object as new records, or existing records are updated in the change log or overwritten in the active data table, controlled by 0RECORDMODE. You cannot load data into an ODS using the IDoc transfer method, but you can into an InfoCube. You cannot create aggregates on an ODS, and you cannot create InfoSets on an InfoCube.

ODS objects can be used in the following scenarios (an ODS is not mandatory; it depends on the requirements):
- When you want to use the overwrite facility, i.e. to overwrite non-key characteristics and key figures in the data fields.
- When you want detailed reports.
- When you want to merge data from two or more InfoSources.
- When you want to drill down from an InfoCube to the ODS through the RRI interface to get detailed data.
- When you want to create an external file.

The most important difference between the ODS and the InfoCube is the existence of key fields in the ODS: you can have up to 16 InfoObjects as key fields, and any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. There is the disadvantage that if you delete rows in the flat file, the rows are not deleted in the ODS. You can also use ODS objects to upload control data for update or transfer routines: simply do a select on the ODS table /BIC/A<odsname>00 to get the data (see the sketch at the end of this section).

An ODS is used as an intermediate storage area of operational data for the data warehouse. ODS objects contain highly granular data and are based on flat tables, resulting in simple modeling. We can cleanse, transform, merge, and sort data to build staging tables that can later be used to populate an InfoCube.

An InfoCube is a multidimensional data container used as a basis for analysis and reporting. The InfoCube consists of a fact table and its associated dimension tables in a star schema: the fact table appears in the middle, with several surrounding dimension tables.
The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The size of the dimension tables amounts to only 1 to 5 percent of the size of the fact table. Common dimensions are unit, time, etc. There are different types of InfoCubes in BW, such as basic InfoCubes, remote InfoCubes, etc.

An ODS is a flat data container used for reporting and data cleansing/quality assurance purposes. It is not based on a star schema and is used primarily for detail reporting rather than for dimensional analysis.

An InfoCube has a fact table, which contains its facts (key figures), and a relation to dimension tables. This means that an InfoCube consists of more than one table, and these tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all data that is important for the customer.

An ODS is a flat structure: just one table that contains all the data. Most of the time you use an ODS for line-item data, and then aggregate this data into an InfoCube.

An ODS holds transactional-level data, just as a flat table; it is not based on the multidimensional model. An ODS has three tables:
1. Active table
2. Change log
3. New data table

A cube holds aggregated data which is not as detailed as an ODS, and is based on the multidimensional model. A cube has two tables:
1. E table
2. F table
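A minimal sketch of the control data lookup mentioned above, assuming a purely hypothetical ODS object ZCTRL (active table /BIC/AZCTRL00) with a hypothetical key field /BIC/ZMATNR; real active tables follow the pattern /BIC/A<odsname>00:

* Read control data from the active table of a hypothetical
* ODS object ZCTRL (all names are assumptions).
DATA: LT_CTRL LIKE /BIC/AZCTRL00 OCCURS 0 WITH HEADER LINE.

SELECT * FROM /BIC/AZCTRL00
  INTO TABLE LT_CTRL
  WHERE /BIC/ZMATNR = 'MAT001'.    "hypothetical selection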

Responsibilities of BW Data Architect


Responsibilities in an implementation project... For example, let's say it is a fresh implementation of BI, or for that matter you are implementing SAP. First and foremost is requirements gathering from the client. Based on the requirements, you create a business blueprint of the project, which covers the entire process from the start to the end of the implementation. After the blueprint phase is signed off, the realization phase starts, where the actual development happens. In our example, after installing the necessary software and patches for BI, we need to discuss with the end users who are going to use the system to get inputs such as how they want a report to look and what the Key Performance Indicators (KPIs) for the reports are; basically a question-and-answer session with the business users. After collecting this information, the development happens in the development servers. When the development comes to an end, the same objects are tested in the quality servers for any bugs, errors, etc. When all the tests are done, we move all the objects to the production environment and test again whether everything works fine. Then the go-live of the project happens, where the actual postings are made by the users, and reports are generated based on those inputs, which are available as analytical reports for management to make decisions. The responsibilities vary depending on the requirement: initially the business analyst interacts with the end users/managers, then based on the requirements the software consultants do the development, the testers do the testing, and finally the go-live happens.

BW Data Architect

Description: The BW Data Architect is responsible for the overall data design of the BW project. This includes the design of the:
- BW InfoCubes (basic cubes, MultiCubes, remote cubes, and aggregates)
- BW ODS objects
- BW data marts
- Logical models
- BW process models
- BW enterprise models

The BW Data Architect plays a critical role in the BW project and is the link between the end users' business requirements and the data architecture solution that will satisfy these requirements. All other activities in the BW project are contingent upon the data design being sound and flexible enough to satisfy evolving business requirements.

Time Commitment - the time which must be committed to this role to ensure the project requirements are met, by project complexity:
- Low: If the BW project utilizes standard BW content and InfoCubes, this role can be satisfied by the BW Application Consultant.
- Medium: If the BW project requires enhancements to the standard BW content and InfoCubes and/or requires the integration of non-SAP data, this role may require a committed resource.
- High: If the BW project requires significant modification and enhancement of standard BW content and InfoCubes, it is highly recommended that an experienced resource be committed full-time to the project.

Key Attributes - The BW Data Architect must have:
- An understanding of the BW data architecture
- An understanding of multidimensional modeling
- An understanding of the differences between operational systems data modeling and data warehouse data modeling
- An understanding of the end users' data
- An understanding of the integration points of the data (e.g., customer number, invoice number)
- Excellent troubleshooting and analytical skills
- Excellent communication skills
- Technical competency in data modeling
- Multi-language skills, if an international implementation
- Working knowledge of the BW and R/3 application(s)
- Experience with data modeling application software (e.g., ERwin, Oracle Designer, S-Designer, etc.)

Key Tasks - The BW Data Architect is responsible for capturing the business requirements for the BW project. This effort includes:
- Planning the business requirements gathering sessions and process
- Coordinating all business requirements gathering efforts with the BW Project Manager
- Facilitating the business requirements gathering sessions
- Capturing the information and producing the deliverables from the business requirements gathering sessions
- Understanding and documenting business definitions of data
- Developing the data model
- Ensuring integration of data from both SAP and non-SAP sources
- Fielding questions concerning the data content, definition, and structure

This role should also address other critical data design issues such as:
- Granularity of data and the potential for multiple levels of granularity
- Use of degenerate dimensions
- InfoCube partitioning
- Need for aggregation at multiple levels
- Need for storing derived BW data
- Ensuring overall integrity of all BW models
- Providing data administration development standards for business requirements analysis and BW enterprise modeling

- Providing strategic planning for data management
- Impact analysis of data change requirements

As stated above, the BW Data Architect is responsible for the overall data design of the BW project. This includes the design of the:
- BW InfoCubes (basic cubes, MultiCubes, remote cubes, and aggregates)
- BW ODS objects
- BW data marts
- Logical models
- BW process models
- BW enterprise models

MULTIPROVIDER
A MultiProvider is a type of InfoProvider that combines data from a number of InfoProviders and makes it available for reporting purposes. The MultiProvider itself does not contain any data; its data comes entirely from the InfoProviders on which it is based, which are connected to one another by a union operation. InfoProviders and MultiProviders are the objects or views that are relevant for reporting. A MultiProvider allows you to run reports using several InfoProviders, that is, it is used for creating reports on more than one InfoProvider at a time. In the BI 7 version, MultiProviders can be created using InfoCubes (both physical and virtual), DSOs, InfoObjects, InfoSets, and aggregation levels. In a MultiProvider, every characteristic in each of the InfoProviders involved must correspond to exactly one characteristic or navigation attribute (where these are available). If the assignment is not unambiguous, you have to specify, at the MultiProvider definition stage, to which InfoObject of the InfoProviders you want to assign each characteristic of the MultiProvider.

Types of MultiProviders:

1. Homogeneous MultiProviders: These consist of technically identical InfoProviders, such as InfoCubes with exactly the same characteristics and key figures, where, for example, one InfoCube contains data for 2001 and a second InfoCube contains data for 2002. Homogeneous MultiProviders can be used for partitioning on the modeling level of the InfoProviders.

2. Heterogeneous MultiProviders: These are made up of InfoProviders that have only a certain number of characteristics and key figures in common. Heterogeneous MultiProviders can be used to simplify the modeling of scenarios by dividing them into sub-scenarios, where each sub-scenario is represented by its own InfoProvider.

MultiProviders with non-cumulative key figures: You should not include more than one InfoProvider with non-cumulative key figures in a MultiProvider, because this could lead to incorrect query results.

Introduction to SAP Hierarchies

Definition
A hierarchy is a method of displaying a characteristic structured and grouped according to individual evaluation criteria. A BW hierarchy has the following properties:
- Hierarchies are created for basic characteristics (characteristics containing master data). The characteristic 0COSTCENTER is an example.
- Hierarchies are stored in master data tables. They are treated similarly to master data and can therefore be used and modified in all InfoCubes.
- You can define several hierarchies for a single characteristic.
- A hierarchy can have a maximum of 98 levels.

- You can load hierarchies from the R/3 system or from a flat file. You can also create and change hierarchies manually in the BW system.

Use: Hierarchies are used in two main ways:

1. Firstly, for the structured display of characteristic values (tree display) in a presentation hierarchy.
2. Secondly, for selecting a defined quantity of characteristic values as a selection of hierarchy nodes.

Structure
A hierarchy consists of nodes. A node can be assigned to a higher-level node. There is exactly one top node (root). All nodes on the same level of the hierarchy (nodes that are the same distance away from the root) form a hierarchy level. A characteristic hierarchy consists of nodes that can be posted to and nodes that cannot be posted to. Hierarchies can be created only for those characteristics that do not reference other characteristics.

The characteristic Cost Element, for example, can be structured according to cost element groups. The highest hierarchy level consists, for example, of personnel costs, material costs, administration costs, and so on. Personnel costs are divided, for example, into the cost element groups pay, salaries, and personnel overhead costs. The cost element group pay contains the cost elements individual pay costs, pay overhead costs, and other pay costs.

Another typical example of a characteristic hierarchy is the grouping of the characteristic Region into districts that are themselves sub-divided into areas.

Hierarchies

Load characteristic hierarchies from the source system:
The source system can be an R/3 OLTP system or an external system (BAPI, file). If you want to load a hierarchy from an external system, you first have to maintain the metadata for this hierarchy. You can load a hierarchy directly from an R/3 system.

There are two options:
1. Loading hierarchies
2. Creating characteristic hierarchies in the Business Information Warehouse

If you are working with aggregates and are loading or creating new hierarchies, or changing existing hierarchies, you have to reconstruct the aggregates that are affected by the changes afterwards.

Prerequisites
You have to determine in the InfoObject maintenance whether or not you want the characteristic to have hierarchies. The properties of the hierarchy (for example, the hierarchy versions and a time-dependent hierarchy structure) are also defined there.

Different Types of Delta Updates


What are the different delta updates, and when do you use which?

Delta loads bring only the new or changed records since the last upload. This method is used for faster loading in less time. Most of the standard SAP DataSources come delta-enabled, but some are not; in that case you can do a full load to the ODS and then a delta from the ODS to the cube. If you create generic DataSources, you have the option of creating a delta on a calendar day, timestamp, or numeric pointer field (this can be a document number, etc.). You can see the delta changes coming into the delta queue through RSA7 on the R/3 side. To do a delta, you first have to initialize the delta on the BW side and then set up the delta. The delta mechanism is the same for both master data and transaction data loads.

There are three delta update modes:

Direct Delta: With this update mode, the extraction data is transferred directly into the BW delta queue with each document posting. In doing so, each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queues.

Queued Delta: With this update mode, the extraction data for the affected application is collected in an extraction queue instead of in the update data, and can be transferred as usual with the V3 update by means of an updating collective run into the BW delta queue. In doing so, up to 10,000 delta extractions of documents for an LUW are compressed for each DataSource into the BW delta queue, depending on the application.

Unserialized V3 Update: With this update mode, the extraction data for the application in question is written as before into the update tables with the help of a V3 update module, and is kept there until the data is selected and processed by an updating collective run. However, in contrast to the default serialized V3 update, the data in the updating collective run is read from the update tables without regard to sequence and transferred to the BW delta queue.
