
BIW_Q&A
=========================================================================

1) I do not understand the term 'flexible master data'. I looked into the documentation in SAPNet but I am still confused.

Sol: It means that a Master Data InfoSource can be loaded into an ODS prior to the Master Data tables. I look at it as being able to update the master data using an update rule instead of the direct way. If I understand the question, it is really the InfoSource that is 'flexible'. Prior to version 3.0 an InfoSource was suitable for either transaction data OR master data. In version 3.0 it is possible to load master data and transaction data from the same source -- there are probably not many cases where you would want to; document number is the main example that springs to mind. In RSA1 -> InfoProvider there is a right mouse option under InfoArea to 'Insert characteristic as a data target'. Once you have done this, you can define update rules just like a standard cube or ODS.

2) What are the major benefits of reporting in BW over R/3? Would it be sufficient just to web-enable R/3 reports?

Sol: There are quite a few companies that share your thought, but R/3 was designed as an OLTP system, not an analytical and reporting system. In fact, depending on your needs, you can even get away with a reporting instance (quite easy with Sun or EMC storage). Yes, you can run as many reports as you need from R/3 and web-enable them, but consider these factors:
1. Performance -- Heavy reporting alongside regular OLTP transactions can put a lot of load on both R/3 and the database (CPU, memory, disks, etc.). Just take a look at the load on your system during a month end, quarter end, or year end -- now imagine that occurring even more frequently.
2. Data analysis -- BW uses data warehouse and OLAP concepts for storing and analyzing data, whereas R/3 was designed for transaction processing. With a lot of work you can get the same analysis out of R/3, but it is most likely easier from BW.
The major benefits of BW include:
1. By offloading ad-hoc and long-running queries from the production R/3 system to the BW system, overall system performance on R/3 should improve.
2. Another key performance benefit of BW is database design. It is designed specifically for query processing, not data updating and OLTP. Within BW, data structures are designed differently and are much better suited for reporting than R/3 data structures. For example, BW utilizes a star schema design, which includes fact and dimension tables with bit-mapped indexes. Other important factors include the built-in support for aggregates, database partitioning, and more efficient ABAP code through tRFC processing instead of IDocs.
3. Better front-end reporting within BW. Although the BW Excel front end has its problems, it provides more flexibility and analysis capability than the R/3 reporting screens.
4. BW can pull data from other SAP or non-SAP sources into a consolidated cube.
In summary, BW provides much better performance and stronger data analysis capabilities than R/3.

3) Comparing data between R/3 and BW: How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?

Sol: Go to R/3 transaction RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW Monitor, check the number of records in the PSA, and see if it is the same. RSA3 is a simple extractor checker program that allows you to rule out extract problems in R/3. It is simple to use, but it only really tells you whether the extractor works.
Since the records that get updated into cubes/ODS structures are controlled by update rules, you will not be able to determine what is in the cube compared to what is in the R/3 environment from counts alone. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end-user community to assist, since they presumably know the data. To use RSA3, go to the transaction and enter the extractor, e.g. 2LIS_02_HDR. Click execute and you will see the record count; you can also display the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor (RSMO) during and after data loads. From RSMO, for a given load, you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules. It also gives you error messages from the PSA.
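If you want to compare the RSA3 count against what actually arrived in the PSA programmatically rather than through the monitor, a quick-and-dirty sketch is to count the rows in the PSA's transparent table for the request in question. This is an assumption-laden outline: the PSA table name /BIC/B0000123000 below is a hypothetical placeholder for the generated table name, which you must first look up for your InfoSource via RSA1 or SE11.

REPORT z_psa_request_count.

* Hypothetical generated PSA table name - find the real one in SE11/RSA1.
* PSA tables are transparent tables keyed by a REQUEST field.
PARAMETERS p_req(30) TYPE c.

DATA l_count TYPE i.

SELECT COUNT(*) FROM /bic/b0000123000 INTO l_count
  WHERE request = p_req.

WRITE: / 'Records in PSA for request', p_req, ':', l_count.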

4) How can I copy queries from one InfoCube to another InfoCube in the Business Explorer?

Sol: Use transaction RSZC in your BW system.

5) Deletion from an InfoCube: Is there a way to delete data? Do I also have to delete the information from the master and transactional data?

Sol: You can delete the request in the InfoCube administration (transactional data). Master data is deleted in the InfoObjects area of the Administrator Workbench. Master data can only be deleted if no transactional data links to the entry in the master data table. Usually there is a system message informing you that not all master data can be deleted (if there is transactional data in any cube referring to the entries in the master data table), but that some entries may be deleted.

6) Installing BW: Can anybody tell me what I need to install SAP BW?

Sol: If you have installed R/3 before, the BW installation shouldn't be a problem; the installation process is the same. After installation, you have to create an RFC connection between the R/3 and BW systems, and then create a source system in BW to extract data from R/3. Also, you have to have the latest plug-in applied to the R/3 system. The same goes for the support packages.

7) Can anyone tell me, technically, what are the main advantages of BW 3.0 over 2.1?

Sol: There are tons of advantages. You should consider downloading white papers and presentations from SAPnet or viewing help.sap.com to see them all. Here is a brief list, without detailed explanations:
- XML support for reading and writing metadata and data.
- Web Application Designer: graphically design and publish web reports and web applications.
- Open Hub Service for distributing data to other tables/systems on a schedule. It supports selection, projection, and aggregation, and is integrated into the BW monitor.
- Process chains and a graphical scheduler for automating complex processes. Supports branching, condition testing, etc. Enables you to schedule all admin functions in a graphical, drag-and-drop fashion, including extracts, index drops and rebuilds, activating data, compressing, rolling up to aggregates, etc.
- Transactional ODS objects with a read/write API.
- Transfer rules for hierarchies.
- Subtree insert and subtree update for hierarchies.
- Active attributes on hierarchies (sign flip, etc.).
- Load, stage, and merge master data to an ODS.
- DB Connect service, which allows BW to extract data from any DBMS supported by SAP R/3.
- Formula editor in transfer and update rules to avoid ABAP coding.
- Toolbox of standard transformations such as substring, table lookups, concatenation, offset and length, arithmetic operations, etc.
- Parallel loading into a single ODS object.
- Secondary index maintenance for ODS objects built into the BW Administrator Workbench.
- MultiProvider: use ODS objects, master data, and cubes in a MultiCube (now called a MultiProvider).
- New InfoSet concept: join tabular data together and make it available for reporting. You can join ODS objects, master data tables, etc.
- Archiving: built-in tools to archive data on a schedule using selection criteria, integrated into the BW monitor; archived data is deleted from InfoCubes and the corresponding aggregates, with support for reloading data from archives as necessary.
- The BW monitor includes all activities of process chains, the Open Hub Service, and archiving.
- Use multiple hierarchies in a BW query.
- Use a mix of hierarchies and characteristics on rows or columns in a BW query.
- Use restricted key figures in a calculated key figure.
- Generate static, offline web reports.
- BEx cell editor: in queries with two structures, you can now reference cells at the intersections of rows and columns.
8) Maintaining master data in BW: What is the best practice for maintaining master data in BW when the master data changes in R/3 very frequently? For certain master data objects (e.g. Maintenance Order, Service No.) no delta upload option is available, only a full update. How do we keep this incremental data in sync with R/3?

Sol: For some objects, we implemented what we call a "manual delta." This requires that you have a date field in your DataSource (a create date and/or a change date). This field has to be a selection field in BW. If you can make sure that this date can only be in a certain range (if it is a posting date, for example, you can only post to open periods), you can load this range instead of everything. We load document master data that way and restrict on the current and the prior month or period. For other objects, where we do not get that many records, we just do a full load every night. Some rules of thumb (with a selection-routine sketch after this list):

1. If the master data volume is low (approx. < 10,000 records), you can load it in full daily; it will take about 5 minutes.
2. If the volume is larger, you can load data based on a create/update date selection parameter. This way you can pass the date as a parameter in the InfoPackage.
3. If there are no date parameters, find out whether there are one or two underlying tables for the master data in which an update date is available. Then you can generate a custom delta extract based on a view and the delta date.
4. If no other options are available, you may need to do something based on your business requirements (e.g. load only the master records required daily).
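The "manual delta" described above is typically implemented as an ABAP routine in the InfoPackage's data selection tab. The sketch below reproduces the BW 3.x routine interface from memory and uses a hypothetical DataSource date field ZZCHDAT; the generated form name and exact interface will differ in your system, so treat this as an outline rather than a drop-in routine.

* InfoPackage data selection routine (BW 3.x style sketch).
* Restricts the load to the last 35 days - a "manual delta".
* ZZCHDAT is a hypothetical change-date field of the DataSource.
form compute_zzchdat
  tables   l_t_range structure rssdlrange
  changing p_subrc   like sy-subrc.

  data: l_s_range type rssdlrange,
        l_from    type d.

  l_from = sy-datum - 35.                 " lower bound: 35 days back

  read table l_t_range into l_s_range
       with key fieldname = 'ZZCHDAT'.
  l_s_range-sign   = 'I'.
  l_s_range-option = 'BT'.                " between l_from and today
  l_s_range-low    = l_from.
  l_s_range-high   = sy-datum.
  modify l_t_range from l_s_range index sy-tabix.

  p_subrc = 0.
endform.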
9) Partitioning performance aspects for BW: What types of partitioning are there for BW?

Sol: There are two partitioning performance aspects in BW:
A) Query data retrieval performance: partitioning by, say, a date range improves data retrieval by making the best use of the database's range-based execution plans and indexes (e.g. on the Oracle database engine).
B) Transactional load partitioning: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g. without timeouts).

10) How can SAP's Strategic Enterprise Management (SEM) work as a BI tool?

Sol: SAP's Strategic Enterprise Management (SEM) works with the Business Information Warehouse (BW) and can either sit on the same instance in a one-system architecture, or SEM and BW can be on different servers in a two-system architecture. If you mostly want a BI and data warehousing tool with SAP standard content, then you are talking more about BW. SEM is composed of five components/modules:
1. BPS - Business Planning and Simulation
2. CPM - Corporate Performance Monitor, with the Balanced Scorecard and Management Cockpit
3. BCS - Business Consolidation
4. BIC - Business Information Collection (able to link documents, etc.)
5. SRM - Stakeholder Relationship Management
The main focus so far is on BPS, and there are an increasing number of pilots and projects on CPM and BCS. BIC may become a main focus, and SRM is still very new. SAP's BW-SEM does not go directly against BusinessObjects and MicroStrategy, although there is some overlap and competition. For data warehousing and reporting, some customers use BusinessObjects, Cognos, arcplan, or Crystal Reports as an alternative front end to BW's Business Explorer Analyzer, which is based on Excel. The new version, BW 3.0, also has some web-based functions for query definition.

11) Tablespaces in SAP: I was made to understand that there are only two tablespaces in SAP - one for data and the second one for indexes. Can we have more than two tablespaces?

Sol: Yes, you can add whatever custom tablespaces you want. As a matter of fact, this is a recommendation from SAP: if you have VERY large tables, they should be in tablespaces of their own. There are a couple of different scenarios:
1. Large custom tables that need a separate tablespace (in case you don't want to keep them in PSAPUSERxx).
2. Large SAP tables that you want to manage separately.
You can check the SAP documentation or the OSS notes for further assistance.

12) Transferring legacy data: Where can I find information about transferring data from a legacy system to SAP R/3 4.6C? Is there a sample of the text file needed by the batch input program when transferring the legacy data?

Sol: 1) If you are using LSMW or CATT to transfer your legacy system data, you can generate the layout for your text file from SAP. I suggest that you download "Data Transfer Made Easy" from SAP Labs (http://www.saplabs.com/).
2) Adapted from a response by Michael on Mon, 21 May 2001 (http://www.openitx.com/archives/archives.asp?i=43027): You can take a look at the 'Data Transfer Made Easy' book. It provides layouts, techniques, etc. for the most common loads. You can find it at the SAP Labs site.
3) Adapted from a response by Mahesh on Mon, 21 May 2001 (http://www.openitx.com/archives/archives.asp?i=43028): Use the LSMW (Legacy System Migration Workbench) facility of SAP 4.6C. From my experience it is very convenient for transferring legacy data to SAP. You can also attach BDC programs to it to cater to almost all data migration issues. Also try taking a look at the CATT tool, SAP's Computer Aided Test Tool. This tool can more or less replace the need for batch input programs.

With a little experience you can generate a "test module" that reads data from a tab-delimited text file (e.g. from Excel). The module gives you an example of how your data should look, and it only takes five minutes: you simply record the first manual transaction and then replay the recording using the text file as input.

16) After a successful transport from BW development to QA, when trying to save a query, a message pops up: "Cannot save query due to error in transport." Can someone please help?

Sol: In the Administrator Workbench, under the Transport Connection, you will see two buttons on the toolbar that open windows which need to be maintained for this problem. One button, "BEx Request", is where you define the change request into which all saved BEx queries should be put. The other button, "BEx Development Classes", defines the development class into which the BEx request should be put. Once you maintain this information, the error message should go away, and any changes to BEx queries will be recorded in the change request you have created. There is also a button "Object Changeability". In one case the BEx request and classes were maintained, but when Object Changeability was checked, the first entry was set to "Cannot be changed"; once this was changed to "Changeable", everything worked.

17) Calling a BW query from ABAP: Does anybody know how to call a BW query from an ABAP program?

Sol: You can call the cube read function 'RSDRI_CUBE_READ' from within an RFC-enabled function in BW. This function can then be called from R/3. You will have to duplicate any of the query's calculations in your ABAP, but you will get all the cube data (with the exception of attributes, which can be derived). You could also try the reverse: create a report-to-report interface for the query in BW and pass the parameter values of the query navigation step to an ABAP program or transaction on R/3.
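A minimal sketch of such a cube read follows. The function module name comes from the answer above, but the parameter names (I_INFOCUBE, I_TH_SFC, I_TH_SFK, E_T_DATA) and the DDIC types are reproduced from memory, and the cube name ZSALES01 is hypothetical; check the interface in SE37 for your release (in BW 3.x there is also a successor, RSDRI_INFOPROV_READ) before using it.

* Sketch: read 0MATERIAL and 0QUANTITY from a hypothetical cube
* ZSALES01 via RSDRI_CUBE_READ. Parameter names are assumptions.
TYPES: BEGIN OF ty_result,
         material TYPE c LENGTH 18,              " 0MATERIAL
         quantity TYPE p LENGTH 9 DECIMALS 3,    " 0QUANTITY
       END OF ty_result.

DATA: l_th_sfc TYPE rsdri_th_sfc,   " characteristics to read
      l_s_sfc  TYPE rsdri_s_sfc,
      l_th_sfk TYPE rsdri_th_sfk,   " key figures to read
      l_s_sfk  TYPE rsdri_s_sfk,
      l_t_data TYPE STANDARD TABLE OF ty_result.

l_s_sfc-chanm    = '0MATERIAL'.     " InfoObject name
l_s_sfc-chaalias = 'MATERIAL'.      " field name in the result
INSERT l_s_sfc INTO TABLE l_th_sfc.

l_s_sfk-kyfnm    = '0QUANTITY'.
l_s_sfk-kyfalias = 'QUANTITY'.
l_s_sfk-aggr     = 'SUM'.           " aggregate in the database
INSERT l_s_sfk INTO TABLE l_th_sfk.

CALL FUNCTION 'RSDRI_CUBE_READ'
  EXPORTING
    i_infocube = 'ZSALES01'         " hypothetical cube name
    i_th_sfc   = l_th_sfc
    i_th_sfk   = l_th_sfk
  IMPORTING
    e_t_data   = l_t_data
  EXCEPTIONS
    OTHERS     = 1.

Wrap a call like this in an RFC-enabled function module and it can be invoked from R/3, as the answer suggests.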
18) I am having a serious problem. I am loading master data to 0VENDOR, and some names of European companies contain special characters. Since these are not standard characters, I find it quite difficult to eliminate them. I believe there is a function module that converts such special characters to normal alphabetic characters, but I don't know its name; I tried my best to search, to no avail. Could anyone help me with this? As a result, the data cannot be uploaded from the PSA to the cube.

Sol: There is a very nice transaction for this: try RSKC.

19) We use an APO system and the BW component of the APO system. I want to display some data from an APO table directly. For this I created a DataSource via transaction SBIW; the InfoSource, basic InfoCube, update rules, and InfoPackage are completed. Then I tried to load data into the PSA. The request is still in the running stage, and the monitor shows the following, but there is no short dump.
Info in the monitor, Status tab: "Data not received in PSA table. Diagnosis: Data has not been updated in the PSA table. The request may still be running, or there was a short dump in BW. Procedure: Look for the short dump belonging to your data request in the short dump overview in BW. Pay attention to the correct date and time in the selection screen. You can get to the short dump list either using the Wizard or by following the menu path Environment -> Short dump -> Data Warehouse. Removing errors: Follow the instructions in the short dump."
Info in the monitor, Detail tab: "Extraction message - error occurred; Processing data - no data." But if I use the same DataSource to create a RemoteCube and display the data via LISTCUBE, I am able to see the data from that same DataSource.

Sol: Please find out whether the PSA tables are created and active. To check this, go to the PSA from transaction RSA1. You can also check the tables using transaction SE11; note that PSAs are transparent tables, one detailed and one summarized. If they are not active, repeat the activation from the PSA screen. Also check whether your InfoPackage is set for the PSA update.

Follow-up: Thanks for your help. I checked whether the PSA table is activated; it is. Also, in the InfoPackage I had chosen the option "Only PSA" with "Update Subsequently in Data Targets". I believe everything is OK, but whenever I go to the monitor, the system asks for a login ID. After I log in, the monitor reports that data has not been updated in the PSA. In the Detail tab I see the warning "Extraction message (error occurred)"; inside it shows 1 record sent and 0 records received, plus "Processing data packet (no data)". Could you please help with this? When I use the same DataSource for a RemoteCube, I am able to see the data; I am facing all of these problems only with the basic cube. Please help with your feedback.

20) I have to convert from a PO price unit to a material cost unit, and both are often non-standard. For example, we can have $25/lb (PO price) and $100/kg (material standard cost), or another record can have $20/kg (PO price) and $50/lb (material standard cost). It is also not always true that only kilograms and pounds are used: we also have EA (each), G (grams), ST, and several other units, but all of them need to be converted into the material standard cost unit. Can anyone advise me on the best place to write the code and what the logic should be? I am very bad at coding and just cannot think of any logic. Please help.

Sol: From SE37, look at CONVERSION_FACTOR_GET and UNIT_CONVERSION_SIMPLE. For every amount or quantity field that you import into BW (e.g. via the standard extractors), there is also a currency/unit field. This information is already added in R/3, as you can see with RSA6 (in the R/3 system). Take an extractor and view the fields, then double-click on a field with an amount or quantity. A window will pop up that shows you the data element, the domain, and also a reference to the unit or currency field; in many cases this is a field with a name like WAERS or something similar. If the target unit/currency is not already in the cube, you will need to add the appropriate field (e.g. 0CURRENCY or 0UNIT). If the appropriate fields are already available, you may want to change the content of the field (like x inches = y meters); in this case you will have to change both the quantity and the unit field (or the amount and the currency field). You can add the unit conversion to the transfer rules or to the update rules; this depends on many things, and you must decide where to do the conversion. A sketch of such a routine follows.
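A minimal sketch of a quantity conversion inside an update routine, using the standard function module UNIT_CONVERSION_SIMPLE named in the answer. The COMM_STRUCTURE field names (quantity, unit) and the fixed target unit 'KG' are assumptions for illustration; in a real update rule the communication structure is generated from your InfoSource.

* Update routine sketch: convert the incoming quantity to KG.
* comm_structure-quantity / -unit are hypothetical field names.
DATA l_qty_out TYPE p LENGTH 9 DECIMALS 3.

CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input                = comm_structure-quantity
    unit_in              = comm_structure-unit
    unit_out             = 'KG'
  IMPORTING
    output               = l_qty_out
  EXCEPTIONS
    conversion_not_found = 1
    units_missing        = 2
    unit_in_not_found    = 3
    unit_out_not_found   = 4
    OTHERS               = 5.

IF sy-subrc = 0.
  result = l_qty_out.       " key figure result of the routine
ELSE.
  returncode = 1.           " flag/skip the record
ENDIF.

Note that UNIT_CONVERSION_SIMPLE only converts within one dimension (e.g. lb to kg); converting EA or other material-specific units needs the material's own conversion factors (table MARM in R/3), which is where CONVERSION_FACTOR_GET comes in.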
21) What is the feature of the delta queue?

Sol: It collects the data from R/3 and stores it within the delta queue in R/3 until the next delta extract. The DataSource reads the delta queue data and pushes it to BW.

22) How do we see the data that is collected and stored in the delta queue?

Sol: Go to transaction RSA7 on R/3 to view the contents of the delta queue. Note that only extractors that have been successfully initialized will show up in this queue.

23) I am getting an error when I am uploading data to the PSA: "Error in conversion exit CONVERSION_EXIT_CUNIT_INPUT". I have used 0UNIT as the unit for one field. Could you please help me rectify it?

Sol: Does the data you want to upload come from a flat file? If yes, I think the error occurs because you are trying to import data in the wrong format. In SAP, the units you see (KG, L, TON, PCE, BAG, ...) and the units that are saved in the tables (and thus the ones that must be in the flat file) are different. Example (in French): unit displayed: SAC; unit stored: BAG; unit that must be in the flat file: BAG. You will find all the correspondences in transaction CUNI. Hope it helps. Also, the units field in the file should come before the quantity field. You can test the conversion yourself as sketched below.
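You can reproduce what BW does with the unit during the load by calling the conversion exit named in the error message yourself. A minimal test sketch:

* Test how a unit value from a flat file is converted internally.
* CONVERSION_EXIT_CUNIT_INPUT is the exit from the load error.
DATA: l_unit_ext(3) TYPE c VALUE 'KG',
      l_unit_int(3) TYPE c.

CALL FUNCTION 'CONVERSION_EXIT_CUNIT_INPUT'
  EXPORTING
    input          = l_unit_ext
    language       = sy-langu
  IMPORTING
    output         = l_unit_int
  EXCEPTIONS
    unit_not_found = 1
    OTHERS         = 2.

IF sy-subrc <> 0.
  WRITE: / 'Unit', l_unit_ext, 'is not a valid external unit'.
ELSE.
  WRITE: / 'External', l_unit_ext, '-> internal', l_unit_int.
ENDIF.

If the exit raises UNIT_NOT_FOUND for a value from your file, that value is what is breaking the PSA upload.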

24) I am in the process of creating a data model based on the following DataSources: 0FI_AP_4, a custom DataSource based on R/3 table EKBE, and a custom DataSource based on R/3 table PAYR. I am confused about how best to relate these DataSources to bring the data together in one place. The DataSources on EKBE and PAYR do not have a delta update, whereas 0FI_AP_4 does. So far I have created an ODS on each DataSource and thought of putting the data in a cube, but there I have problems mapping the characteristics.

Sol: I would suggest that the EKBE and PAYR data go into a separate customized cube and that the Business Content information remain as it is. You can combine the Business Content cube and the customized cube in a MultiCube; it becomes much easier to maintain both cubes separately. As far as delta is concerned, I would suggest the following:
1. Always do the full load, every day.
2. Doing a full load is time consuming; therefore, you must find some attributes which can be used to minimize the full data load, for example fiscal year: load the full data once for 2001 and 2002, and load 2003 every day. Please note that this is just an example.
SAP, as of PI 2001.2, provides an option to create a delta for a generic DataSource based on a date or timestamp. If you have a date/time field in EKBE, then delta creation is easy. You could also think of creating your own delta by maintaining your own tables in R/3 which hold an image of the prior data already sent to the cube; this may be a little difficult. In our case, we simplified the delta process using fiscal year, completion status, etc., and so far so good. Data modeling is an art: no solution is the one right solution, and every solution has advantages and disadvantages. It depends on the reporting requirements, and I believe that it is always an iterative process. Surely some people will disagree with me.

25) Can anyone tell me the name of the function module or program with which queries can be scheduled in the background and a mail sent to the user?

Sol: This can be done via the Reporting Agent (it reports on exceptions, but define the exceptions broadly enough and you will get everything).

26) I have read in a few places: delete the indexes, load the data, and recreate the indexes. This is done to improve performance. I could not understand this 100%; it was a question asked of me in an interview. Why do we have to do this?

Sol: 1. Deleting the indexes saves time during data loading, especially for large volumes of data, because the database does not have to maintain the indexes row by row during the load. 2. Recreating the indexes afterwards reduces the time needed to read the data (query execution) and provides good performance. A sketch of how this can be automated is shown below.
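In BW 3.x this drop/load/rebuild cycle is usually scheduled as steps in a process chain, but it can also be scripted. The sketch below uses the function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR, which I believe serve this purpose, with an assumed parameter name I_INFOCUBE and a hypothetical cube ZSALES01; verify both interfaces in SE37 before relying on this.

* Sketch: drop a cube's fact-table indexes before a large load
* and rebuild them afterwards. FM interfaces assumed - check SE37.
CONSTANTS c_cube(30) TYPE c VALUE 'ZSALES01'.   " hypothetical cube

* 1. Drop the secondary indexes so the load runs unencumbered.
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = c_cube.

* 2. ... run the InfoPackage load here (e.g. via a process chain) ...

* 3. Rebuild the indexes so queries read efficiently again.
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
  EXPORTING
    i_infocube = c_cube.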

27) I want to extract billing information. I can't use LIS extraction (it is not active). I would like to use the LO Cockpit extractors (MC13VD0HDR and MC13VD0ITM, i.e. 2LIS_13_VDHDR and 2LIS_13_VDITM). I need to add data from the KONV table. Can I add this data by enhancing the DataSource and writing the corresponding code in the user exit for extraction (as in LIS extraction), or do I have to consider anything else?

Sol: It is not possible to extract the conditions from the KONV table straight away; please refer to the various OSS notes, which are loud and clear on this. However, there are several ways of getting this information into the system:
1. You can use the CO-PA InfoCube to report sales and order condition information. The prerequisite is that each condition type must be assigned to a separate value field.
2. Create an ODS for the KONV conditions. Create a generic extractor which either looks at the KONV table directly (which is very time consuming, as KONV is not a transparent table) or create a Z table in R/3 which extracts the information from the R/3 SD tables on the basis of change time. The data can then be loaded to the ODS using key-field overwrites.

27) I have a job running a load from one ODS to another. The source ODS is loaded from R/3 at 0:00 AM and it hardly takes 2-3 minutes; the ODS-to-ODS job is scheduled at 2 AM, yet it still gives the message "Processing (data packet): No data". If I run it manually, it works fine. Can you please give your input?

Sol: It seems that no data was extracted from R/3 in the first place. Did you check the active data table of your first ODS to see whether any data was actually loaded? Do you have any report that runs and pulls data from your first ODS?

28) Can anyone let me know how to stop periodic scheduling? Say we have specified the date and time in R/3 in the job control for the LO Cockpit (i.e. in LBWE), and now we don't want it to run. How do we stop that in R/3 as well as in BW?

Sol: Use transaction SM37.

29) Has anyone created a hierarchy using more than one characteristic as reporting nodes? What I need to do is have a reporting node for company (0COMP_CODE) and have, as lower levels of that node, reporting nodes for the plants within that company (0PLANT). I have maintained the 0COMP_CODE InfoObject and assigned 0PLANT as an external characteristic in hierarchies, but this only allows me to have 0PLANT as a non-reporting node in the hierarchy.

Sol: Have you tried this option? You can display characteristics as a hierarchy in BEx by right-clicking on the rows structure and choosing "hierarchy".

30) This white paper does a great job of explaining Compaq's perspective on several key factors and strategies to consider when sizing an SAP BW system. "Considerations for Sizing an SAP BW Solution": Planning for and deploying a large-scale SAP BW (Business Information Warehouse) solution is a complex, iterative process. Sizing the system resources is a major part of the process, complicated by the fact that there are no hard and fast rules, due to the uniqueness of each data warehousing situation. In contrast to traditional OLTP systems (such as SAP R/3), whose resource needs follow a fairly predictable and measurable path, OLAP systems (such as SAP BW) tend to grow exponentially with use and experience. The following is Compaq's perspective on several key factors and strategies to consider when sizing an SAP BW system.

1) Does anyone know what the purpose of a lookup table is, and when we would use one in routines? An example would be great. What is the difference between a return table and a lookup table?

Sol: I was not sure at first what you meant by a lookup table, but a return table is used to post many lines into a cube for one line from the InfoSource. Example: you receive this information (a kind of planning data): Year | Sales Volume | Sales Organization. But in your cube you have these characteristics: Year | Sales Volume | Salesman, and salesmen belong to a sales organization. With a return table, you can split the sales volume into several lines (as many lines as there are salesmen in the sales organization). With ABAP code, you determine how many salesmen there are in this sales organization and which ones they are; then you fill the return table (an internal table) with the divided sales volume and post all the data: one line in the InfoSource, many lines in the cube (see the routine sketch after this answer). As for the basis on which you divide the sales volume among the salesmen: yes, you need business rules for that. In my example, it consists of finding out how many salesmen the sales organization has and their codes; then you fill the internal table as many times as there are salesmen, with their identifiers. A lookup table, which was the question asked in an interview, may simply mean looking into a transparent table to get values (conversion with rules, etc.); there are many possibilities. I would load the entire table into an internal table and then read the internal table instead of reading the database repeatedly - better performance.
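A rough sketch of such a return-table update routine. In BW 3.x, when an update rule is flagged as a routine with a return table, the generated form hands you the communication structure and expects you to fill the result table; the type names (zcs_sales, ztt_sales_result, zs_sales_result), the custom mapping table ZSALESMEN, and all field names below are hypothetical placeholders for whatever your system generates.

* Update routine with return table (BW 3.x sketch).
* One InfoSource line (year, volume, sales org) becomes one cube
* line per salesman. All Z* names are hypothetical.
FORM compute_key_figure
  USING    comm_structure TYPE zcs_sales          " generated CS type
  CHANGING result_table   TYPE ztt_sales_result   " lines posted to cube
           returncode     LIKE sy-subrc
           abort          LIKE sy-subrc.

  DATA: l_t_men TYPE STANDARD TABLE OF zsalesmen, " custom mapping table
        l_s_man TYPE zsalesmen,
        l_s_res TYPE zs_sales_result,
        l_lines TYPE i.

  " Business rule: which salesmen belong to the incoming sales org?
  SELECT * FROM zsalesmen INTO TABLE l_t_men
    WHERE salesorg = comm_structure-salesorg.

  DESCRIBE TABLE l_t_men LINES l_lines.
  IF l_lines = 0.
    returncode = 1.               " no mapping found: skip the record
    EXIT.
  ENDIF.

  " One result line per salesman, volume divided evenly.
  LOOP AT l_t_men INTO l_s_man.
    l_s_res-year     = comm_structure-year.
    l_s_res-salesman = l_s_man-salesman.
    l_s_res-volume   = comm_structure-volume / l_lines.
    APPEND l_s_res TO result_table.
  ENDLOOP.
ENDFORM.

For performance, the SELECT would normally be replaced by a read from an internal table buffered in the routine's global part, as the answer suggests.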
2) I just noticed something a bit strange in the scheduler processing modes, release 2.1C, and was wondering whether someone could give me some clarification. I am creating an InfoPackage based on a Cockpit extractor with delta. When trying to select the processing mode, the only mode I can select is 'PSA only...'. This happens only when my target is an ODS, not when it is a cube. When trying to save, I get the following message: "Loading not possible - choose transfer method PSA. The DataSource requires serialization with delta process 1. You can, therefore, load data only into the PSA, because this is the only way to guarantee serialization. This is necessary, above all, for delta loading." Does anyone have an explanation? Why does it happen only with ODS targets?

Sol: Because in a cube you only add records, so it does not matter in which order they come from the source system; in an ODS, on the other hand, you sometimes delete or change records, depending on 0RECORDMODE, so the sequence of records is important.

3) I loaded data from the two DataSources 2LIS_12_VCITM and 2LIS_12_VCSCL into the InfoCube 0SD_C04. When I ran a sample query on this cube to view material, customer, ship date, and quantity, for some materials I found a blank date with some quantity delivered. But when I cross-check my query result with LIPS, dates are available for those fields. I am on 3.1C. Any ideas?

Sol: Have you tried using RSA3 to check whether or not the data is actually received from the extractor?

4) Our change run failed and we are stuck because there is a lock. I've tried the usual suspects (SM12, RS12, SM50, etc.) and even had the Oracle server bounced... and still there is a lock! How? Any solutions?

Sol: Delete the entry in table RSDDAGGRMODSTATE where the "change run finished" field is not marked with an 'X'. Then you will be able to run the change run again. I've also found that the table RSDDCHNGPROT (which records the change run targets) can have spurious entries too when this is the case.

5) I am using BW 3.0. I have a cube with 3 aggregates. One of the aggregates cannot be activated by any means. I found that the issue is with the P index of the F fact table of this aggregate, but because the aggregate is not activated, no transaction is able to repair it, since it cannot find it (RSRV, RSDCUBE, repair aggregate indexes, ...). I found notes about the P index problem, but they always refer to InfoCubes or activated aggregates, where those transactions work. I am not in development, so it is not easy to make a copy and delete it. What happened before this problem was this: in the cube, one request was not compressed, but it was compressed in the aggregates. The request was deleted from the cube, and later another load (not an init) was made, but the aggregates did not change. So I deactivated them and then activated them again. One of them could not be activated; the other aggregates activated correctly.

Sol: Have your Basis person create a P index for you. I had the same problem because the index was not available. Once Basis has created the index, BW should allow you to save and use the aggregate. - That is a good idea, but the aggregate is in integration, so I don't know whether that can be done directly; moreover, the aggregate worked before. Anyway, I will ask whether it is possible. If that does not work: I do not want to solve it through a transport (this time I did, because I needed to load the cube), so I would like to know how to prevent it in the future.

6) We have to do a reload of the InfoCube. We have new transfer rules for the InfoSource. If we load from the PSA, it is going to load the data after applying the new rules, but we have too many requests in the cube. Does reconstruction of the InfoCube process the transfer rules or not? How does reconstruction work?

Sol: Use reconstruction; it acts as if you were taking the PSA packets one by one manually, so it goes through the rules. - I thought that you would definitely answer, and you did; thank you very much. If we have added new fields to the InfoSource, will the old PSAs still load the data? (We don't care if the new fields have null values.) If reconstruction works the same way as a PSA load, do we need to check whether all the PSAs are available in the system? - The PSA reflects what the DataSource loads into BW. If you have 10 fields in the DataSource, then there are 10 in the PSA and perhaps only 8 in the InfoSource; if you now change the InfoSource to use the 2 remaining fields, there is no problem. If what you did is add one field to the 10 of the DataSource, then you will have a problem. For reconstruction, you need a synchronization point (the init package) and all the subsequent delta packages up to 'now'. If one of the packets is missing, I think you will notice it directly in the reconstruction tab of your InfoCube; in any case, you will not skip a packet by error, because the system will stop as soon as it detects a gap in the packages. If you were working only with full loads, you need to check this manually, I guess, as the system will not prevent you from loading full load 1, skipping full load 2, and then loading full load 3, since there are no links between the loads.

8) We have a requirement to compute a key figure in our query. This works fine when the query output is displayed initially, but when we drill down to a lower level, from cost center to cost elements, the calculation does not work. The reason is that the denominator in the computation is not available in the cube at the cost element level, only at the cost center level. Is there a way to pass a constant value in a variable when a user drills down in a query and use it in the calculated key figure? Any other suggestion is also most welcome.

Sol: You need to first restrict the key figure by the characteristic for which there is no data available in the cube (cost element in my case), with a value of #. Then set the "constant selection" flag for the characteristic by right-clicking it. This forces the key figure value for a blank characteristic value to show up for all remaining characteristic values in the cube. I am still trying to figure out what the constant selection flag in the key figure properties really means; it seems to do some kind of accumulation, and that value remains constant during navigation, but the logic of the accumulation doesn't quite make sense to me.

9) We refresh R/3 from PROD to QA once a month. My question: what will happen to the delta initialization which we did before the refresh? Can we use the same initialization and delta after the refresh, or do we need to re-initialize after each refresh? We are trying to avoid initializing again after each refresh. Could someone please explain the best practices before and after an R/3 refresh?

Sol: I think it is not possible; the existing QA will be overwritten by PROD when refreshed. If you have a shortage of hardware, you can try having the existing QA and a copy of PROD on one machine, but of course as two separate systems. Perhaps this alternative is a better choice.

11) I am not able to activate data in the ODS. I am getting the data into [New Data], but when I try to activate the data in the ODS, it does not move the data from [New Data] to [Active Data] and [Change Log].

Sol: Is the data activation slow, is the data activated but not visible in [Active Data], or does the activation give you an error? If you get an error, list the detailed error here along with the steps you follow.

12) I am trying to upload master data, but when I schedule the InfoPackage for the master data (i.e. the attributes), it gives the following problem: "Error message when processing in the Business Warehouse. Diagnosis: An error occurred in the SAP BW when processing the data. The error is documented in an error message. System response: The error message(s) was/were sent by: Update Procedure. Check the error message (pushbutton below the text). Select the message in the message dialog box and select the long text for further information. Follow the instructions in the message." Can anyone shed some light on how to rectify this problem?

Sol: Is it a direct update or a flexible update? Is the data from R/3, flat files, or another source? What is the typology of the attributes: name, type, time-dependent, ...? Please post everything, plus the transfer rules and the update rules if they exist.

13) If I am using the standard InfoCubes, such as the Sales Overview InfoCube, do I have to map the 0RECORDMODE field in the transfer rules? If so, to which field in the communication structure should I map 0RECORDMODE?

Sol: Search for 0RECORDMODE in this forum or in the OSS notes. You can find 0RECORDMODE in the data structure/transfer rules; it is a system-defined InfoObject. Pull it into the communication structure and activate it there, and it will be available in the transfer rules.

15) I defined a date as a key figure in the format YYYYMMDD. I am able to view the data in my ODS, but when I try to create a query, the date format is wrong.

Sol: Maybe you get a number? What does your wrong format look like? If it is a number, display your figure with a formula; among the formula functions you will find options to convert dates to numbers and numbers to dates.

16) I have two basic cubes joined by a MultiCube. The sales cube has the sales history key figure (and various characteristics). The saved-plans cube has the consensus plan key figure, plus one characteristic that the sales cube does not have: the plan month, which is written as a value to select by when the InfoPackage writes data to the cube. When I run a query with the sales key figure from the sales cube and the consensus plan from the saved-plans cube, both key figure values are returned. When I insert the plan month as a characteristic in the query, I lose my sales history, presumably because that characteristic does not exist in the sales cube. I have searched the forum and tried to look at the identification of the characteristics in the MultiCube, etc., but to no avail.

Sol: I think you answered your own question. When you add a characteristic that does not exist in one cube, of course you are not going to see data for your sales history, as that data does not contain this characteristic. You are essentially wanting to report on something that you don't have.

17) I am trying to activate the data in an ODS. I am hitting the error message: "DataSource 80FITX_O03, ... does not exist in source system of version A". Any idea how to resolve this issue?

Sol: Replicate your data mart DataSources.

20) How do I set up the sender/receiver assignment in the Business Explorer? I don't see any option for this under Business Explorer -> Query.
Sol: Hi Pete, you set up the sender/receiver assignment for jumping between queries in transaction RSBBS. You don't set it up in the front end (BEx), but on the back end (the BW server).
