
ADVANCED DATA MODELING IN SAP HANA

DMM360
Exercises / Solutions
Las Vegas / Srinivas Rapthadu / SAP Labs US
Las Vegas / Werner Steyn / SAP Labs US
Bangalore / Vaishnavi Dharmaraja / SAP Labs India
Bangalore / Rajesh Panditi / SAP Labs India
Barcelona / Yves Augustin / SAP SE
Barcelona / Christoph Morgen / SAP SE


BEFORE YOU START


In this hands-on workshop you have the opportunity to work on different exercises depending on your
level of experience and interest. Due to time constraints we recommend that you look through the
different exercises and then decide which ones you want to work on first. The exercises are
broken down into 3 tracks (Cyclist, Snowboarder & Astronaut) of increasing difficulty; each track
consists of 5 independent exercises. You should be able to finish 1 track within the allowed time; you
may also do as many exercises as you like or select exercises from different tracks.

CYCLIST
Exercise 1-1: ATM Alerts (10 Minutes / 33 steps)
Detects unusually high and low bank withdrawal transactions.
Includes the usage of the Debugger, Visualize/Explain plan, SQL expressions, and filters.
Exercise 1-2: ATM Activity (10 Minutes / 32 steps)
Calculates and compares the number of withdrawal vs deposit transactions per month.
2 approaches (Restricted Key Figures vs Union branches), conditional exception-aggregation.
Exercise 1-3: Soccer players (20 Minutes / 45 steps)
Calculates the most valuable soccer player across teams.
Includes the Rank node & Visualize plan.
Exercise 1-4: CO-PA Analyses (30 Minutes / 38 steps)
Showcases Actual vs Planned profitability Analyses.
Includes Union, Star-Join, Text-Join (multi-language support).
Exercise 1-5: Market Basket Analysis (30 Minutes / 20 steps)
Analyze product combinations that frequently co-occur in transactions.
Exception aggregation, multi-level aggregation, currency conversion.

SNOWBOARDER
Exercise 2-1: Data-Column Masking (10 Minutes / 26 steps)
Column level security, obfuscate sensitive column content based on user rights.
Stored procedure, input parameters, expression syntax.
Exercise 2-2: Cumulative Sales & Slowly Changing Dimensions (20 Minutes / 74 steps)
Calculate rolling sales by month and product.
Temporal Join, Dynamic Join, Keep Flag & Transparent filter Flag.
Exercise 2-3: Dynamic Time Based Analysis (20 Minutes / 44 steps)
Current Year Quarter sales vs Previous Year Quarter Sales.
Time series, SQL table functions.
Exercise 2-4: Restricted Regional Sales report (20 Minutes / 53 steps)
Row level security, restrict users to their assigned regional areas.
SQL Analytical Privileges, Roles, debugging authorization errors.
Exercise 2-5: Dilbert HR Org chart (20 Minutes / 59 steps)
Calculates total sales for both managers and their employees.
SQL Analytical Privileges, SQL Hierarchies and SQL-Hierarchy Prompts.


ASTRONAUT
Exercise 3-1: Hot vs Cold sales data (20 Minutes / 49 steps)
Model current sales vs historical sales using Smart Data Access design principles.
UNION pruning configuration table, explicit, implicit pruning.
Exercise 3-2: Analyze US presidential State of the Union speeches (15 Minutes / 40 steps)
Calculate linguistic differences by analyzing unstructured text data.
Text Analysis, exception aggregation.
Exercise 3-3: UFO sighting predictions (45 Minutes / 106 steps)
Analyze and predict historic UFO sightings near airports using GEO Spatial capabilities.
Spatial Joins, Spatial expressions, Predictive Analytics.
Exercise 3-4: Inventory Management - FIFO (20 Minutes / 23 steps)
Calculate the profitability by item (cost vs sales) using First-In-First-Out (FIFO) method.
Advanced SQL knowledge, functions, and time series.
Exercise 3-5: Non-cumulative Daily stock balance (20 Minutes / 36 steps)
Calculates the daily stock balance per product based on inflow and outflow.
Advanced SQL knowledge, functions, cross-joins, dynamic exploding and imploding of data.


SETUP
Total steps (11) - 5 Minutes

1. Start the SAP HANA Studio using the shortcut SAP HANA Studio DMM360:
Start > Search for DMM360 > Click SAP HANA Studio DMM360

2. Log on to SAP HANA (M43) using the sample user DMM360_BI:
Within the Systems view select user DMM360_BI > Right Click > Log On

3. Enter the password: Welcome16

4. Add your assigned modeler user:
Select DMM360_BI > Right Click > Add System with Different User

5. Enter your User Name (replace _XX with your assigned User ID)
Password: Welcome16

6. Click Finish

7. Create a local Repository workspace:
Within the Repositories view select your own User Connection > Right Click > Create Repository Workspace

8. Select the SAP HANA System M43 (your connection)
- Check the Use Default Workspace checkbox
- Browse & select the Workspace Root directory D:\Files\Session\DMM360\Repository

9. Check out the exercise material:
Make sure to work with the (Default) Repository.
Select both the sol and wsX directories > Right Click > Check Out
(Note: Replace wsX with your assigned WORKSHOP number)

10. Click on the Navigator view and familiarize yourself with the workshop material.
Important! Use the Navigator view to create the content for this workshop instead of the Systems view.
11. Within the Navigator view
expand the sol folder, notice
the 15 exercises broken down
into 3 tracks.
Your assigned work area
(package) is located under
folder dmm360\ws<x>\<xx>
(Note: Replace x with your
assigned workshop number
and replace xx with your
assigned User ID)


EXERCISE 1-1
ATM WITHDRAWAL ALERTS [10 MINUTES]
In this exercise the solution is already done for you; you will review the model and use the debugger,
plan visualization, and explain plan to look at query execution. The model consists of multiple levels of
aggregation and parallel calculations: the average daily withdrawal and the total daily withdrawal are
calculated per customer, followed by the withdrawal deviation, which is calculated by subtracting the
average daily withdrawal from the total daily withdrawal. The results show deviations that are both
higher and lower than the average withdrawal amount.
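The deviation logic can be sketched in plain SQL. This is a minimal sketch only; the table and column names are hypothetical stand-ins for the AtmActivity dataset used by the model:

-- Hypothetical sketch: deviation of each day's withdrawals from the account's daily average
SELECT d."Account", d."Posted",
       d."DailyTotal" - a."DailyAverage" AS "Deviation"
FROM (SELECT "Account", "Posted", SUM("Amount") AS "DailyTotal"
        FROM "AtmActivity" WHERE "Activity" = 'W'
       GROUP BY "Account", "Posted") d
JOIN (SELECT "Account", SUM("Amount") / COUNT(*) AS "DailyAverage"
        FROM "AtmActivity" WHERE "Activity" = 'W'
       GROUP BY "Account") a
  ON d."Account" = a."Account";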

Total steps (33)



1. Open CvAtmAlertQuery
Within the Navigator view expand folder dmm360 > sol > 1-1

2. Review the sample dataset of Withdrawals:
Right Click on the AtmActivity table within the Withdrawals projection node > Data Preview

3. Click on the Raw Data tab > Notice (for Account 1000001) the unusual withdrawal transactions of $800 on Jan 14th and $10 on Jan 26th.

4. Select the Daily Average Aggregation Node

5. Notice the Posted Count Column. Count Aggregation is used to calculate the total number of withdrawal transactions.

6. TotalWithDrawal is the aggregated sum of all withdrawals

7. The Calculated Column DailyWithDrawalAverage is calculated by dividing TotalWithDrawal by the Posted Count.

8. Right Click on the Daily Average Aggregation Node > Preview.

9. The result shows the Total and Daily Averages and the total number of withdrawals per account.

10. Select the Join Node

11. The left branch of the Join node (inner join) calculates the Sum of all withdrawals per account. This calculation is used in the final calculation.

12. Select the topmost aggregation node.

13. The final deviation amount is calculated using the Daily Withdrawal Average minus the Total Daily Withdrawal amount.

14. Start the debugger to follow the calculation execution flow

15. Click Execute

16. Select the Daily Average Node

17. On the right hand side of the screen within the Node Query tab click Execute.
Note: The results are shown: total withdrawals, average daily withdrawal, and number of withdrawals.

18. Select the Join node

19. Click Execute to see the join results

20. Click on the topmost aggregation node > Execute
Notice the deviations on January 14th, 15th, and 26th for Account 1000001
Notice the deviations on January 1 for Account 1000002

21. Close the Debugger and execute the Explain Plan:
Open 1-1.sql, highlight the SQL-1 statement > Right Click > Explain Plan
Notice the WHERE clause filter

22. Notice that both the WHERE clause Account filter and the Activity constraint filter are passed down to filter the dataset.

23. There are 25 total transactions; after the filter is applied only 12 records are brought into context.

24. Execute the Visualize Plan:
Highlight the previous SQL statement > Right Click > Visualize Plan > Execute

25. Click No. Do not change the perspective.

26. Click on Executed Plan

27. Open the Operator List view:
Window > Show View > Other > SAP HANA PlanViz > Operator List

28. Select Operator List

29. (Optional) Drag the Operator List view to the left side of the screen.

30. Within the Operator Name search field, enter Basic Predicate and press Enter

31. When the results appear below > Double Click on the third Basic Predicate line.

32. The Execution Plan (in the top area of the screen) indicates that both filters (Activity & Account) are applied at the lowest database table level.

EXERCISE 1-2
ATM WITHDRAWAL ACTIVITY [10 MINUTES]
This exercise demonstrates how to calculate measures depending on certain dimensional values and
showcases 2 common modeling techniques (Restricted Columns vs Unions). Using the ATM
withdrawal data set from the previous exercise you are tasked to compare the Amount Deposited
against the Amount Withdrawn, including the number of Withdrawal and Deposit transactions per
Account.

Total steps (32)



1. Expand dmm360 > sol > 1-2 > Open CvAtmActiviyUsingRestricedColumnQuery

2. Select the Aggregation Node > on the right side in the details area > Click Data Preview

3. The requirement is to create the following measures:
- Total Deposit Amount
- Total Withdrawal Amount
- Total Deposit transactions
- Total Withdrawal transactions

4. In the model Double Click on the Withdrawn Restricted Column. Notice the Amount is calculated only for Activities of type W.

5. Similarly review the Deposit Restricted Column. Notice the Amount is calculated only for Activities of type D.

6. Open the Wcount Calculated Column. Notice the type is a measure and it will only count when the Activity equals W.
Note: The expression will return either 1 or 0

7. Similarly the Dcount Calculated Column will only count when the Activity equals D. (A plain-SQL sketch of these conditional measures follows below.)
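In plain SQL both modeling techniques collapse to conditional aggregation. A minimal sketch (the table name is a hypothetical stand-in):

-- Conditional aggregation equivalent of the restricted and calculated columns
SELECT "Account",
       SUM(CASE WHEN "Activity" = 'D' THEN "Amount" ELSE 0 END) AS "Deposited",
       SUM(CASE WHEN "Activity" = 'W' THEN "Amount" ELSE 0 END) AS "Withdrawn",
       SUM(CASE WHEN "Activity" = 'D' THEN 1 ELSE 0 END) AS "Dcount",
       SUM(CASE WHEN "Activity" = 'W' THEN 1 ELSE 0 END) AS "Wcount"
FROM "AtmActivity"
GROUP BY "Account";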

8. Preview the model.

9. Your assignment: Create a model using Unions (instead of restricted columns) to render the same results.

10. Copy 1-2.sql and SAMPLE_CvAtmActivityUsingUnionQuery into your assigned work area package

11. Rename the Calculation Model by removing the prefix SAMPLE_
Hint: Right Click on the model > Rename

12. Open the Calculation Model.
The model is half done; it already includes the Withdrawal branch. Instead of using restricted columns, individual input source branches are used. Complete the model by adding a Deposit branch.

13. Drag an Aggregation node into the scenario work area.

14. Rename the Aggregation node > Deposit

15. Add the AtmActivity table to the Aggregation node.

16. With the Deposit node selected add the following columns to the Output:
Account (Attribute)
Posted (Attribute)
Activity (Attribute)
Amount (Measure)
Hint: Add the Amount column as an aggregated column.

17. Rename the Amount column > Deposited

18. Set the Keep Flag to True for the Posted Column.
Hint: Deposits of $500 were made to account 100001 on January 1st and 15th; due to aggregation, if the Date column is not brought into context then the count would be 1, which is incorrect. The Keep Flag will force the Date column into the context even though the front-end query did not request the column. (See the SQL sketch below.)
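The effect of the Keep Flag can be reproduced in SQL by comparing the two grouping levels. A sketch with hypothetical names:

-- Two $500 deposits exist for account 100001 (Jan 1st and Jan 15th).
-- Without the date in the grouping they collapse into one row,
-- so a downstream transaction count would see 1 instead of 2:
SELECT "Account", SUM("Amount") FROM "AtmActivity" GROUP BY "Account";
-- Keeping "Posted" in the grouping preserves both rows and the correct count:
SELECT "Account", "Posted", SUM("Amount") FROM "AtmActivity" GROUP BY "Account", "Posted";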

19. Right Click on Activity > Apply Filter

20. Filter on 'D'

21. Create a Calculated Column called Dcount.

22. Select Integer as the Data Type and enter 1 for the expression

23. Drag a connection line from Deposit to the Union node

24. Select the Union

25. Click Auto Map by Name

26. Result

27. Select the Aggregation node

28. Add Deposited and Dcount as Aggregated Columns to the output

29. Save & Activate

30. Execute (SQL-1) within 1-2.sql
Hint: Replace X with your assigned session ID and XX with your assigned Student ID

31. Results are shown.

EXERCISE 1-3
SOCCER PLAYER GOALS [20 MINUTES]
In this exercise the Rank node is used to determine the most valuable soccer players. Your
assignment is to display only the 2 most valuable players per team. Additionally you are required to
dynamically direct the execution to a separate model for team prompts and unique team queries.

Total steps (45)



1. Expand sol > 1-3
Copy both SAMPLE_CvRankTeamPlayerWithLookUpQuery and 1-3.sql into your work area.

2. Paste the files into your assigned work area package.
Note: Replace X with your assigned workshop number and XX with your assigned User ID

3. Rename the Calculation Model by removing the prefix SAMPLE_
Hint: Right Click > Rename

4. Activate the model
Hint: Highlight the model and click the green horizontal arrow.

5. Open the Calculation Model.

6. The model consists of 2 branches; 1 branch calculates player goals and the other calculates team goals.

7. Select Player > Right Click > Data Preview

8. Select Raw Data
Notice Mario Götze has the most goals in the league. The second highest goal scorer within the same team is Arjen Robben (with 3 goals).

9. Insert a Rank node between the Join node and the Player node.
Hint: Drag the Rank node and drop it precisely on the connection line.

10. Click Yes.
Hint: If you do not see this popup window then repeat the previous step and make sure to drop the Rank node precisely on the connection line.

11. Rename the Rank node > GoalsByPlayer

12. Select the GoalsByPlayer Rank node > within the Details area > add all the Columns to the Output.
Hint: Right Click on Player > Add All To Output

13. As a result the columns appear in the Output area.

14. Configure the Rank node as follows:
Order By: Player Goals
Partition By: Team Name
Select Generate Rank Column

15. Assign the Input Parameter (TOP_N_PLAYERS) as the Threshold value.
Hint: This will allow end users to dynamically decide how many players per team to show. In our example we are only interested in displaying the top 2 players per team. (A window-function sketch of this logic follows below.)
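The Rank node corresponds to a SQL window function. A minimal sketch of the top-N-per-team logic (table and column names are illustrative):

-- Window-function equivalent of the Rank node
SELECT "TeamID", "PlayerName", "PlayerGoals", "Rank_Column"
FROM (SELECT "TeamID", "PlayerName", "PlayerGoals",
             ROW_NUMBER() OVER (PARTITION BY "TeamID"
                                ORDER BY "PlayerGoals" DESC) AS "Rank_Column"
        FROM "Player") p
WHERE "Rank_Column" <= 2;  -- the TOP_N_PLAYERS threshold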

16. (Optional) click on the Join
node > over in the Details area
ensure that the GoalsByPlayer
node is defined as the RIGHT
node. If not right click on
GoalsByPlayer > Swap as Left
Table
17. Select the Join node and Join
the 2 branches using TeamID
Hint: Select the TeamID column
on the left branch and drag it to
the TeamID column on the right
branch.
18. Add all the columns of GoalsByPlayer (except the PlayerID) to the Output

19. Select the Aggregation Node.

20. Add the following columns to the output:
PlayerName (Attribute)
TeamID (Attribute)
TeamName (Attribute)
Rank_Column (Measure)
PlayerGoals (Measure)
TeamGoals (Measure)

21. Save & Activate.

22. Open the 1-3.sql file and execute the SQL-1 statement within the Student SQL section.
Note: Replace # with your assigned workshop number and replace ## with your assigned user ID.

23. Select your connection

24. Expected results are shown:
Notice Mario Götze is first in the list, followed by Robben. A maximum of 2 players per team are shown as a result of the input parameter threshold.

25. Review the Visualize Plan, and specifically pay attention to the team filter:
Highlight the SQL-2 statement > Right Click > Visualize Plan > Execute
Note: Replace # with your assigned workshop number and replace ## with your assigned user ID.

26. Click No if asked.

27. Within the Operator List view > within the Search field > type Search on Table > Press Enter

28. Double-click on the (Search on Table) result line
Hint: To open the Operator List view go to Window > Show View > Other > SAP HANA PlanViz > Operator List
29. Expand Search On Table and
notice the WHERE clause filter
that is pushed down to the
Player Table.

30. Your next assignment: Create a prompt that queries a separate Calculation Model to show the list of teams.

31. Semantics Node > Parameters/Variables > Create a Variable VAR_TeamName

32. Use the predefined calculation model for the value help: dmm360.sol.1-3.CvTeamLookup
Attribute: TeamName
Multiple Entries: Checked
Filter Variable: TeamName

33. Save, Activate & Preview.

34. Supply the prompt values:
Enter 2 for the number of team players.
Select multiple teams (Bayern Munich and Paderborn); in the background the Value Help model is queried to show the list of teams to select from.
Click OK to execute the query

35. Results are shown for only 2 teams and a maximum of 2 players per team.

36. Your next assignment: Assign a value help model to the Team attribute; in situations when a single attribute is queried, execution can be redirected to a separate model.

37. Semantics Node > Select the Team Name Column > Click Assign Value Help View

38. As before select the CvTeamLookup model and use the Team Name as the attribute.

39. Save, Activate and Preview.

40. Choose the following Prompts:
TOP_N_PLAYERS: 2
Team Name: Bayern Munich

41. Click on Raw Data. The results are shown.

42. Within the Analysis tab double click only the Team Name. Remove all columns in the Value axis.
Notice: All 6 teams are queried.

43. Notice when only the Team Name column is selected then execution is directed to the Value Help model instead of the central model.
Hint: Show the Log to see which model is queried.

44. Consequently if additional columns are queried then the central model is executed with the original input parameters.

45. Central model execution.

EXERCISE 1-4
COPA ACTUAL VS PLANNED [30 MINUTES]
This exercise demonstrates the classical ERP CO-PA profitability analysis scenario, providing multi-dimensional insights into a company's product profitability. It implements various modeling techniques
such as Star Joins, Text Joins, and Unions to compare Actual and Planned data.

Total steps (38)



1. Expand sol > 1-4 > Open CvProfitabilityAnalysisCOPAQuery

2. Notice the Union node combines both Actual and Planned sales data.

3. Open CvActuals

4. Notice the single fact table CE1IDEA underneath the Star Join. Open the CvLocation Dimension.

5. Click on the Customer Join node. Notice the Text Join symbol [T]

6. Double click on the Join line > Land1 to Land1

7. Notice the Text Join and the Language Column (Spras). (A rough SQL equivalent of a text join follows below.)
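A text join behaves like an inner join with an additional filter on the language column. A rough SQL equivalent (the customer table name is a stand-in; the literal language key 'E' stands in for the end user's logon language):

-- Approximate SQL equivalent of the text join against T005T
SELECT c."LAND1", t."LANDX"
FROM "Customer" c
JOIN "T005T" t
  ON t."LAND1" = c."LAND1"
 AND t."SPRAS" = 'E';  -- language filter applied automatically by the text join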

8. Close all the models.


9. Your assignment: model Planned Sales.

10. Expand sol > 1-4 > copy 1-4.sql and the 3 SAMPLE_ models into your assigned work area.

11. Rename the models by removing the SAMPLE_ prefix.

12. Open 1-4.sql > under the Student SQL section > execute the solution (SQL-1)

13. Results are shown

14. Open CvLocation

15. Drag a Join node and place it precisely onto the connection line.

16. Select Yes

17. Click on Auto Layout

18. Click the Add icon to add the text table T005T to the Join node.

19. Drag a Join line between both Land1 columns

20. Right Click on Landx > Add to Output

21. Set the Join type: Double click on the Land1 Join line and select Text Join > Select Spras as the Language Column

22. Select the Projection Node > Landx > Add to Output

23. Save & Activate

24. Open CvPlanned

25. Add the previously created CvLocation dimension to the Star Join

26. Search for CvLocation and select the model from your package

27. Drag a join line between KNDNR and KUNNR

28. Double click on the Join line to set the join properties:
Left Outer Join
Cardinality N:1

29. Save & Activate & Preview

30. Open CvProfitabilityAnalysisCOPAQuery

31. Select the Union node and add the previously created CvPlanned model

32. Within the Union node > Click on Auto Map by Name

33. Select the Aggregation node

34. Add all the Planned Measures to the Output as Aggregated Columns.

35. Save & Activate

36. Execute SQL-2

37. Note: Strictly informational
Due to the usage of Text Joins the Landx column can be displayed based on the end user's language preference.

EXERCISE 1-5
BASKET ANALYSIS [30 MINUTES]
This exercise showcases the concept of market basket analysis, in which retailers seek to understand
the purchase behavior of customers. The example below answers the question: How much is the
average customer order when one of the items ordered is a handheld device? In addition your
assignment is to enable currency conversion for this model.

Total steps (21)



1. The following orders are used in this exercise:
- 2 orders included Flat screens
- 3 orders included Handhelds
- 2 orders included Notebooks

2. Expand sol > 1-5 > Copy 1-5.sql & SAMPLE_CvBasketAnalysisWithCurrencyConversion to your assigned work area

3. Rename the model by removing the SAMPLE_ prefix

4. Open the model

5. To calculate the average order total per category, take the sum of all the order totals for that category and divide by the number of orders that included that category. (A SQL sketch of this calculation follows below.)
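A rough SQL sketch of that calculation (table and column names are hypothetical; the inner DISTINCT plays the same role as the Max aggregation node by collapsing the repeated order header amounts):

-- Average order total per category = sum of order totals / number of distinct orders
SELECT "Category",
       SUM("OrderTotal") / COUNT(DISTINCT "SalesOrder") AS "AvgOrderTotal"
FROM (SELECT DISTINCT "Category", "SalesOrder", "OrderTotal"
        FROM "OrderItemsWithCategories") o
GROUP BY "Category";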
6. Select the Max aggregation node and notice how the Order Total is obtained.

7. Notice the Max Aggregation for BasseNetAmount.
Warning: The NetAmount is derived from the Order Header that is joined to the Items and Products tables. This will lead to multiple NetAmount records that will render incorrect Amounts if summed up. Max aggregation will calculate only a single order total value.

8. Select the topmost Aggregation Node

9. Open the Counter Calculated Column. Notice the Sales Order and Category columns are used for the count distinct

10. Open the Sale Calculated Column. Notice the expression

11. Save & Activate!

12. Open 1-5.sql > execute SQL-1

13. Results are shown.

14. Your assignment: Add currency conversion.

15. Semantics > Parameters > Notice the predefined Input Parameter P_TARGET_CURRENCY that is used to dynamically show the desired currency at runtime.

16. Semantics > Columns > Sale > Click on the Open Value Help Dialog icon.

17. Configure Currency Conversion as follows.
Hint: Within the Value Help window change the Type drop down list for additional selections.

18. Save & Activate!

19. Execute SQL-2
Notice the target currency British Pound

20. Results are shown.

21. Optional: Execute SQL-3
Instead of modeling currency conversion, see how to use the SQL currency conversion functions directly.

SELECT CONVERT_CURRENCY(amount=>"Sale",
"SOURCE_UNIT" => 'EUR',
"SCHEMA" => 'DMM360',
"TARGET_UNIT" => 'ZAR',
"REFERENCE_DATE" =>"CreatedOn",
"ERROR_HANDLING"=>'set to null',
"CLIENT" => '000') AS "Sale"
FROM "CvBasketAnalysisQuery


EXERCISE 2-1
DATA MASKING [10 MINUTES]
In this exercise you are tasked to dynamically mask the user's Social Security number and/or ZIP code.

Total steps (26)



1. Expand sol > 2-1 > Copy 2-1.sql and the SAMPLE_ model into your assigned work area.

2. Rename the model by removing the SAMPLE_ prefix.

3. Activate the model

4. Open 2-1.sql and execute the SQL-1 statement.
Notice that in the Masking configuration table the SOC number is flagged to be masked for your User ID.

5. Execute SQL-2
The procedure determines when columns should be flagged:

CALL "DMM360"."dmm360.sol.2-1::SpDataMasking"('SOC',?);
// Result = 1
CALL "DMM360"."dmm360.sol.2-1::SpDataMasking"('ZIP',?);
// Result = 0

6. Review the procedure: Expand sol > 2-1 > Open the SpDataMasking procedure

7. Open the User Account Masked Query Calculation Model in your assigned work area.

8. Preview the User Account input source

9. Notice both columns are visible when querying the model.

10. Proceed to create an Input Parameter > Select Semantics > Parameters/Variables

11. Create a new Input Parameter

12. Name the Input Parameter IP_SOC_MASK and derive the value from a Procedure.

13. Search for and select the SpDataMasking procedure

14. Repeat the previous steps > add a second input parameter called IP_ZIP_MASK (as before derive the value from the stored procedure)

15. Select the User Accounts input source.

16. Create an SQL Calculated Column for each of the required masked columns.

17. Configure the SocNo Calculated Column as follows:
Name: SocNo
Data Type: VARCHAR 12
Language: SQL
Expression: map('$$IP_SOC_MASK$$','1','000-000-0000', "SOC")

18. Configure the ZipCode Calculated Column as follows:
Name: ZipCode
Data Type: VARCHAR 12
Language: SQL
Expression: map('$$IP_ZIP_MASK$$','1','000-000-0000', "ZIP")
(A quick check of the MAP semantics follows below.)
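The MAP function returns its third argument when the first two arguments match, and the final argument otherwise, so the column is only obfuscated when the mask flag is 1. The semantics can be verified directly (sample values are illustrative):

-- MAP(value, compare, result-if-equal, default)
SELECT MAP('1', '1', '000-000-0000', '123-45-6789') FROM DUMMY; -- returns '000-000-0000'
SELECT MAP('0', '1', '000-000-0000', '123-45-6789') FROM DUMMY; -- returns '123-45-6789'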
19. Select the Aggregation node
and add the Calculated
Columns to the Output


20. Select both columns > Right Click > Add to Output

21. Proceed to map the procedure input parameters:
Semantics > Parameters > Manage Mapping

22. In the drop down list select Procedures/Scalar function for input parameters

23. Create 2 Constants (ZIP and SOC)

24. Map the corresponding columns by dragging the COL column from the LEFT to the RIGHT.

25. Save & Activate!

26. Execute SQL-3

EXERCISE 2-2
CUMULATIVE SALES [20 MINUTES]
This exercise showcases sales by month, cumulative sales, and slowly changing dimensions. Notice the optical
illusion: even though the Rolling Sales appear to be rising each month, the Sales by Month report
exposes the fact that sales are declining each time Granny Smith (Apples) were sold. This solution
uses a helper table for the monthly rolling totals, and temporal join functionality is used for the
changing dimensions. The model also demonstrates how to use the Transparent Filter flag, Keep Flag, and
Dynamic Joins.

Total steps (74)



1. Expand sol > 2-2 > open CvSales

2. Review the sample sales dataset:
Select the JoinMonths join node.

3. In the Details area to the right > Select the CumulativeSales table > Right Click > Data Preview

4. Click on Raw Data > Notice:
Total sales for January = $50
Total sales for February = $80
Also notice:
Product 1 sales January = $40
Product 2 sales January = $10

5. Strictly informational: Based on the sample dataset, the expected rolling monthly sales are shown in the screenshot.

6. Strictly informational: Based on the sample dataset, the expected rolling monthly sales by product are shown in the screenshot.

7. Review the helper table that is used to facilitate the rolling sales calculations:
Select the MonthsAndProducts table > Right Click > Data Preview
Notice the concatenated join on Month & Product

8. Sort the Flag column.
If you look at the Flag column and take March as an example, the Month column contains Months 1-3; therefore the rolling sales for March will include the sales from all 3 months.
The helper table also facilitates rolling product sales, and therefore the product column is included in the matrix.

9. (Optional) Notice the data generator utility that is used to populate the physical helper table. (Consequently, instead of materializing the matrix, the data can be dynamically exploded at runtime using the function directly.)
Strictly informational: When rolling sales are required (without the product), see the SQL example to generate the matrix only for rolling months:

-- Month Rolling
select C."Month", M."Month" as "Months"
from "DMM360"."dmm360.db::cds.CumulativeSales" C,
"DMM360"."dmm360.db::cds.CumulativeSales" M
where M."Month"<=C."Month" group by C."Month", M."Month"

10. Review the results of the join between Sales and the helper table: Right Click on JoinMonths > Data Preview

11. Sort the Flag column.
Notice February (flag column value 2, red) includes both Month 1 and Month 2 (red month column). Consequently January (flag column value 1, orange) includes only Month 1 (yellow month column).

12. Review the slowly changing product dimensions:
Select the StarJoin node.

13. Preview the SlowProducts Dimensional Calculation Model.

14. Notice the validity period for Apples. Second, notice a different brand of apples was sold between Feb-April.

15. Review the Temporal Join between the Facts and the SlowProduct Dimension.

16. Double click on the Join line.

17. Temporal Join settings

18. Execute the solution: Expand sol > 2-2 and copy 2-2.sql into your assigned work area.

19. Execute SQL-2

20. Expected rolling sales by month

21. Execute the SQL-3 statement; this includes the Product dimension

22. Expected rolling sales by month and product.

23. Your assignment: Enhance the model so that the report not only shows rolling sales but also sales for each individual month.
A sneak preview is shown of the model that you need to create. It includes 2 logically partitioned branches (Monthly Total and Rolling Total).

24. Create a Calculation view within your assigned work area:
Name: CvCumulativeSalesQuery

25. Add 3 Aggregation nodes and 1 Join node to the model

26. Add CvSales as a data source to the lowest aggregation node.
Note: Select CvSales from the sol.2-2 package.

27. Rename Aggregation_2 > Sales

28. Add the following attribute columns to the output node. Add the Sale measure as an Aggregated Column.

29. Select the Flag column > within Properties > set the Keep Flag > TRUE
Important! The Keep Flag will force this column to be retrieved even though it is not requested. This will in turn force the Sale measure to be calculated on the correct granularity level for monthly sales (instead of summing up the sales for rolling sales due to the helper table data explosion).

30. Select the aggregation node above Sales > Rename to MonthlyTotal.

31. Drag a connection line between the 2 nodes.

32. Proceed to add all the columns of MonthlyTotal to the Output (EXCEPT Flag).
Important: Add the Sale column as a regular column instead of an Aggregated Column. The grouping will remove any duplicate rows created by the data explosion needed for rolling sales (essentially un-exploding the data).

33. Re-arrange the columns according to the output in the image.

34. Rename the third aggregation node (on the right) to RollingTotal

35. As before add the same CvSales view as the data source

36. Add the columns Year, Flag, ProductID and add Sale as an Aggregated Column.

37. Rename the Sale measure to Rolling

38. Re-arrange the columns according to the Output in the image.

39. Add a connection line between MonthlyTotal and Join_1 and also between RollingTotal and Join_1

40. Ensure MonthlyTotal is on the left; otherwise right-click and swap them.

41. Add an (inner) join using the following columns:
Year -> Year
Month -> Flag
ProductID -> ProductID
Add all the Columns from MonthlyTotal

42. Add only the measure Rolling from the RollingTotal node.

43. Final results of the Join node output structure

44. Drag a connection line from the Join node to the final Aggregation node.

45. (Optional) Re-arrange the layout.

46. Select the topmost Aggregation node.

47. Add all the columns to the output; ensure that both Sale and Rolling are added as aggregated columns.

48. Save & Activate!

49. Execute the SQL-4 statement.
Note: Replace # with your assigned workshop number and user ID.

50. Expected monthly sales and rolling sales.

51. Execute the SQL-5 statement.
Notice the additional WHERE clause

52. Notice the Rolling numbers are incorrect when the Order Date is supplied as a filter.

53. Proceed to debug the model.

54. Copy the SQL-5 statement and start the debugger

55. When the debugger window appears delete the SQL. Then paste in the SQL-5 statement and execute.
Hint: Remove the semicolon [;] at the end of the SQL statement

56. After the debugger is instantiated click on the Sales node

57. Within the Node Query tab notice OrderDate is unnecessarily brought into context (due to the WHERE clause) and will therefore adversely alter the level of granularity.
Hint: This can be changed by flagging the OrderDate as Transparent, which will remove the column from the context.

58. Next click on the RollingTotal node

59. Notice ProductID is unnecessarily brought into context. This is a result of the ProductID being used in the concatenated join above.
Hint: This can be improved by setting the join to a Dynamic Join instead.

60. Close the debugger and proceed to change the model.

61. Double click on any of the Join lines.

62. Ensure that Dynamic Join is checked

63. Next > Set the Transparent Filter for the OrderDate column.
Important! The Transparent Filter flag has to be set on each node that references Order Date.

64. Start by selecting the Sales node and set the transparency.

65. Select the OrderDate column > within the Properties set the Transparent Filter to True.

66. Repeat the previous step for the other 3 nodes and ensure that the Order Date transparent filter is set.

67. Optional Step: When building complex models the Show Lineage feature (in the Semantics > Columns) can be used to find column references easily.
Example of OrderDate lineage: shows in which nodes the column is used.

68. Save & Activate!

69. Execute SQL-5 again. The Rolling numbers are correct.

70. Next execute the SQL-6 statement; this includes the Product ID and Name

71. Notice that sales for Apples went down in months 1 and 5

72. Execute SQL-7. Notice the column Brand is included in the query.

73. Notice that sales went down each time that the brand Granny Smith Apples was sold (in January and May) compared to Golden Delicious

74. Finally execute SQL-9. Notice when Granny Smith apples were sold (January and May) sales were down.

EXERCISE 2-3
DYNAMIC TIME ANALYSIS [20 MINUTES]
This exercise demonstrates the recommended modeling approach to use Table functions (instead of
the traditional Script based Calculation Models) when SQL is required. The model calculates the
current quarter sales and the previous year quarter sales based on a given date prompt. A table
function is used as a data source and will utilize time series functions to dynamically generate
dimensional time data at runtime without having to rely on a physical time/date table.
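The time series part can be sketched with the SERIES_GENERATE_DATE built-in, which returns one row per generated period. A sketch of the idea only (the exercise's solution function wraps comparable logic; the quarter boundaries below are hard-coded for the example date, and the end bound of SERIES_GENERATE_DATE is exclusive):

-- One row per day from the start of Q2 2014 up to the key date 2014-05-04
SELECT GENERATED_PERIOD_START AS "CalDay"
FROM SERIES_GENERATE_DATE('INTERVAL 1 DAY', '2014-04-01', '2014-05-05');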

Total steps (44)



1. Expand sol > 2-3 > Open CvDynamicTimeBasedAnalysisQuery

2. Select the Current Sales node.

3. In the details area in the center of the screen select the Series Facts table > Right Click > Data Preview

4. Click the Raw Data tab > Using May 5th 2014 as an example, this date falls within Q2.
Notice: Total sales for Q2 in 2014 were $40 and for Q2 in 2013 were $80.

5. Open 2-3.sql > Highlight the SQL-1 statement > Execute

6. The expected results are shown based on 2014-05-04

7. Close the model and SQL text file.

8. Your assignment: Copy the SAMPLE_ model and the SAMPLE_ table function, including the 2-3.sql file, into your work area.

9. Rename the files by removing the prefix SAMPLE_ from the model and the table function.

10. Open the table function and modify the function name: Uncomment line 2 and then delete line 1. Replace X with your workshop number and XX with your assigned user ID.
Hint: Notice the PERIOD input parameter (C = Current, P = Previous Year)

11. Save & Activate

12. Open 2-3.sql > Execute SQL-2
This will generate a record for each day of the CURRENT quarter of the input date:

-- SQL-2
SELECT * FROM
"DMM360"."dmm360.ws#.##::TfQuarterToDateByYear"
('C','2014-05-04') ;

13. Results are shown. Later in the exercise you will add the code for P (Previous Year).
Notice: The supplied date of May 4th falls within the 2nd quarter; the first day of the quarter is April 1 and the last day is May 4th, for a total of 34 days (or 34 rows returned).

14. Within your assigned work area package open the CvDynamicTimeBasedAnalysis model.

15. Replace the PreviousYear Data Source with your Table Function: Right Click on the SAMPLE_Tf* table function > Replace with Data Source

16. Search for TfQuarterToDate and select the table function from within your assigned work area package.

17. Click Next.

18. Leave the mapping as is.

19. Click Finish. This completes the previous year functionality.

20. Click Save & Activate

21. Proceed to work on the Current Year functionality: Select the Previous Sale node > Right Click > Copy > All Nodes Below

22. Click Paste within the work area, to the right of the Previous Sale join node.

23. Rename the top node (CopyOfPreviousSale) to CurrentSale

24. Rename the bottom node (CopyOfPreviousYear) to CurrentYear

25. Select the CurrentSale Join node.

26. Rename the column PreviousSales to CurrentSales

27. Connect the CurrentSale node to the Union node.
Hint: Click Auto-Layout.

28. Proceed to model the Union node: Click on the Union node > then click on Auto Map by Name

29. Next, select the Aggregation node.

30. Add the CurrentSales column as an Aggregated Column

31. Save and Activate!
Important! Due to a minor bug please close the model and reopen it after activation.

32. Proceed to map the table function parameters: Select the Semantics node

33. In the Parameters/Variables tab click on the Manage Input Parameter Mappings icon.

34. Start by dragging the first IP_DATE parameter from the LEFT and dropping it on the RIGHT of the screen.

35. Repeat the same step with the second IP_DATE parameter; however, drop it precisely on the existing IP_DATE on the right.

36. Create 2 Constants (P & C)

37. Connect the first PERIOD parameter to P (Previous). As before drag the PERIOD parameter from the LEFT to the RIGHT side.

38. Repeat the same step for the C (Current) constant.

39. Save & Activate!

40. Within sol > 2-3 > Open the solution table function TfQuarterToDateByYear and copy the ELSE SQL statement. (This logic is for P, Previous Year.)

41. Add the code to your own table function created earlier.

42. Save & Activate!

43. Execute the SQL-3 statement against the Calculation view:

-- SQL-3
SELECT SUM("PreviousSales"), SUM("CurrentSales")
FROM
"_SYS_BIC"."dmm360.ws#.##/CvDynamicTimeBasedAnalysisQuery"
('PLACEHOLDER' = ('$$IP_DATE$$', '2014-05-04')) ;

44. Result

EXERCISE 2-4
RESTRICTED SALES REPORT [20 MINUTES]
Your assignment is to implement row-level security on a regional sales model and restrict certain
users to USA sales data only. The model is already prepared; however, you need to create SQL
Analytic Privileges with the necessary roles and then debug and solve any authorization issues. In
addition you will learn how to create dependent value help lookup prompts. (A sketch of the role-grant
mechanism used later in this exercise follows below.)
Power Users

USA Restricted Users
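Design-time roles are granted through the repository's grant procedure rather than a plain GRANT statement. The helper procedure used later in this exercise presumably wraps a call of this shape (a sketch, assuming execute rights on the _SYS_REPO procedure; the role name is the one you create in this exercise):

-- Grant an activated design-time role to the reporting user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
  'dmm360.ws#.##::CountryAccessRole', 'DMM360_BI');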

Total steps (53)



1. Expand sol > 2-4

2. Copy the following files and paste them into your assigned work area package:
CvCustomer
CvOrdersQuery
CvProduct
2-4.sql

3. Activate the 3 models.

4. Open CvOrdersQuery > Semantics > View Properties > Notice SQL Analytic Privileges are used.
Note: The Customer and Product models are also using SQL Analytic Privileges.

5. Select the Star Join

6. Within the Star Join replace the Product Dimension with the model that you copied into your work area in the previous step.

7. Select the CvProduct model within your own work area.

8. Click Next; leave the mapping as is.

9. Click Finish.

10. Repeat the previous steps and replace the Customer Dimension as well.

11. Search for and select the Customer Dimension from your own work area.

12. Leave the mapping as is > Click Finish

13. Since the underlying data source changed earlier, you need to redefine the Country restriction. Edit the Restricted Column (RS).

14. Re-select the Country column to restrict on.
Hint: Activation will fail without performing this step.

15. Save & Activate!

16. Create a new SQL Analytic Privilege and restrict to the country USA.

17. Name -> ApCountry
Note: Select SQL Analytic Privilege

18. Within the Secured Models area click Add

19. Add the Customer dimension from your own work area.

20. Add an Associated Attribute Restriction -> Country.

21. Select the Country column

22. Click Add to assign the USA restriction

23. Add a fixed restricted value -> USA

24. Save & Activate!

25. Create a new role within your assigned workspace package:
SAP HANA > Database Development > New Role
Hint: Select your assigned work area package > Right Click > New > Other > then search for Role

26. Name -> CountryAccessRole

27. Remove all the text and replace it with the following sample role text.

28. Replace # with your assigned IDs

role dmm360.ws#.##::CountryAccessRole {
sql object dmm360.ws#.##::CvOrdersQuery :SELECT;
analytic privilege: dmm360.ws#.##:ApCountry.analyticprivilege;
}

29. Save & Activate

30. Within your assigned work area > Open 2-4.sql > Select the DMM360_BI user connection.

31. Execute (SQL-1) (as user DMM360_BI)
Note: Make sure to replace # with your assigned IDs
Notice the authorization error

32. Execute (SQL-2) to review the effective structured privilege assignments for DMM360_BI.
Note: Make sure to replace # with your assigned IDs

33. Notice that the BI user is not yet authorized to read any of the models. We need to assign the previously created role to the user.

34. Execute both statements underneath (SQL-3):
Using your User ID, execute the helper procedure that will assign the previously created role to the DMM360_BI user.

-- SQL-3
CONNECT DMM360_## PASSWORD Welcome16;
CALL "DMM360"."dmm360.db.procs::EXE_24_GrantRoleSelfService"();

35. Test the Analytic Privilege again > Execute (SQL-4) > Connect as user DMM360_BI

-- SQL-4
CONNECT DMM360_BI PASSWORD Welcome16;

36. As user DMM360_BI execute (SQL-5) and query CvOrders.
Notice the authorization error.

37. Execute (SQL-2) again to review the effective structured privilege assignments for DMM360_BI.
Note: Make sure to replace # with your assigned IDs

38. Notice that the BI user is authorized to read the Customer model but has no access to the Product and OrdersQuery models.

39. Create an SQL Analytic Privilege > ApProductAndOrders > Add both CvProduct and CvOrdersQuery.
Hint: These models do not contain the Country column and therefore need their own privilege.

40. Save & Activate!

41. Open CountryAccessRole > Add the previously created SQL Analytic Privilege:

role dmm360.wsX.XX::CountryAccessRole {
sql object dmm360.wsX.XX::CvOrdersQuery :SELECT;
analytic privilege: dmm360.wsX.XX:ApCountry.analyticprivilege;
analytic privilege: dmm360.wsX.XX:ApProductAndOrders.analyticprivilege;
}

42. Save & Activate

43. Execute the effective structured privileges statement (SQL-2) again. Notice that the BI user now has access to all the models.

44. Execute (SQL-5) as the DMM360_BI user.
Note: This user only has access to USA sales data.

45. Optional Exercise (Dependent Prompt Lookups):
The City list prompt should only display cities based on the Country selection.

46. Within sol > 2-4 > open CvCustomerLookUp
Notice the 2 variables (Country and City) used as filters

47. Within sol > 2-4 > open CvOrdersUsingValueHelpQuery

48. Select the Semantics node > Columns

49. Within the Shared area notice the inherited City and Country variables

50. Within the Parameters/Variables area > notice the Value Help model used for each parameter

51. Finally click on the icon to review the Input Parameter Manage Mappings

52. Change the Type drop down list selection > Views for value help for variables

53. Notice that the VAR_Country variable on the right is mapped to the VAR_City/VAR_Country parameter on the left. This will ensure that the Country parameter is used when querying the cities (and vice versa).

Final result (strictly informational): When using tools such as Analysis for Microsoft Excel, only cities matching the Country selection will appear in the City prompt lookup.

EXERCISE 2-5
DILBERT HR ORG CHART [20 MINUTES]
This exercise consists of 3 parts: SQL Analytic Privileges, SQL Hierarchies, and SQL hierarchy
prompts. Your assignment is to create an employee parent-child hierarchy and to ensure that the
Margin is calculated correctly at each level of the hierarchy. In addition you will learn how to
incorporate design-time roles and how to debug authorization errors. (A minimal SQL sketch of a
parent-child table follows below.)
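Before building the hierarchy in the Semantics node, it helps to see what a parent-child table looks like in SQL. A minimal sketch (hypothetical names; each self-join resolves one level, while the generated hierarchy node column handles all levels for you):

-- Managers on the left, their direct reports on the right
SELECT m."EmployeeID" AS "Manager", e."EmployeeID" AS "Employee"
FROM "Employee" m
JOIN "Employee" e ON e."ManagerID" = m."EmployeeID";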

Total steps (59)



1. Expand sol > 2-5

2. Copy both SAMPLE_ models and the 2-5.sql text file and paste them into your assigned work area package.

3. Rename the models by removing the SAMPLE_ prefix.

4. Activate CvEmployee

5. Open CvEmployee and select the Employee projection node.

6. Within the details area > Right Click on Employee > Data Preview

7. Click on the Raw Data tab > The Employee list is shown.
Notice the Valid From and Valid To columns.

8. Review the Expression Filters and the Key Date input parameter.

9. The Expression Editor is shown.

10. Open the Key Date input parameter and notice the default expression

11. Your assignment starts.

12. Select the Semantics node (of CvEmployee) > Hierarchies > create a parent-child hierarchy.

13. Name: OrgHierarchy
Type: Parent-Child
Child: EmployeeID
Parent: ManagerID

14. Enable Not Assigned Members
Hint: Sales Orders that have unassigned Sales Representatives (Employees) will be categorized as _NA_

15. Save & Activate.

16. Close the CvEmployee model.

17. Open CvSalesSQLHierarchyQuery

18. Within the Star Join node replace the SAMPLE_CvEmployee model with your Employee model created in the previous step.

19. Select the model > Right Click > Replace with Data Source

20. Search for CvEmployee and select the Calculation view within your assigned work area.

21. Click Next.

22. Click Finish (leave the mapping as is)

23. Select the Semantics node > View Properties to enable SQL hierarchies.

24. Click Enable Hierarchies for SQL access.

25. Click Yes, Save!

26. Optional: To review the inherited hierarchy > select Hierarchies > within the Shared window double click on OrgHierarchy

27. Notice the Hierarchy Node column that can be used via standard SQL GROUP BY statements.

28. Save & Activate

29. Open 2-5.sql > execute SQL-1
Replace # with your assigned IDs.
Notice: GROUP BY OrgHierarchyNode

30. Results are shown.
Notice Alice's Revenue, Cost, and Margin numbers also include her employees (Loud and Ted).

31. Execute SQL-2
Notice: WHERE OrgHierarchyNode = 'Alice'
Notice: Alice's team's individual contributing numbers

32. Your assignment: Create Hierarchy Prompts.
Within the same model (CvSalesSQLHierarchyQuery) > Select the Semantics node > Parameters/Variables

33. Create a new Variable

34. Name: var_employee
Attribute: EmployeeID
Hierarchy: OrgHierarchy

35. Save, Activate, Preview

36. Expand the hierarchy and select Wally.
Note: Catbert is Unassigned. An order was placed on Jan 1 referencing Catbert as the Sales Representative; however, there is no corresponding record in the Employee table.

37. Results are shown.

38. Your next assignment > SQL Analytic Privileges

39. Select the Semantics node > View Properties > Select SQL Analytic Privileges

40. Within your assigned work area > open the CvEmployee model and set the privileges to SQL as well.

41. Save & Activate both models.
Note: At this point you will not be able to query the model; SQL Analytic Privileges require structured privileges, which we will create next.

42. Create a new Analytic Privilege of type SQL called ApSales

43. Within the Secured Models area click Add > Search for the Sales SQL Hierarchy Query within your own work area

44. Result

45. Click the SQL Editor and manually add the SQL filter expression:
"OrgHierarchyNode" = 'Alice'

46. Save & Activate!

47. Create a new role: EmployeeAccessRole

48. Delete all the text and replace it with the following text (replace # with your assigned User ID):

role dmm360.ws#.##::EmployeeAccessRole {
sql object dmm360.ws#.##::CvSalesSQLHierarchyQuery :SELECT;
analytic privilege: dmm360.ws#.##:ApSales.analyticprivilege;
}

49. Save & Activate!

50. Execute the procedure SQL-3; this will assign the Analytic Privilege and Role to the BI front-end user.

--SQL-3
CALL "DMM360"."dmm360.db.procs::EXE_25_GrantRoleSelfService"();

51. Then log in as the BI user (SQL-4):

--SQL-4
CONNECT DMM360_BI PASSWORD Welcome16;

52. Execute (SQL-5) and query the effective structured privileges view:

--SQL-5
SELECT USER_NAME, STRUCTURED_PRIVILEGE_STATUS,
ROOT_OBJECT_NAME, OBJECT_NAME, EFFECTIVE_FILTER,
STRUCTURED_PRIVILEGE_FILTER
FROM EFFECTIVE_STRUCTURED_PRIVILEGES WHERE
USER_NAME = 'DMM360_BI' AND
ROOT_SCHEMA_NAME = '_SYS_BIC' AND
ROOT_OBJECT_NAME IN (
'dmm360.ws#.##/CvEmployee',
'dmm360.ws#.##/CvSalesSQLHierarchyQuery'
);

Hint: Replace # with your assigned IDs

53. Notice that the Employee model is not authorized.

54. Proceed to create another SQL Analytic Privilege, ApEmployee, referencing the Employee model.
Hint: The OrgHierarchyNode filter column only exists on the Sales model; therefore the Employee secured model requires its own privilege.

55. Add CvEmployee from your workspace as a secured model.

56. Save & Activate!

57. Edit the previously created EmployeeAccessRole and add the analytic privilege:

analytic privilege: dmm360.ws#.##:ApEmployee.analyticprivilege;

58. Query the effective structured privileges view again using SQL-5.
Notice that there are no authorization errors.

59. Execute SQL-6, querying the model.
Note: Only one record was returned. The numbers include Alice's own sales plus her employees' sales (Loud and Wally).

EXERCISE 3-1
HOT VS COLD SALES DATA [20 MINUTES]
In this exercise you will use Smart Data Access to build a sales model that automatically queries hot
data that resides in SAP HANA or the cold data that resides in an external database without having to
expose the technical details of where the data is located to end users. Your assignment is to model
the Union and to implement both explicit and implicit input source pruning.
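The cold data is reached through a virtual table over a remote source. Creating one looks roughly like this (a sketch only; the remote source name HDBCON comes from the exercise, the remaining identifiers are placeholders):

-- Create a proxy (virtual) table over the remote source HDBCON
CREATE VIRTUAL TABLE "DMM360"."ProxyOrders"
  AT "HDBCON"."<NULL>"."DMM360"."Orders";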

Total steps (49)



1. Expand sol > 3-1

2. Copy both models with the SAMPLE_ prefix, including the 3-1.sql file, into your work area.

3. Rename the models by removing the SAMPLE_ prefix.

4. Then open the Sales By Year Comparison Query

5. Within the Cold aggregation node > Right Click on CvRemoteOrders > Open the Remote Orders model

6. Notice that the Remote Orders table is a virtual table called ProxyOrders

7. Open the Systems view > Catalog > DMM360 > Tables

8. Notice the ProxyOrders table is pointing to a remote source called HDBCON

9. Open 3-1.sql within your work area.

10. Execute SQL-1 (query the virtual tables and filter by HDBCON)

11. Note: For simplicity the remote source connection points to the same HANA system.

12. Preview the remote ProxyOrders virtual table. Notice that historical orders are kept in the remote database.

13. Subsequently click on the Local Orders model (within the Sales By Year Comparison model) > Click Open

14. Preview the LocalSales input source. Notice that the local (current year, or HOT) data resides in a regular table within SAP HANA.

15. Start your assignment: Within the Sales By Year Comparison Calculation view > Select the Union node.

16. Create a target column

17. Name > SOURCE
Data Type: VARCHAR (5)

18. Click OK

19. Then go into Manage Mappings to set the Constant Values.
Hint: Right Click on Source > Manage Mappings

20. Add a constant value of IQ for Cold and HANA for Hot.

21. Select the Aggregation node.

22. Add the SOURCE column to the Output.

23. Proceed to create an optional YEAR input parameter > Click on the Semantics node.

24. Select Parameters/Variables > Create an Input Parameter

25. Name > IP_COLD_YEAR
Expression: int(component(now(),1))-1
Data Type: Date
Note: If no input value is supplied, the default expression is used and will calculate the previous year. The parameter (expression) can be overridden by supplying a year.

26. Click on the Cold aggregation node.

27. Select the Year column > Right Click > Apply Filter

28. Select the Equal operator and click on the Apply Filter popup window icon.

29. Select the IP_COLD_YEAR input parameter.

30. Save and Activate!

31. Execute (SQL-2); this will automatically calculate the previous year (2015) and the current year (2016) when NO filters are supplied. Notice the DB column as a location reference.

32. Alternatively you can force the cold filter only on 2014 data. Execute (SQL-3):

PLACEHOLDER = (
IP_COLD_YEAR,
2014
)

This will return both 2014 and 2016.

33. In the previous examples both the hot and cold stores were queried; execute (SQL-4) to force ONLY IQ execution and also filter on 2014:

PLACEHOLDER = (
IP_COLD_YEAR, 2014
)
WHERE
"SOURCE" = 'IQ'

34. Alternatively (SQL-5) will force execution only in the HOT store for the current year (2016):

WHERE
"SOURCE" = 'HANA'

35. Your next assignment: The previous approach required the user to know where the data resides; your assignment is to add implicit pruning. The following pruning configuration table is required to enable automatic pruning.

36. Execute (SQL-6) and query the pruning table. The first 2 columns include the schema name and the model name.
(Hint: Replace # with your assigned workshop number and replace ## with your assigned User ID)

37. The Input column corresponds to the UNION input source name.

38. Within your assigned work area package open the Sales by Year Using Pruning Node Query Calculation view

39. Select the Union node.
Notice within the Union that the Cold and Hot input source names correspond to the values in the pruning configuration table.

40. Notice the constant column (SOURCE) used in explicit pruning scenarios is not needed when using the pruning configuration table.

41. Enable the pruning configuration table: Select Semantics > View Properties

42. Within Advanced > Click on the Pruning Configuration Table lookup icon, then search for and select the pruning table.

43. Save & Activate!

44. Execute (SQL-7):
WHERE YEAR > '2015'

45. Execute (SQL-8):
WHERE YEAR <= '2015'

46. Highlight (SQL-9) > Right Click > Explain Plan > Execute
Hint: The Year filter (YEAR < 2016) should force only a remote execution.

47. Remote scan is executed

48. Highlight (SQL-10) > Right Click > Explain Plan > Execute
Hint: The Year filter (YEAR > 2015) forced local execution.

49. Execute (SQL-11) to review the SQL statements sent to remote databases.
Note: Replace ## with your assigned User ID.
Notice the creation of temporary tables. In certain situations it is more efficient to relocate smaller datasets and perform the join on the remote database.

EXERCISE 3-2
STATE OF THE UNION SPEECH [15 MINUTES]
In this exercise Text Analysis is used against unstructured data. Your assignment is to analyze State
of the Union speeches and to calculate linguistic differences between US presidents.
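In this exercise the index table is generated from CDS definitions, but the underlying mechanism is a full-text index with text analysis enabled. A hand-written equivalent would look roughly like this (the text column name is an assumption):

-- Sketch: full-text index with entity extraction on the transcript text column
CREATE FULLTEXT INDEX "TranscriptsIDX"
  ON "DMM360"."dmm360.db::opt.Transcripts"("Content")
  CONFIGURATION 'EXTRACTION_CORE'
  TEXT ANALYSIS ON;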

Total steps (40)



1. Expand sol > 3-2 > Copy 3-2.sql into your work area.

2. Expand the db > csv folder
Note: If you do not see the db folder, proceed to check out the folder in the Repositories view: Default workspace > dmm360 > db > Right Click > Check Out. Then return to the Navigator view.

3. Open the transcripts_2001.txt file; this unstructured data is uploaded into a table.

4. Expand dmm360 > db > Open the opt.hdbdd CDS file.

5. Notice the textAnalysis CDS definitions. The physical table where the text is stored is called Transcripts and the automatically indexed table is called TranscriptsIDX.

6. Strictly informational! Notice the HTML utility that was used to upload text documents into the database table and BLOB columns.

7. Strictly informational! Notice


that both transcripts are already
uploaded into the database.
http://
lt5059.wdf.sap.corp:8020/
dmm360/db/xs/ui/upload/
User ID: DMM360_##
Password: Welcome16

8. Within your assigned work area


package open 2-3.sql and
execute SQL-1 querying the
transcripts table.

-- SQL-1
SELECT * FROM
"DMM360"."dmm360.db::opt.Transcripts";

9. Preview the index table using SQL-2.

-- SQL-2

Note: In your assignment you will use this table within a Calculation view.
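
The exact SQL-2 text is in the script you copied; a hypothetical reconstruction, assuming only the index table name from the CDS file above:

-- hypothetical form of SQL-2
SELECT "Year", "TA_TOKEN", "TA_TYPE"
FROM "DMM360"."dmm360.db::opt.TranscriptsIDX";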

10. Within your assigned work area create a new Calculation view called CvTranscript.

11. Add the TranscriptsIDX table to the aggregation node.


12. Add the following columns as attributes to the Output:
Year
TA_TOKEN
TA_TYPE

13. Add a second TA_TOKEN column, but add it as an Aggregated Column to the Output.

14. Rename TA_TOKEN_1 to Counter.


15. Rename the other TA_TOKEN to Word.

16. Save & Activate!

17. Preview the model.

18. Add Word as an attribute (Label axis) and add Counter as a measure (Value axis).


19. Display the results using a Tag Cloud.

20. Results.

21. Create a new Calculation view named CvStateOfTheUnionQuery.

22. Add 1 Union node and 2 aggregation nodes.

23. From within your own assigned work area package (wsx.xx), add CvTranscript to each Aggregation node input source.

24. Rename the Aggregation nodes Bush and Obama respectively.


25. Select the aggregation node Bush > Add Year, Word, and TA_TYPE as attributes.

26. Add Counter as a measure (Aggregated Column).

27. Complete the same 2 steps above for the Obama Aggregation node.

28. Filter the aggregation node (Bush) on the Year column -> 2001.

29. Repeat the previous step and filter the Aggregation node (Obama) on the Year column -> 2009.

30. Connect ALL the nodes and click Auto Arrange.


31. Within the Union node map all the columns according to the image.
Note: Map all the attributes together; add the two counters as separate measures.

32. Select the topmost Aggregation node.

33. Add all the attributes to the output node.

34. Add the counters as aggregated columns.

35. Rename the Counter column to Bush.

36. Rename the Counter_1 column to Obama.
37. Save & Activate!

38. Execute the SQL-3 statement to see a list of countries mentioned during the speeches.
Note: Replace the # with your assigned IDs.

39. Execute SQL-4 to see which people were mentioned by which president.

40. Execute the SQL-5 statement to see a list of the most used words.
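
These three statements differ mainly in their TA_TYPE filter. A sketch of the idea, assuming the standard entity-extraction type names (COUNTRY, PERSON) and your activated model path:

SELECT "Word", SUM("Bush") AS "Bush", SUM("Obama") AS "Obama"
FROM "_SYS_BIC"."dmm360.ws#.##/CvStateOfTheUnionQuery"
WHERE "TA_TYPE" = 'COUNTRY'   -- 'PERSON' for SQL-4; drop the filter for SQL-5
GROUP BY "Word"
ORDER BY "Bush" DESC;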


EXERCISE 3-3
UFO SIGHTINGS [45 MINUTES]
In this exercise you will use GEO Spatial capabilities to analyze UFO sightings near US airports and to predict future sightings. The models are half done; your assignment is to calculate the number of airports within a specific radius of each sighting, and you also need to enhance the model to work with both kilometers and miles. In addition you will finish the predictive model to show how many UFOs to expect in future months based on the historical data.

Total steps (106)



1. Expand sol > 3-3 > Copy CvUFOSightingsQuery and 3-3.sql into your work area.

2. Activate the model.

3. In your assigned work area open > CvUFOSightingsQuery > Select the States projection node. Then Right-Click on the States data source table > Data Preview to see each state's GEO spatial metadata.


4. The GEO spatial shape representation can be seen by double-clicking on the XML (Shape) column (e.g. the islands of the state of Hawaii are shown).

5. Close the window.

6. Select the Sightings projection node. Right-Click > Data Preview.

7. Notice the list of all UFO sightings across the US in 2008 and 2009.

8. Select the State Sightings join node.

9. Notice the Join between the Location column of the sighting and the Shape column of the state.


10. Double-click on the Join line. Notice the Join predicate (a sighting location is Covered By a state shape).

11. Select the Aggregation node.

12. Note the Sightings counter.

13. Open the Counter > Notice the exception aggregation using the sighting primary key ID.


14. Within your assigned work area > open 3-3.sql > Execute SQL-1.
The query lists the 20 cities with the most UFO activity - Phoenix, Arizona ranked #1 in 2008.

15. Proceed to drill down into Phoenix by executing SQL-2.
Notice that on April 21st 8 sightings were recorded in Phoenix.
Your assignment: Focus in on this day (20080421) and find out how many airports are in the vicinity of these sightings, including the distance to the Phoenix International Airport.

Before starting with the model we can get an idea how far these sightings are from the Phoenix International Airport by manually executing a query and joining the sighting locations on 20080421 with the Phoenix Airport location. See the next step.

16. Execute SQL-3.
Result: approximately 6000 meters, or 6 kilometers. (4326 is the identifier for WGS84 - World Geodetic System 1984.)
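
A sketch of the kind of distance query SQL-3 runs; the table paths are assumptions, and ST_Distance on SRS 4326 geometries returns meters:

SELECT s."Location".ST_Distance(a."Shape") AS "DISTANCE_M"
FROM "DMM360"."dmm360.db::opt.Sightings" s,
     "DMM360"."dmm360.db::opt.Airports"  a
WHERE s."Date" = '20080421'
  AND a."Name" LIKE '%Phoenix%';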

17. Strictly informational: Instead of using the above SQL, you can also use Spatial Functions within Calculated Columns. Expand sol > 3-3 > Open CvUFOSightingsNearAirportsDistanceQuery > AirportSightings Join node > Calculated Columns.

18. Execute SQL-4 to list the available spatial reference systems. Filter on 4326 to show the linear unit of measure for this spatial reference system.
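
SQL-4 most likely reads the public catalog of spatial reference systems; a sketch, assuming the standard view and column names:

SELECT "SRS_ID", "SRS_NAME", "LINEAR_UNIT_OF_MEASURE"
FROM "PUBLIC"."ST_SPATIAL_REFERENCE_SYSTEMS"
WHERE "SRS_ID" = 4326;   -- linear unit: metre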
19. Click on the Navigator view and expand dmm360 > db > open opt.hdbdd.
Note: If you do not see the db folder, check out the folder in the Repositories view.


20. Notice the spatial reference system 4326 is defined in the CDS table statement.

21. Your assignment: Enhance the model by adding airport GEO location information.

22. First close the CvUFOSightingsQuery model.

23. Then rename the model to CvUFOSightingsNearAirportsQuery.calculationview.

24. Open the model and drag a Join node onto the connection line between the State Sightings and the Aggregation node.

25. Click Yes.

26. Click Auto Layout.

27. Add the cds.Airports GEO spatial table to the Join node.


28. Join the Location and the Shape columns.

29. Add the AirportTx, LocID and Name columns (from Airports) to the output.

30. Within the output, create an input parameter. This will act as a radius filter in order to find out how many airports are located within the vicinity of a sighting.

31. Name: IP_DISTANCE
Type: DECIMAL (12,2)

32. Finish the Join. Double-click on the Join line between the State Sightings table and the Airport table.

33. Select the Within Distance predicate.


34. Instead of hard-coding the Distance value, select the previously created input parameter.

35. This will ensure that you can dynamically query sightings and probe for any close-by airports. Select IP_DISTANCE under Input Parameters.

36. Completed input parameter setting.
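
In SQL terms, the join predicate you just parameterized behaves roughly like the following sketch (table paths assumed as before); ST_WithinDistance returns 1 when the two geometries lie within the given distance:

SELECT s."City", s."Date", a."Name",
       s."Location".ST_Distance(a."Shape") AS "DISTANCE_M"
FROM "DMM360"."dmm360.db::opt.Sightings" s
JOIN "DMM360"."dmm360.db::opt.Airports"  a
  ON s."Location".ST_WithinDistance(a."Shape", 6089) = 1   -- the IP_DISTANCE value
WHERE s."Date" = '20080421';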

37. Next, select the Aggregation node.

38. Add the 3 airport columns to the output.

39. Ensure the column AirportTx is defined as an attribute (if it is a measure, change the type in the Semantics node and then return to this step).

40. Create a new Counter.

41. Name > Airports

42. Select the AirportTx column.


43. Save and Activate.

44. Execute SQL-5. The Phoenix airport is 6088 meters from the sighting, therefore the filter (6089) should return the airport.

45. Your next assignment: work with both miles and kilometers. Expand sol > 3-3 > open UFOSightingsNearAirportsUsingMilesorKilometers.

46. Open the Input Parameters.

47. Review the IP_MILES input parameter.

48. Open the IP_KILOMETERS input parameter. Notice this parameter value is derived from a procedure.

49. Expand dmm360 > sol > 3-3 > Open SpConversionUtil.

50. The scalar procedure converts miles to kilometers.

PROCEDURE "DMM360"."dmm360.sol.33::SpConversionUtil" (
IN miles STRING, OUT kilometers STRING)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER
DEFAULT SCHEMA DMM360 READS SQL DATA AS
BEGIN
kilometers := :miles * 1.60934;
END;
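
A quick way to test the procedure from the SQL console - a sketch using an anonymous SQLScript block (assuming your HANA revision supports DO BEGIN):

DO BEGIN
    DECLARE v_km STRING;
    CALL "DMM360"."dmm360.sol.33::SpConversionUtil"('10', v_km);
    SELECT :v_km AS "KILOMETERS" FROM DUMMY;  -- expected: 16.0934
END;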


51. Next, review the join between the sightings and the airports. Select the AirportSightings Join node.

52. Double-click on the Join line.

53. Notice that the kilometers input parameter is used for the Distance predicate.

54. Finally, click on Input Parameter Managed mappings.

55. Notice that if the IP_MILES parameter is used, the value will first be converted using the procedure. Alternatively, if IP_KILOMETERS is used as an input parameter, the value is directly assigned and will bypass the conversion.

56. Execute SQL-6 using the IP_MILES input parameter. This will call the stored procedure to do the conversion to kilometers.

('PLACEHOLDER' = ('$$IP_MILES$$', '3784'))
WHERE "City" = 'Phoenix' AND
"Year" = '2008' AND
"Date" = '20080421'


57. Results are shown when filtering using miles.

58. Execute SQL-7 using the IP_KILOMETERS input parameter. This will bypass the stored procedure.

('PLACEHOLDER' = ('$$IP_KILOMETERS$$', '6089'))
WHERE "City" = 'Phoenix' AND
"Year" = '2008' AND
"Date" = '20080421'

59. The same results are shown when filtering using kilometers.

60. Your next assignment: predict how many UFOs can be expected in the future.

61. Execute the SQL-8 query.
Note: The results (Month, Year, Sighting count) will be used as the input to the predictive algorithm.

62. Execute SQL-9 to see the control settings for the particular algorithm that we plan to use.
Note: The control settings influence the predictability. They can either be read from a table or passed in as parameters.

63. Within sol > 3-3 > Open the Predictive Flow Diagram; this is where you will model the predictive exponential smoothing algorithm.
Note: The flowgraph is the design-time object; upon activation several runtime objects (stored procedures) are generated.


64. Notice the Input Data, Control Data and Output areas, including the Single Exponential Smoothing algorithm.

65. Select the Input Data.

66. Within the Properties view notice the catalog object points to the Calculation view shown earlier.

67. Select the Control Data.


68. The input control table points to the Catalog Object AFL_UFOControl.

69. Notice the algorithm's input and output parameters.

70. Algorithm Input parameter details.
Note: ID maps to the Month column and RAW_DATA maps to the number of historical sightings for that month.

71. Select Out.

72. Click on Signature.
Note: ID maps to the future Month and the Predicted column is the actual number of predicted UFO sightings for that month in the future.


73. Execute the procedure: in the top-right corner click on the down arrow.

74. Future monthly predictions are shown.

75. Take a sneak peek into the 2 generated stored procedures within the Catalog > DMM360 > Procedures.

76. Pay attention to the FgSingleExSmoothing procedure. We will enhance this procedure in the next steps.

77. Open the FgSingleExSmoothing stored procedure and notice the generated code. Both the input data and the control data are read into a variable and then passed into the SINGLESMOOTHING algorithm for processing.

78. Important! In order to use the predictive flow-graph procedure from within a Graphical Calculation view we will wrap a Table Function around the predictive code.

79. Within the Navigator view > Open the Single Exponential Smoothing table function.


80. Notice that the control data is dynamically generated at runtime instead of being read from a table. In addition, some of the controls are exposed via input parameters, allowing client applications to manipulate the predictive behavior at runtime.

81. Arrays are used to dynamically create the control structure at runtime (instead of reading the control data from a table). A sketch of the pattern follows below.
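
A minimal sketch of that array pattern as it might appear inside the table function body, assuming PAL-style control columns (NAME, INTARGS, DOUBLEARGS, STRINGARGS) and hypothetical input parameter names ip_alpha / ip_delta:

DECLARE v_name VARCHAR(100) ARRAY;
DECLARE v_int  INTEGER ARRAY;
DECLARE v_dbl  DOUBLE ARRAY;
DECLARE v_str  VARCHAR(100) ARRAY;
-- one array slot per control parameter
v_name[1] := 'FORECAST_NUM'; v_int[1] := 12;   v_dbl[1] := NULL;      v_str[1] := NULL;
v_name[2] := 'ALPHA';        v_int[2] := NULL; v_dbl[2] := :ip_alpha; v_str[2] := NULL;
v_name[3] := 'DELTA';        v_int[3] := NULL; v_dbl[3] := :ip_delta; v_str[3] := NULL;
-- UNNEST turns the parallel arrays into a control table at runtime
lt_control = UNNEST(:v_name, :v_int, :v_dbl, :v_str)
             AS ("NAME", "INTARGS", "DOUBLEARGS", "STRINGARGS");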

82. Notice the IP_YEAR filter enhancement; this will ensure that the year filter is applied first.

83. Execute SQL-10 using the following parameters:
Year: 2009
Alpha: 0.06
Delta: 0.01

84. Your assignment: Create a new Calculation view CvUFOSightingsPredictiveQuery that reads the table function.

85. Add an aggregation node and use the table function (TfSingleExSmoothing) as the source.


86. Add a Join node in order to map the current sighting months to future sighting months.

87. Add the CvUFOSightingsPredictiveDS (from sol > 3-3) to the Join node.

88. Connect all the nodes.

89. Select the Aggregation node at the bottom.


90. Add the columns Month and Year as attributes to the Output. Add the column Predicted as an aggregated column to the Output.

91. Select the Join node.

92. Optional: if the tables are swapped, move the Aggregation node to the left.

93. Drag join lines between:
Month -> ID
Year -> Year

94. From the Aggregation node add the Month, Year and Predicted columns to the output.

95. Add only one column, Sightings, from the Sightings data source to the output.


96. Select the topmost Aggregation node.

97. Add Month and Year as attributes to the Output and add Predicted and Sightings as measures to the Output.

98. Click on Semantics > Columns > and define Year & Month as Attributes.

99. Click Save.

100. Work on the input parameters next. Within the Semantics node > Click on Parameter mappings.

101. Select Data Sources from the Type drop-down list > Drag all the input parameters individually from the left side to the right side.

102. Activate!


103. Preview.

104. When prompted enter the following parameter values:
IP_YEAR: 2009
IP_ALPHA: 0.06
IP_DELTA: 0.01

105. Within the preview > Analysis > add the Month, Sightings and Predicted columns.

106. Change the chart > choose the combo bar-line chart.
Notice: The bars are the historic sightings and the lines are the predicted sightings for each future month.


EXERCISE 3-4
FIFO INVENTORY [30 MINUTES]
This exercise demonstrates the classical First-In, First-Out (FIFO) method that is used to calculate the
value of inventory on hand at the end of an accounting period and the cost of goods sold during the
period. This method assumes that inventory purchased or manufactured first is sold first and newer
inventory remains unsold. As seen in the example, on Feb 15th 5 apples were sold, of which 3 were purchased at 10.5 (carried over from the Jan 15th sale). The remaining 2 apples were purchased at 13.25.

Total steps (23)



1. Expand sol > 3-4 > copy 3-4.sql into your assigned work area.


2. Open 3-4.sql > execute (SQL-1) FifoPurchases.
Notice: On January 1, 10 apples were purchased @ 10.5.

3. Execute (SQL-1) FifoSales.
Notice: On January 15th, 7 apples were sold, leaving 3 remaining apples @ 10.5 that will be used to fulfill the order on Feb 15th.

4. Execute (SQL-2).
The FIFO method requires calculations on item level; one solution is to explode the data. As a first step we need to find the maximum quantity purchased.

5. Execute (SQL-3).
Based on the results from the previous step we can create a dynamic temporary (tally) table at runtime using series data functions. A sketch follows below.
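
A sketch of such a tally generator: SERIES_GENERATE_INTEGER(increment, min, max) produces one row per period, so assuming a maximum purchased quantity of 10 you could generate sequence numbers 1..10 like this:

SELECT "GENERATED_PERIOD_START" AS "Seq"
FROM SERIES_GENERATE_INTEGER(1, 1, 11);  -- period starts run 1..10; the upper bound is exclusive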


6. Execute (SQL-4).
The following SQL will explode the data (as a result of the join between Purchases and Tally), e.g. Purchase ID (1) contains a quantity of 10, which results in 10 records.
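
Conceptually the explode join looks like the following sketch; the table path and column names (PurchaseId, Item, Cost, Quantity, Seq) are assumptions, and the tally comes from the previous step:

SELECT p."PurchaseId", p."Item", p."Cost", t."Seq"
FROM "DMM360"."dmm360.db::opt.FifoPurchases" p
JOIN ( SELECT "GENERATED_PERIOD_START" AS "Seq"
       FROM SERIES_GENERATE_INTEGER(1, 1, 11) ) t
  ON t."Seq" <= p."Quantity";  -- a quantity of 10 yields 10 rows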

7. Proceed to review the table functions. Expand sol > 3-4. (Notice the table functions for Purchases and Sales.)

8. Open FifoPurchases and review the equivalent SQLScript.

9. Execute (SQL-5) FifoPurchases.
Note: On January 1, 10 apples were purchased; however, only the first 7 apples were allocated to Sales Order (1). The remaining 3 items will be used in Sales Order (2) - see the next step.


10. Execute (SQL-5) FifoSales. The same approach was used to explode the Sales data.
Note: Sales Order 2 (Sequence 8-10) maps to Sequence 8-10 from the previous step (purchases); therefore the cost of those 3 items is 10.5.

11. Execute (SQL-6).
Note the Join between Purchases and Sales on (Item and Sequence).

12. Your assignment: Create a Calculation Model based on the SQL statement from SQL-6.
Name: CvGrossProfitByItemUsingFifoQuery

13. Add a Join node and add both table functions as data sources.

14. Join on Item Number and Seq.

15. From Purchases add ItemNumber, Cost and Seq to the output. From Sales add Price to the output.

16. Connect the Join node to the Aggregation node > Then select the Aggregation node.


17. Add Item Number as an attribute; add Cost, Seq, and Price as Aggregated Columns.

18. Rename 2 columns:
Seq -> Sold
Price -> Sale

19. Set the Aggregation to Count for the Sold column.

20. Re-order the columns according to the image.

21. Create a new Calculated Column called Profit.
Data Type: DECIMAL (12,2)
Type: Measure
Language: SQL
Syntax:

"Sale" - "Cost"

22. Save & Activate.

23. Execute (SQL-7).


EXERCISE 3-5
NON CUMULATIVE MEASURES [30 MINUTES]
Non-cumulative key figures are calculated based on other key figures and characteristics; values are not stored but are calculated at runtime. Non-cumulative key figures are used in applications where users want to know the daily stock level or account balance.
In this exercise the Daily Balance calculation depends on the previous day's Daily Balance calculation and therefore needs to be carried over to the next day.
[Example tables in the original screenshots: Source Data and End of Day]

The solution consists of 2 steps:
1. Explode the data using a cross join
2. Calculate daily running totals & balances
Total steps (37)

1. Open > sol > 3-5 > CvStockBalanceQuery.

2. Select the Join node.

3. Notice the Cross Join (the absence of a physical join condition) that will explode the data.
Important: Even with moderately sized tables this can quickly produce very large result sets that take up large amounts of memory and can adversely impact performance.


4. Join node > Right-Click > Preview.
Notice: Each day consists of 7 records (total transaction days).

5. Select the Filter > Notice the expression that will de-explode the data.

6. Filter > Right-Click > Preview.
Notice the entries for 20160504, and specifically the 50 Bananas; this amount is carried over from 20160502. In the next step both entries will be summed up (resulting in 90); subtracting the 15 outflows gives the correct daily balance of 75 for that day.

7. Select the Balance node.

8. Notice the Group By (Date & Material) and the Calculated Column Balance (Inflow - Outflow). A SQL sketch of this explode/de-explode/balance pattern follows below.
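
The whole pattern can also be written as plain SQL; a sketch, assuming the NonCumulativeSales table sits in the DMM360 schema and has Date, Material, Inflow and Outflow columns:

SELECT cal."Date", s."Material",
       SUM(s."Inflow" - s."Outflow") AS "Balance"      -- running balance per day
FROM ( SELECT DISTINCT "Date" FROM "DMM360"."NonCumulativeSales" ) cal
CROSS JOIN "DMM360"."NonCumulativeSales" s             -- explode
WHERE s."Date" <= cal."Date"                           -- de-explode: carry history forward
GROUP BY cal."Date", s."Material"
ORDER BY cal."Date", s."Material";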

9. Right-Click Balance > Data Preview.


10. Result: Notice the Daily Balance is correct; however, due to the data explosion the daily Inflow and Outflow granularity was lost. The original values can be obtained by joining back to the original data set.

11. Select the Final Join node.

12. Notice the Join to the original data set with the correct Inflow and Outflow amounts per day.

13. Preview the model; the correct results are shown.
Note: For your next assignment you need to display daily balances for both Apples and Bananas (i.e. 20160502 also needs to display Apples with a carryover balance of 25 from the previous day).

14. Your assignment results: Notice each day has an entry for both Apples and Bananas.

15. Create a new Calculation Model called CvExplodeStock.

16. Add 2 aggregation nodes > add the table NonCumulativeSales to each node.


17. Add Date from Aggregation_1 to the output.

18. Add Material from Aggregation_2 to the output.

19. Add a Join node and connect the 2 aggregation nodes.

20. Add both columns to the output (do not add any join lines).

21. Add another Join node and add the NonCumulativeSales table again. Then drag a connection line between both Join nodes.

22. Click Join_2 and ensure that Join_1 is the LEFT node. (Swap them if necessary.)

23. Add Date and Material from Join_1, and Inflow and Outflow, to the output.

24. Add a Left Outer Join using Date & Material.

25. Connect the Join node and the topmost Aggregation node.

26. Select the topmost Aggregation node > add Date and Material as attributes and add Inflow and Outflow as measures.

27. Save, Activate, Preview.


28. Result: Each day has an entry for both Apples and Bananas.

29. Incorporate the model. Expand sol > 3-5 > Copy CvStockBalanceQuery into your assigned work area.

30. Replace both Data Sources with the model you created in the previous step.

31. Right-Click > Replace Data Source.

32. Select CvExplodeStock in your packages.

33. Leave the mapping as is.

34. Repeat and replace the other input source as well.

35. Save & Activate.

36. Expand sol > 3-5 > Copy 3-5.sql to your assigned work area > Open.

37. Execute (SQL-2).

© 2016 SAP SE or an SAP affiliate company. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate company. SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. Please see http://www.sap.com/corporate-en/legal/copyright/index.epx#trademark for additional trademark information and notices.
