
Oracle Data Integrator

10.1.3 Lesson

Rev 2.2

03/03/2008

Authors: FX Nicolas, Christophe Dupupet, Craig Stewart
Main Contributors/Reviewers: Nick Malfroy, Julien Testut, Matt Dahlman, Richard Soule, Bryan Wise

Oracle Corporation, World Headquarters, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
Worldwide inquiries: Phone +1 650 506 7000, Fax +1 650 506 7200, www.oracle.com

Oracle is a registered trademark of Oracle Corporation. Various product and service names referenced herein may be trademarks of Oracle Corporation. All other product and service names mentioned may be trademarks of their respective owners.
Copyright 2008 Oracle Corporation. All rights reserved.


Oracle Data Integrator Workshop

Introduction

Objectives
After completing this training, you should:
Have a clear understanding of the ODI architecture
Have a clear understanding of the ODI differentiators
Have some experience in developing with ODI
Be ready for your first live projects

Before We Start
Please copy and unzip the VMware image onto your machine

Lessons
General Information
Overview of the product, sales tactics, positioning
Architecture

ODI: The Extensibility Framework


Knowledge Modules CDC

Packaging and enhancing the ETL processes


Workflow management
Metadata Navigator
Web Services
User Functions, Variables and advanced mappings
ODI Procedures, Advanced Workflow

A day in the life of an ETL Developer


Designer: Simple Transformations
Designer: Transformations for heterogeneous sources (databases, flat files)
Designer: Introduction to Metadata and XML
Designer: Data Integrity Control

Administrative tasks in ODI


Installation
Agents configuration

Understanding the Metadata and the Databases Connectivity


Metadata Topology

Additional Features
Data Profiling
Data Quality
Versioning

Methodology
Install the GUI
Create the repositories
Define Users and Profiles
Define the IS architecture: physical and logical view
Reverse-engineering of the metadata: tables, views, synonyms definitions; constraints
Definition of the elementary transformations: which are the targets? which are the sources for each target? define transformation rules and control rules; define the transfer rules
Unitary tests: understand the outcome; debugging; optimize strategies; Knowledge Modules
Define the sequencing: order the interfaces; integration tests; scenarios generation
Defining the scheduling: agents configuration; execution frequency
Packaging / delivery: freeze the version; deliver the scenarios; operations

(Diagram: module used at each step of the methodology — Install, Security, Topology, Designer/Model Def., Designer/Project & Interface, Operator, Project/KM, Project/Pkg, Agents/Scen., Project/Scen., Operator)

Oracle Data Integrator Overview

1
1-1

Objectives
After completing this lesson, you should be able to describe:
The scope of Data Integration for batch and near real-time integration
The difference between ODI ELT and other ETL tools on the market for batch approaches
A general overview of the ODI architecture, and how it combines ELT and SOA in the same product architecture

1-2

Why Data Integration?


NEED
Information How and Where you Want It
Business Intelligence Corporate Performance Management Business Process Management Business Activity Monitoring

Data Integration
Migration, Data Warehousing, Master Data Management, Data Synchronization, Federation, Real Time Messaging

HAVE
Data in Disparate Sources

Legacy

ERP

CRM

Best-of-breed Applications

1-3

Challenges & Emerging Solutions


In Data Integration
CHALLENGE: EMERGING SOLUTION

1. Increasing data volumes; decreasing batch windows: Shift from E-T-L to E-LT
2. Non-integrated integration: Convergence of integration solutions
3. Complexity, manual effort of conventional ETL design: Shift from custom coding to declarative design
4. Lack of knowledge capture: Shift to pattern-driven development
1-4

Oracle Data Integrator Architecture Overview

1-5

Oracle Data Integrator Architecture

Service Interfaces and Developer APIs Design-Time


User Interfaces
Designer Operator Thin Client

Java design-time environment


Runs on any platform Thin client for browsing Metadata
Agent Data Flow Conductor

Runtime

Data Flow Generator Knowledge Module Interpreter

Data Flow Generator Runtime Session Interpreter

Java runtime environment


Runs on any platform Orchestrates the execution of data flows

Knowledge Modules

Data Flow

Metadata repository
Pluggable on many RDBMS Ready for deployment Modular and extensible metadata

Metadata Management
Master Repository Work Repositories Runtime Repositories

1-6

ODI Detailed Architecture


Development
ODI Design-Time Environment User Interfaces Topology/Security Administrators Design-time Metadata/Rules Repositories Designers
ESB Files / XML

Development Servers and Applications Execution Agent Data Flow Conductor Return Codes
CRM Data Warehouse

Code Execution Log

Legacy ERP

Production

Scenarios and Projects Releases

ODI Runtime Environment User Interfaces Topology/Security Administrators Execution Log Operators Runtime Repository Code Execution Log Agent Data Flow Conductor Execution Return Codes

Production Servers and Applications

CRM

Data Warehouse

Legacy

Thin Client Metadata Lineage Data Stewards Metadata Navigator


ESB

ERP Files / XML

1-7

Oracle Data Integrator


Data Movement and Transformation from Multiple Sources to Heterogeneous Targets

BENEFITS: KEY DIFFERENTIATED FEATURES

1. Performance: Heterogeneous E-LT
2. Flexibility: Active Integration Platform
3. Productivity: Declarative Design
4. Hot-Pluggable: Knowledge Modules

1-8


Differentiator: E-LT Architecture


High Performance
Conventional ETL Architecture

Transform in Separate ETL Server


Proprietary Engine
Poor Performance, High Costs
IBM & Informatica's approach

Extract

Transform

Load

Transform in Existing RDBMS


Leverage Resources Efficient High Performance

Next Generation Architecture

E-LT
Extract, Load, Transform (transformation runs in the target RDBMS)

Benefits
Optimal Performance & Scalability Easier to Manage & Lower Cost
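The set-based idea above can be sketched with SQLite standing in for the target RDBMS; table names and data are invented for illustration. In E-LT, extract and load land the data in the engine first, and the transform then runs inside the RDBMS as one statement, instead of streaming every row through a separate ETL server.

```python
import sqlite3

# E-LT sketch: SQLite stands in for the target RDBMS; names/data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL, status TEXT);
    INSERT INTO src_orders VALUES (1, 100.0, 'CLOSED'), (2, 50.0, 'OPEN'),
                                  (3, 75.0, 'CLOSED');
    CREATE TABLE tgt_sales (order_id INTEGER, amount REAL);
""")

# Extract and Load have already landed the data in the engine; the Transform
# runs inside the RDBMS as a single set-based INSERT ... SELECT.
conn.execute("""
    INSERT INTO tgt_sales (order_id, amount)
    SELECT order_id, amount FROM src_orders WHERE status = 'CLOSED'
""")
conn.commit()

result = conn.execute("SELECT COUNT(*), SUM(amount) FROM tgt_sales").fetchone()
print(result)  # (2, 175.0)
```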

1-9


Differentiator: Active Integration


Batch, Event-based, and Service-oriented Integration
Oracle Data Integrator

Evolve from Batch to Near Real-time Warehousing on a Common Platform
Unify the Silos of Data Integration
Data Integrity on the Fly
Services Plug into Oracle SOA Suite

Event Conductor Event-oriented Integration

Service Conductor Service-oriented Integration

Metadata Declarative Design

Data-oriented Integration Data Conductor

Benefits Enables real-time data warehousing and operational data hubs Services plug into Oracle SOA Suite for comprehensive integration

1-10


Differentiator: Declarative Design


Developer Productivity
Conventional ETL Design

Specify ETL Data Flow Graph


The developer must define every step of the complex ETL flow logic
The traditional approach requires specialized ETL skills
And significant development and maintenance efforts

Declarative Set-based Design

Reduces the number of steps
Automatically generates the data flow, whatever the sources and target DB

ODI Declarative Design:
1. Define What: the dataflow you want
2. Define How: built-in templates
Then automatically generate the dataflow

Benefits
Significantly reduced learning curve
Shorter implementation times
Streamlined access for non-IT professionals
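A toy sketch of the idea (not ODI internals): the developer states only what is wanted, and a reusable template derives the set-based SQL. All names here are invented.

```python
# Declarative design sketch: the "What" is a plain declaration -- target,
# sources, join, filter, mappings -- and the "How" is a reusable template
# that generates the data flow. All names are invented for illustration.
spec = {
    "target": "TRG_CUSTOMER",
    "sources": ["SRC_CUSTOMER C", "SRC_CITY T"],
    "join": "C.CITY_ID = T.CITY_ID",
    "filter": "C.AGE > 21",
    "mappings": {"NAME": "C.NAME", "CITY": "T.CITY_NAME"},
}

def generate_insert_select(s):
    # Only the declaration (s) changes between interfaces; the template is fixed.
    cols = ", ".join(s["mappings"])
    exprs = ", ".join(s["mappings"].values())
    return (f"INSERT INTO {s['target']} ({cols}) "
            f"SELECT {exprs} FROM {', '.join(s['sources'])} "
            f"WHERE {s['join']} AND {s['filter']}")

sql = generate_insert_select(spec)
print(sql)
```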

1-11


Differentiator: Knowledge Modules


Hot-Pluggable: Modular, Flexible, Extensible
Reverse: Engineer Metadata
Journalize: Read from CDC Source
Load: From Sources to Staging
Check: Constraints before Load
Integrate: Transform and Move to Targets
Service: Expose Data and Transformation Services

Pluggable Knowledge Modules Architecture
(Diagram: sources are journalized via CDC, loaded into staging tables, checked against error tables, integrated into target tables; metadata is reverse-engineered; data and transformations are exposed as Web Services)

Sample out-of-the-box Knowledge Modules


SAP/R3, Siebel, Log Miner, SQL Server Triggers, Oracle DBLink, JMS Queues, Check MS Excel, TPump/Multiload, Oracle Merge, Siebel EIM Schema, Oracle Web Services, DB2 Journals, DB2 Exp/Imp, Oracle SQL*Loader, Check Sybase, Type II SCD, DB2 Web Services

Benefits Tailor to existing best practices Ease administration work Reduce cost of ownership

1-12

Oracle Data Integrator General Overview

1-13

Overview: 6 steps to Production


1. Retrieve/Enrich metadata
2. Design transformations
3. Orchestrate data flows
Development
Development Servers and Applications

4. Generate/Deploy data flows
5. Monitor executions
6. Analyze impact / data lineage


Production
Production Servers and Applications

CRM

Data Warehouse

CRM

Data Warehouse

Legacy

Legacy

ERP ESB Files / XML ESB Files / XML

ERP

ODI Design-Time Environment


User Interfaces Administrators Designers Design-time Design-time Repositories Repositories Agent Data Flow Conductor Runtime Repository

ODI Runtime Environment


Agent Data Flow Conductor User Interfaces Operator Metadata Navigator

1-14

Extended Capabilities

1-15

Extended Capabilities
Master Data Management enabled
Common Format Designer
Automated generation of canonical format and transformations
Built-in Data Integrity

Real-time enabled
Changed Data Capture Message Oriented Integration (JMS)

SOA enabled
Generation of Data Services Generation of Transformation Services

Extensibility
Knowledge Modules Framework Scripting Languages Open Tools

1-16

Use Cases

1-17

E-LT for Data Warehouse


Create Data Warehouse for Business Intelligence Populate Warehouse with High Performance ODI
Heterogeneous sources and targets Incremental load Slowly changing dimensions Data integrity and consistency Changed data capture Data lineage

Load Transform Capture Changes

Incremental Update Data Integrity

Aggregate Export

Cube

Operational

Analytics


Data Warehouse

Cube

Cube

Metadata

1-18

SOA Initiative
Establish Messaging Architecture for Integration Incorporate Efficient Bulk Data Processing with ODI

Generate Data Services Expose Transformation Services

Deploy and reuse Services

Services

Business Processes

Data Access

Transformation

Invoke external services for data integration Deploy data services Deploy transformation services Integrate data and transformation services in your SOA infrastructure

Operational

Others

Metadata

1-19

Master Data Management


Create Single View of the Truth Synchronize Data with ODI
Use in conjunction with packaged MDM solution Use as infrastructure for designing your own hub Create declarative data flows Capture changes (CDC) Reconcile and cleanse the data Publish and share master data Extend metadata definitions

Change Data Capture Master Data Load

Canonical Format Design Cleansing and Reconciliation

Master Data Publishing

CDC

Master Data

CDC

Metadata

1-20


Migration
Upgrade Applications or Migrate to a New Schema
Move Bulk Data Once and Keep in Sync with ODI

Initial bulk load CDC for synchronization

Transformation to new application format

CDC for loopback synchronization

CDC


CDC

Bulk-load historical data to new application Transform source format to target Synchronize new and old applications during overlap time Capture changes in a bidirectional way (CDC)

Old Applications
Other Sources

New Application

Metadata

1-21

ODI Enhances Oracle BI


Populate Warehouse with High Performance ODI
Oracle BI Suite EE
Answers Interactive Dashboards Publisher Delivers

Oracle Business Intelligence Suite EE:


Simplified Business Model View Advanced Calculation & Integration Engine Intelligent Request Generation Optimized Data Access

Oracle BI Presentation Server Oracle BI Server

Oracle BI Enterprise Data Warehouse

Bulk E-LT Oracle Data Integrator


E-LT Agent E-LT Metadata

Oracle Data Integrator:


Populate Enterprise Data Warehouse Optimized Performance for Load and Transform Extensible Pre-packaged E-LT Content
Siebel CRM

SAP/R3

PeopleSoft

Oracle EBS

1-22


ODI Enhances Oracle SOA Suite


Add Bulk Data Transformation to BPEL Process
Oracle SOA Suite
BPEL Process Manager
Business Activity Monitoring Web Services Manager Descriptive Rules Engine Enterprise Service Bus

Oracle SOA Suite:


BPEL Process Manager for Business Process Orchestration

Oracle Data Integrator


E-LT Agent E-LT Metadata

Oracle Data Integrator:


Efficient Bulk Data Processing as Part of Business Process Interact via Data Services and Transformation Services

Bulk Data Processing

1-23

ODI Enhances Oracle SOA Suite


Populate BAM Active Data Cache Efficiently
Oracle SOA Suite
Business Activity Monitoring
Event Monitoring Web Applications Event Engine Report Cache Descriptive Rules Engine BPEL Process Manager Web Services Manager

Oracle SOA Suite:


Business Activity Monitoring for Real-time Insight

Active Data Cache

Enterprise Service Bus

Oracle Data Integrator:


Oracle Data Integrator
Bulk and Real-Time Data Processing Agent Metadata

High Performance Loading of BAM's Active Data Cache; Pre-built and Integrated

Data Warehouse SAP/R3

CDC
PeopleSoft

Message Queues

1-24


Links and References


IAS (Internal):
http://ias.us.oracle.com/portal/page?_pageid=33,1704614&_dad=portal&_schema=PORTAL

OTN (external):
http://otn.oracle.com/goto/odi

Product Management Support:


ORACLEDI-PM_US@oracle.com

Field support:
ORACLEDI-COMMUNITY_WW@oracle.com

Forum:
http://forums.oracle.com/forums/forum.jspa?forumID=374&start=0

KMs:
http://odi.fr.oracle.com

Product Management Wiki:


http://aseng-wiki.us.oracle.com/asengwiki/display/ASPMODI/Oracle+Data+Integrator+Product+Management

1-25

Lesson summary

Data Integration Challenges; Market Positioning of ODI

Key Differentiators

1-26


1-27


Oracle Data Integrator Architecture

2
2-1

Objectives
After completing this lesson, you should:
Know the different components of the ODI architecture Understand the structure of the Repositories

2-2

Components

2-3

Graphical Modules

Designer: Reverse-engineer, develop projects, release scenarios

Operator: Operate production, monitor sessions

Topology Manager: Define the infrastructure of the IS

Security Manager: Manage user privileges

All four are Java modules that run on any platform; they connect to the Repository, hosted on any ISO-92 RDBMS.

Repository

2-4

Run-Time Components
Designer Reverse-Engineer Develop Projects Release Scenarios Java - Any Platform Operator Operate production Monitor sessions
Monitor sessions View Reports

Submit Jobs

Repository

Any ISO-92 RDBMS Scheduler Agent Handles schedules Orchestrate sessions Java - Any Platform

Read sessions Write reports

Lightweight Distributed Architecture

Return Code

Execute Jobs

Information System

2-5

Metadata Navigator
Any Web Browser Browse metadata lineage Operate production

Repository

Any ISO-92 RDBMS Scheduler Agent Handles schedules Orchestrate sessions Java - Any Platform Metadata Navigator Web access to the repository J2EE Application Server

Submit Executions

Return Code

Execute Jobs

Information System

2-6

SOA
Designer Generate and deploy Web Services

Repository

Any ISO-92 RDBMS Scheduler Agent Handles schedules Orchestrate sessions Java - Any Platform Tomcat / OC4J Web Services presentation J2EE Application Server

Exposes Scenarios for Executions

Return Code

Execute Jobs

Information System

Exposes Data and Changed Data

2-7

Components: a Global View


Designer Reverse-Engineer Develop Projects Release Scenarios Java - Any Platform Operator Operate production Monitor sessions Topology Manager Define the IS infrastructure Security Manager Manage user privileges Any Web Browser Browse metadata lineage Operate production

Repository

Any ISO-92 RDBMS Scheduler Agent Handles schedules Orchestrate sessions Java - Any Platform Information System Repository Access HTTP Connection Execution Query Metadata Navigator Web access to the repository J2EE Application Server

2-8

ODI Repositories

2-9

Master and Work Repositories


Security Topology Versioning Master Repository

Models Projects Execution Work Repository (Development) Execution Execution Repository (Production)

Two types of Repositories: Master and Work. Work Repositories are always attached to a Master Repository.

2 - 10

Example of a Repository Set-Up


Security Topology Versioning
Create and archive versions of models, projects and scenarios Import released and tested versions of scenarios for production

Master Repository

Import released versions of models, projects and scenarios for testing

Models Projects Execution Work Repository (Development) Models Projects Execution Work Repository (Test & QA)

Execution Execution Repository (Production)

Development Test Production Cycle 2 - 11

Lesson summary

Structure of the Repository

Components of the Architecture

2 - 12

2 - 13

Oracle Data Integrator First Project Simple Transformations: One source, one target

3
3-1

Objectives
After completing this lesson, you will know how to:
Create a first, basic interface Create a filter Select a Knowledge Module and set the options Understand the generated code in the Operator Interface

3-2

Anatomy of ODI Transformations

3-3

Quick Overview of Designer

Toolbar

Workspace Object Tree

Selection Panel

Metadata

Project

3-4

Terminology
ETL/ELT projects are designed in the Designer tool. Transformations in ODI are defined in objects called Interfaces. Interfaces are stored in Projects. Interfaces are sequenced in a Package that will ultimately be compiled into a Scenario for production execution.

3-5

Interface
An Interface will define:
Where the data are sent to (the Target)
Where the data are coming from (the Sources)
How the data are transformed from the Source format to the target format (the Mappings)
How the data are physically transferred from the sources to the target (the data Flow)

Source and target are defined using Metadata imported from the databases and other systems Mappings are expressed in SQL Flows are defined in Templates called Knowledge Modules (KMs)

3-6

Creating, Naming a New Interface

Interfaces are created in Projects. To create any object in ODI, right-click the parent node and select Insert xyz. This is true for interfaces as well: right-click the project's Interfaces node and select Insert Interface.

3-7

Interfaces: The Diagram

3-8

Selection of Sources and Target


Drag and drop the Metadata from the tree into the interface to make these sources or targets
Source Tables Target Table (single target)

Metadata

3-9

Automatic Mappings

Automatic Mapping creates mappings by matching column names automatically. ODI will prompt you before doing so: you have the option to disable this feature.

3-10

Mappings in the Interface


Target Columns (click here to open the mapping field)

Mapping expressions (read only)

Type or edit your mapping expressions here Expression Editor button

3-11

Using the Expression Editor


1. Click the expression editor button ( ) in the mapping window 2. Build your SQL expressions from the SQL help at the bottom, and from the Columns at the left

3-12

Note
An interface only populates a single target datastore. To populate several targets, you need several interfaces.
3-13

Valid Mapping Types


The following types of clauses may be used in the mappings:

Value: String values should be enclosed in single quotes ('SQL', '5'), but numeric values are not (10.3).
Source Column: Drag and drop the column or use the expression editor. The column is prefixed by the datastore's alias, e.g. SRC_SALES.PROD_ID.
DBMS Function: Use the expression editor for the list of supported functions and operators.
DBMS Aggregate: MAX(), MIN(), etc. ODI automatically generates the GROUP BY clause.
Combination: Any combination of clauses is allowed: SRC_SALES_PERSON.FIRST_NAME || ' ' || UCASE(SRC_SALES_PERSON.LAST_NAME)
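These mapping types boil down to SQL expressions in the generated SELECT. A sketch (not actual ODI output) using SQLite, with invented table names and data; note the GROUP BY that ODI would add automatically once an aggregate mapping is present:

```python
import sqlite3

# Sketch of the SELECT the mapping types generate; names/data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SRC_SALES (PROD_ID INTEGER, AMOUNT REAL);
    INSERT INTO SRC_SALES VALUES (1, 10.0), (1, 20.0), (2, 5.0);
""")
rows = conn.execute("""
    SELECT PROD_ID,          -- Source Column mapping (alias-prefixed in ODI)
           'USD',            -- Value mapping (string in single quotes)
           SUM(AMOUNT)       -- DBMS Aggregate mapping
    FROM SRC_SALES
    GROUP BY PROD_ID         -- generated automatically by ODI
    ORDER BY PROD_ID
""").fetchall()
print(rows)  # [(1, 'USD', 30.0), (2, 'USD', 5.0)]
```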

3-14

Filtering Data
Drag and drop a column on the background area Then type the filter expression

Check expression. SQL filter expression Execution location Expression editor Save expression
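As an illustration with invented names, the filter expression typed in the diagram ends up as a WHERE clause in the code generated for the interface:

```python
import sqlite3

# Filter sketch: the expression as typed in the diagram becomes a WHERE clause.
# Table names and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SRC_ORDERS (ORDER_ID INTEGER, STATUS TEXT);
    INSERT INTO SRC_ORDERS VALUES (1, 'CLOSED'), (2, 'OPEN'), (3, 'CLOSED');
""")
filter_expr = "SRC_ORDERS.STATUS = 'CLOSED'"  # the filter expression as typed
rows = conn.execute(
    "SELECT ORDER_ID FROM SRC_ORDERS WHERE " + filter_expr
).fetchall()
print(rows)  # [(1,), (3,)]
```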

3-15

Saving the Interface

Click the Apply button to save the interface. You can press the OK button to save and close the interface. The Cancel button closes the interface without saving it. Interfaces are saved in the Work Repository.

3-16

Note
An interface may have more than one source. For this lesson, we will only use one source.

3-17

Interfaces: The Flow

3-18

Graphical Representation of the Flow


Source and target systems are graphically represented in the Flow tab. This is where KMs are chosen and KM options are set.

3-19

KM and KM Options
Click on the caption to display Loading KM choices and options Click on the caption to display the Integration KM choices and options

Select the appropriate KM

Set the option values as needed

3-20


Important Note
Important! Make sure that the appropriate Knowledge Modules have been imported into the project!

3-21

Interfaces: Execution

3-22


Requirements
To run an interface, you need at least the following:
A target table An Integration Knowledge Module (selected in the Flow tab) A Loading Knowledge Module if there is a remote source.

If you have all the prerequisites, you are ready to execute the interface.

3-23

Running an Interface
Simply click the Execute button

3-24


Follow-up of the Execution: Logs and Generated Code

3-25

Code Generation
When we ask ODI to Execute the transformations, ODI will generate the necessary code for the execution (usually SQL code) The code is stored in the repository The execution details are available in the Operator Interface:
Statistics about the jobs (duration, number of records processed, inserted, updated, deleted) Actual code that was generated and executed by the database Error codes and error messages returned by the databases if any
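A minimal sketch of that bookkeeping (the log structure is invented for illustration): execute a generated statement and keep the details Operator displays, namely the code itself, its duration, and the number of rows processed.

```python
import sqlite3
import time

# Execution-log sketch; the log dictionary is an invented stand-in for the
# repository tables where ODI stores session details.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TGT (ID INTEGER)")

def run_step(sql):
    # Run one generated statement and record what Operator would show.
    start = time.time()
    cur = conn.execute(sql)
    conn.commit()
    return {"code": sql, "duration": time.time() - start, "rows": cur.rowcount}

log = run_step("INSERT INTO TGT VALUES (1), (2), (3)")
print(log["rows"])  # 3
```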

3-26


The Operator Interface


Start the operator interface from the Windows menu or from the ODI toolbar

3-27

Refresh the Logs Display


By default, ODI will not refresh the logs. There are two ways to refresh the logs:
Manual refresh: click on this icon in the toolbar: Automatic refresh: Set the refresh rate (in seconds) in the toolbar and click on this icon in the toolbar:

3-28


Multiple Levels of Details


Job level details

Specific step in the job

Actual code sent to the systems (SQL or other)

3-29

Errors Reporting

The red icon in the tree indicates the steps that failed Error Codes and Error Messages are reported at all levels

3-30


Information Available for each Level


Time Information Statistical Information Generated Code

3-31

Understanding the Operator Icons


Running
Success
Failure
Warning
Waiting to be executed
Queued by the agent

3-32


Course Summary
Create Interfaces and define transformations (mappings). Understand Data Flows, select KMs and set KM options.

Execute an Interface

Understand how to follow up on the execution

3-33

3-34


Oracle Data Integrator Transformations: Adding More Complexity

4
4-1

Objectives
After completing this lesson, you will:
Understand how to design an interface with multiple sources. Know how to define relations between the sources using joins. Better understand an interface's flow. Be able to customize the default flow of an interface. Be able to appropriately choose a Staging Area.

4-2

Adding More than One Source

4-3

Multiple Sources

You can add more than one source datastore to an interface. These datastores must be linked using joins. Two ways to create joins:
References in the models automatically become joins in the diagram. Joins must be manually defined in the diagram for isolated datastores.

4-4

Note

Important! All datastores must be directly or indirectly joined.


4-5

Manually Creating a Join

1. Drag and drop a column from one datastore onto a column in another datastore.
A join linking the two datastores appears in the diagram. In the join code box, an expression joining the two columns also appears.
2. Modify the join expression to create the required relation.
You can use the expression editor.
3. Check the expression's syntax if possible.
4. Test the join if possible.

4-6

Setting up a Join
Joins can be defined across technologies (here a database table and a flat file) The number of joins per interface is not limited

SQL join expression (technology dependent) Execution location

Validate expression

Expression editor Save expression Join order (ISO-92 Syntax) Use ISO-92 syntax Automatically calculate order

Join type Inner/Outer, Left/Right.

4-7

Types of Joins
The following types of joins exist:

Cross Join: Cartesian product. Every combination of any Customer with any Order, without restriction.
Inner Join: Only records where a customer and an order are linked.
Left Outer Join: All the customers combined with any linked orders, or blanks if none.
Right Outer Join: All the orders combined with any linked customer, or blanks if none.
Full Outer Join: All customers and all orders.
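The join semantics above can be checked on a toy CUSTOMER/ORDERS pair (invented data; one customer has no orders):

```python
import sqlite3

# Join-type demo: 2 customers, 2 orders (both for customer 1, none for 2).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CUSTOMER (CUST_ID INTEGER, NAME TEXT);
    CREATE TABLE ORDERS (ORDER_ID INTEGER, CUST_ID INTEGER);
    INSERT INTO CUSTOMER VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO ORDERS VALUES (10, 1), (11, 1);
""")
# Cross join: every combination, 2 x 2 rows.
cross = conn.execute("SELECT COUNT(*) FROM CUSTOMER, ORDERS").fetchone()[0]
# Inner join: only linked customer/order pairs.
inner = conn.execute("""
    SELECT COUNT(*) FROM CUSTOMER C JOIN ORDERS O ON C.CUST_ID = O.CUST_ID
""").fetchone()[0]
# Left outer join: all customers, blanks for Bob who has no orders.
left = conn.execute("""
    SELECT COUNT(*) FROM CUSTOMER C
    LEFT OUTER JOIN ORDERS O ON C.CUST_ID = O.CUST_ID
""").fetchone()[0]
print(cross, inner, left)  # 4 2 3
```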

4-8

Advanced Considerations on Filters, Joins, Mappings

4-9

Options for Filters, Joins and Mappings

Active Mapping: When unchecked, the filter, join or mapping is disabled for this interface.

Enable mapping for update and/or insert: Allows a mapping to apply only to updates or only to inserts. By default, both insert and update are enabled.

Choose the update key by selecting the Key checkbox.

Change the execution location of the filter, join or mapping.

4-10

Setting Options for Filters, Joins and Mappings


Activate/Deactivate For mappings, filters or joins Execution Location For mappings, filters or joins Insert/Update For mappings Part of the Update Key For target columns (mappings)

Active Mapping When unchecked, the filter, join or mapping is disabled for this interface Enable mapping for update and/or insert Allows mappings to only apply to updates or inserts. By default, both insert and update are enabled Choose the update key by selecting the Key checkbox Change the execution location of the filter, join or mapping.

4-11

Note Update Keys for Flow Control


To perform updates or use Flow Control, you must define an update key for the interface.

4-12

What is an Update Key?

An update key:
is a set of columns capable of uniquely identifying one row in the target datastore
is used for performing updates and flow control
can be one of the primary/unique keys defined for the datastore, or defined specially for the interface
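A sketch of how the update key drives an incremental update, using plain SQL on invented tables; an actual IKM generates equivalent technology-specific code (Oracle MERGE, for example). Rows whose key already exists in the target are updated, the rest are inserted:

```python
import sqlite3

# Incremental-update sketch: SALES_REP is the update key; FLOW stands in
# for the transformed flow data. All names and data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TGT_SALES (SALES_REP INTEGER PRIMARY KEY, SALES REAL);
    INSERT INTO TGT_SALES VALUES (1, 100.0);
    CREATE TABLE FLOW (SALES_REP INTEGER, SALES REAL);
    INSERT INTO FLOW VALUES (1, 150.0), (2, 40.0);
""")
# Update target rows matched on the update key (SALES_REP) ...
conn.execute("""
    UPDATE TGT_SALES SET SALES =
        (SELECT SALES FROM FLOW WHERE FLOW.SALES_REP = TGT_SALES.SALES_REP)
    WHERE SALES_REP IN (SELECT SALES_REP FROM FLOW)
""")
# ... then insert flow rows whose key is not in the target yet.
conn.execute("""
    INSERT INTO TGT_SALES
    SELECT SALES_REP, SALES FROM FLOW
    WHERE SALES_REP NOT IN (SELECT SALES_REP FROM TGT_SALES)
""")
conn.commit()
rows = conn.execute("SELECT * FROM TGT_SALES ORDER BY SALES_REP").fetchall()
print(rows)  # [(1, 150.0), (2, 40.0)]
```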

4-13

How to Define the Update Key


1. Go to the Diagram tab of the interface.
2. Select the Target Datastore.
3. Select the Update Key in the properties panel.

To define a new key in the interface only:
1. Choose <Undefined> for the update key.
2. Select one target column to make part of the update key.
3. Check the Key checkbox in the properties panel.
4. Repeat for each column in the update key.

To define a new key for the table that could be used in other interfaces:
1. Go back to the Model.
2. Expand the table.
3. Right-click on Constraints and add a new key (more on this in a later chapter).

4-14

How to Change the Execution Location


For mappings, filters and joins, you can choose where the operation will take place: the source database, the staging area, or the target database (mappings only, and only literals and database functions).
1. Go to the interface's Diagram tab.
2. Select the filter, join or mapping to edit.
3. Select an execution location from the properties panel.
Not every execution location is always possible. The object must be set to Active first.

4-15

Why Change the Execution Location?

You may need to change the execution location if:

The technology at the current location does not have the required features: files, JMS, etc. do not support transformations, or a required function is not available.

The current location is not available for processing: the machine can't handle any more demand.

ODI does not allow this location: it is not possible to execute transformations on the target, for example.

4-16

Note Moving the Staging Area

Take care when changing the execution location or moving the staging area. You should double-check the transformation syntax.
4-17

Data Flow Definition

4-18

What is the Flow?

Flow The path taken by data from the sources to the target in an ODI interface. The flow determines where and how data will be extracted, transformed, then integrated into the target.

4-19

Note

Understanding the flow will avoid many problems at runtime. Mastering this concept will help you to improve performance.
4-20


What Defines the Flow?

Three factors:

Where the staging area is located: on the target, on a source, or on a third server.

How mappings, filters and joins are set up: their execution location (source, target or staging area), and whether the transformations are active.

The choice of Knowledge Modules: LKM (Loading Knowledge Module) and IKM (Integration Knowledge Module).

4-21

A Data Integration Scenario


Filter: ORDERS.STATUS = 'CLOSED'

Source Sybase
ORDERS

Target Oracle

Mappings: SALES = SUM(LINES.AMOUNT) + CORRECTION.VALUE; SALES_REP = ORDERS.SALES_REP_ID

LINES

SALES

CORRECTIONS File Join - ORDERS.ORDER_ID = LINES.ORDER_ID

4-22


The Basic Process


Sequence of operations with or without an integration tool:

Source: Sybase (ORDERS, LINES) and the CORRECTIONS file; Target: Oracle (SALES)

1. Extract/Join/Transform: ORDERS and LINES into the work table C$_0
2. Extract/Transform: the CORRECTIONS file into the work table C$_1
3. Join/Transform: C$_0 and C$_1 into the integration table I$_SALES
Then Transform & Integrate: I$_SALES into the target table SALES
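The staged flow above can be replayed end to end in a single SQLite database standing in for the staging area (invented data): sources are extracted into C$ work tables, joined and transformed into the I$ table, then integrated into the target.

```python
import sqlite3

# Staged-flow sketch: one SQLite database plays the staging area; the C$/I$
# naming follows the slides, the data is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ORDERS (ORDER_ID INTEGER, SALES_REP_ID INTEGER, STATUS TEXT);
    CREATE TABLE LINES (ORDER_ID INTEGER, AMOUNT REAL);
    CREATE TABLE CORRECTIONS (ORDER_ID INTEGER, VALUE REAL);
    INSERT INTO ORDERS VALUES (1, 7, 'CLOSED'), (2, 7, 'OPEN');
    INSERT INTO LINES VALUES (1, 100.0), (1, 50.0), (2, 30.0);
    INSERT INTO CORRECTIONS VALUES (1, 5.0);

    -- Extract/Join/Transform into the C$ work tables (the LKM's job)
    CREATE TABLE "C$_0" AS
        SELECT O.ORDER_ID, O.SALES_REP_ID, SUM(L.AMOUNT) AS AMOUNT
        FROM ORDERS O JOIN LINES L ON O.ORDER_ID = L.ORDER_ID
        WHERE O.STATUS = 'CLOSED'
        GROUP BY O.ORDER_ID, O.SALES_REP_ID;
    CREATE TABLE "C$_1" AS SELECT ORDER_ID, VALUE FROM CORRECTIONS;

    -- Join/Transform into the I$ integration table
    CREATE TABLE "I$_SALES" AS
        SELECT C0.SALES_REP_ID AS SALES_REP,
               C0.AMOUNT + COALESCE(C1.VALUE, 0) AS SALES
        FROM "C$_0" C0 LEFT JOIN "C$_1" C1 ON C0.ORDER_ID = C1.ORDER_ID;

    -- Transform & Integrate into the target (the IKM's job)
    CREATE TABLE SALES AS SELECT SALES_REP, SALES FROM "I$_SALES";
""")
rows = conn.execute("SELECT SALES_REP, SALES FROM SALES").fetchall()
print(rows)  # [(7, 155.0)]
```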

4-23

What Is the Staging Area?

Staging Area A separate, dedicated area in an RDBMS where ODI creates its temporary objects and executes some of your transformation rules. By default, ODI sets the staging area on the target data server.

4-24


Case Study: Placing the Staging Area

The Staging Area may be located:


On the target database (default). On a third RDBMS database or the Sunopsis Memory Engine. On the source database.

The Staging Area cannot be placed on non relational systems (Flat files, ESBs, etc.)

4-25

Note Staging Area Must Be an RDBMS


Only schemas located on RDBMS technologies can act as the staging area. Files, MOM, LDAP and OLAP bases cannot. When the target of the interface is a non-RDBMS technology, the staging area must be moved to another schema.
4-26


How to change the Staging Area

1. Go to the Definition tab of your interface.
2. To choose the Staging Area, check the Staging Area Different From Target option, then select the logical schema that will be used as the Staging Area.
3. To leave the Staging Area on the target, uncheck the Staging Area Different From Target option.
4. Go to the Flow tab. You can now see the new flow.

4-27

Case #1:Staging Area on Target

Target (Oracle) Source (Sybase)


ORDERS

Staging Area
Transform & Integrate

11
LINES Extract/Join/Transform

C$_0

55 33
I$_SALES

SALES

Join/Transform

CORRECTIONS File

22
Extract/Transform

C$_1

4-28


Case #1 in ODI
Staging Area in the Target

Staging Area + Target

Source Sets

4-29

Case #2: Staging on Middle Tier

DB2 UDB, Sunopsis Engine, etc. Source (Sybase) Staging Area


ORDERS Transform & Integrate

Target (Oracle)
SALES

11
LINES Extract/Join/Transform

C$_0

55 33
I$_SALES

Join/Transform

CORRECTIONS File

22
Extract/Transform

C$_1

4-30


Case #2 in ODI
Staging Area is the Sunopsis Memory Engine

Target

Source Sets

Staging Area

4-31

Case #3: Staging on Source

Source (Sybase)
ORDERS

Staging Area 11
C$_0

Transform & Integrate

Target (Oracle)
SALES

55 33
I$_SALES

LINES

Extract/Join/Transform Join/Transform

C$_1

22
CORRECTIONS File Extract/Transform

4-32


Case #3 in ODI
Staging Area in the Source

Target

Source Sets

Staging Area

4-33

Note Staging Area Syntax

The choice of the staging area determines the syntax used by all mappings, filters and joins executed there.

4-34


Which KMs for What Flow?

When processing happens between two data servers, a data transfer KM is required.
- Before integration (Source to Staging Area): requires an LKM, which is always multi-technology.
- At integration (Staging Area to Target): requires a multi-technology IKM.

When processing happens within a data server, it is entirely performed by the server.
- A single-technology IKM is required. No data transfer is performed.

4-35

Which KMs for What Flow?


Four possible arrangements:
- Staging area on source: no LKM needed for the loading phase; a multi-technology IKM handles integration.
- Staging area on target: a multi-technology LKM loads the data; a single-technology IKM handles integration.
- Staging area on a third server: a multi-technology LKM loads the data; a multi-technology IKM handles integration.
- Source, staging area and target in the same location: no LKM needed; a single-technology IKM handles everything.
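The four arrangements can be sketched as a small decision function (Python used purely for illustration; the function and location names are ours, not ODI's):

```python
def required_kms(source_loc: str, staging_loc: str, target_loc: str) -> dict:
    """Return which Knowledge Modules an interface needs, based on where
    the source, staging area and target live (simplified illustration)."""
    kms = {}
    # Loading phase: an LKM is needed only to move data between servers.
    kms["LKM"] = None if source_loc == staging_loc else "multi-technology LKM"
    # Integration phase: a multi-tech IKM moves data from staging to target;
    # a single-tech IKM works entirely inside one server.
    kms["IKM"] = ("single-technology IKM" if staging_loc == target_loc
                  else "multi-technology IKM")
    return kms

# Staging area on target: an LKM loads, a single-tech IKM integrates.
print(required_kms("sybase_srv", "oracle_srv", "oracle_srv"))
# Staging area on source: no LKM, a multi-tech IKM pushes to the target.
print(required_kms("sybase_srv", "sybase_srv", "oracle_srv"))
```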

4-36


More on KMs

KMs can skip certain operations


Unnecessary temporary tables will not be created

Some KMs lack certain features


Multi-technology IKMs cannot perform Flow control. IKMs to File, JMS, etc. do not support Static control.

All KMs have configurable Options

4-37

Case #1
Using the Target as the Staging Area
Source (Sybase): ORDERS and LINES are loaded into C$_0 using the LKM SQL to Oracle; the CORRECTIONS file is loaded into C$_1 using the LKM File to Oracle (SQLLDR). On the target (Oracle), which hosts the Staging Area, the IKM Oracle Incremental Update builds I$_SALES and integrates the flow into SALES.

4-38


Case #2
Using a third server as the Staging Area
Source (Sybase): ORDERS and LINES are loaded into C$_0 using the LKM SQL to SQL; the CORRECTIONS file is loaded into C$_1 using the LKM File to SQL. On the Sunopsis Memory Engine (Staging Area), the flow is assembled in I$_SALES, and the IKM SQL to SQL Append writes it to SALES on the target (Oracle).

4-39

Case #3
Using the Source as the Staging Area

On the source (Sybase), which hosts the Staging Area, ORDERS and LINES feed C$_0 and C$_1 directly; the CORRECTIONS file is loaded into C$_1 using the LKM File to SQL. The flow is assembled in I$_SALES on the source, and the IKM SQL to SQL Append writes it to SALES on the target (Oracle).

4-40


How to Specify an LKM

1. Go to the interface's Flow tab.
2. Select the Source Set from which data will be extracted. The KM property panel opens.
3. Change the Name of the Source Set (optional).
4. Select an LKM.
5. Modify the LKM's Options.

4-41

Note Default KMs

ODI chooses a default KM wherever possible. A flag appears in the flow if a default KM is used or if no KM is set.

4-42


How to Specify an IKM


1. Go to the interface's Flow tab.
2. Select the Target. The KM property panel opens.
3. Check/Uncheck Distinct Rows.
4. Select an IKM.
5. Set the IKM's Options.

4-43

Common KM Options
The following options appear in most KMs:
- INSERT / UPDATE: Should data be inserted/updated in the target?
- COMMIT: Should the interface commit the inserts/updates? If not, a transaction can span several interfaces.
- FLOW_CONTROL: Should data in the flow be checked?
- STATIC_CONTROL: Should data in the target be checked after the interface?
- TRUNCATE / DELETE_ALL: Should the target data be truncated or deleted before integration?
- DELETE_TEMPORARY_OBJECTS: Should temporary tables and views be deleted, or kept for debugging purposes?

4-44
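A sketch of how these options switch generated steps on or off (the option names follow the slide; the step list itself is our own illustration, not actual KM code):

```python
def plan_steps(options: dict) -> list:
    """Sketch of how common IKM options toggle generated steps
    (illustrative only; real KMs generate SQL from templates)."""
    steps = []
    if options.get("TRUNCATE"):
        steps.append("truncate target table")
    elif options.get("DELETE_ALL"):
        steps.append("delete all rows from target table")
    if options.get("FLOW_CONTROL"):
        steps.append("check flow data with the CKM, isolate errors")
    steps.append("insert/update flow data into target")
    if options.get("COMMIT"):
        steps.append("commit transaction")
    if options.get("STATIC_CONTROL"):
        steps.append("check target data after integration")
    if options.get("DELETE_TEMPORARY_OBJECTS"):
        steps.append("drop temporary tables and views")
    return steps

print(plan_steps({"TRUNCATE": True, "FLOW_CONTROL": True, "COMMIT": True}))
```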


Note The Staging Area Trade-off


The KM and Staging Area should be chosen to reduce the quantity of data transferred, yet provide the required transformation and data checking capabilities.

4-45

Lesson Summary
Using multiple, heterogeneous source datastores

Locating the Staging Area

Understanding the Flow

Creating joins

Choosing Knowledge Modules

4-46


4-47


Oracle Data Integrator Quick Introduction to Metadata Management

5
5-1

Objectives
After completing this lesson, you should:

- Have a general understanding of the metadata in ODI
- Be ready for a more exploratory hands-on tying together metadata and advanced transformations

5-2

Metadata in ODI
Metadata in ODI are available in the Models view. Each model contains the tables from a database schema. A model can contain all tables from a schema, or only a subset of them. Models can contain sub-models for easier organization of the tables from a schema.

5-3

A Special Case: XML


ODI comes with its own JDBC driver for XML files. The XML file will be viewed as a database schema where:
- Elements become tables
- Attributes of the elements become columns of the tables

To maintain the hierarchical view of the XML file, the driver will automatically create primary keys and foreign keys. To retain the order in which the records appear in the XML file, the driver will add an Order column.
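The relational view the driver builds can be illustrated with Python's standard XML parser. This is a toy rendering only: the key and order column names used below (ORDER_PK, ORDER_FK, LINE_ORDER) are our own, not the driver's actual names.

```python
import xml.etree.ElementTree as ET

doc = """<order id="1">
  <line product="chair" qty="2"/>
  <line product="desk" qty="1"/>
</order>"""

root = ET.fromstring(doc)
# Each element type becomes a table; attributes become columns.
# The driver adds a primary key per element, a foreign key to the
# parent element, and an order column preserving document order.
order_row = {"ORDER_PK": 1, "id": root.get("id")}
line_rows = [{"LINE_PK": i + 1, "ORDER_FK": order_row["ORDER_PK"],
              "LINE_ORDER": i,
              "product": el.get("product"), "qty": el.get("qty")}
             for i, el in enumerate(root.findall("line"))]
print(line_rows)
```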

5-4

Lesson summary
Introduction to Models

5-5

5-6

Oracle Data Integrator Data Quality (Integrity Control)

6
6-1

Objectives
After completing this lesson, you will:
Know the different types of data quality business rules ODI manages. Be able to enforce data quality with ODI. Understand how to create constraints on datastores.

6-2

When to Enforce Data Quality?


The IS can be broken into 3 sub-systems
Source application(s) Data integration process(es) Target application(s)

Data Quality should be managed in all three sub-systems ODI provides the solution for enforcing quality in all three.

6-3

Data Quality Business Rules


Defined by designers and business analysts Stored in the Metadata repository May be applied to application data Defined in two ways:
Automatically retrieved with other metadata Rules defined in the databases Obtained by reverse-engineering Manually entered by designers User-defined rules

6-4

From Business Rules to Constraints


De-duplication rules
Primary Keys Alternate Keys Unique Indexes

Reference rules
Simple: column A = column B Complex: column A = function(column B, column C)

Validation rules
Mandatory Columns Conditions

6-5

Overview of the Data Quality System

In the integration process between the sources (ORDERS, LINES, CORRECTIONS file) and the target (SALES), erroneous rows are isolated into error tables at several points:
- Static Control on the sources is started automatically (scheduled) or manually.
- Flow Control is started by interfaces during execution.
- Static Control on the target is started by interfaces after integration, by packages, or manually.
- Error recycling is performed by interfaces.

6-6

Static/Flow Control Differences


Static Control (static data check)
Checks whether data contained in a datastore respects its constraints. Requires a primary key on the datastore.

Flow Control (dynamic data check)


Enforces target datastore constraints on data in the flow. Requires an update key defined in the interface. You can recycle erroneous data back into the flow.

6-7

Properties of Data Quality Control


Static and flow checks can be triggered:
by an interface (FLOW and/or STATIC) by a package (STATIC) manually (STATIC)

They require a Check Knowledge Module (CKM), are monitored through Operator, and copy invalid rows into the Error table:
Flow control then deletes them from the flow. Static control leaves them in the datastores. The Error table can be viewed from Designer or any SQL tool.
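The check-and-isolate pattern can be sketched in a few lines (a toy illustration: the real CKM generates SQL against the error tables, whereas the function below works on in-memory rows):

```python
def check_flow(rows, mandatory, condition):
    """Minimal sketch of a flow check: rows violating a mandatory-column
    rule or a condition are copied to an error list and removed from
    the flow, mimicking what a CKM does in SQL."""
    flow, errors = [], []
    for row in rows:
        if row.get(mandatory) is None:
            errors.append({**row, "ERR_MESS": f"{mandatory} is null"})
        elif not condition(row):
            errors.append({**row, "ERR_MESS": "condition failed"})
        else:
            flow.append(row)
    return flow, errors

rows = [{"ID": 1, "AMOUNT": 50}, {"ID": None, "AMOUNT": 10},
        {"ID": 3, "AMOUNT": -4}]
flow, errors = check_flow(rows, "ID", lambda r: r["AMOUNT"] >= 0)
print(len(flow), len(errors))  # prints: 1 2
```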

6-8

Constraints in ODI
Mandatory Columns Keys
Primary Keys Alternate Keys Indexes

References
Simple: column A = column B Complex: column A = function(column B)

Conditions

6-9

Mandatory Columns

1. Double-click the column in the Models view. 2. Select the Control tab. 3. Check the Mandatory option. 4. Select when the constraint should be checked (Flow/Static).

6-10

Keys

1. Select the Constraints node under the datastore.
2. Right-click, select Insert Key.
3. Fill in the Name.
4. Select the Key or Index Type.
5. Go to the Columns tab.
6. Add/remove columns from the key.

6-11

Checking Existing Data with a New Key

1. Go to the Control tab.
2. Select whether the key is Defined in the Database, and is Active.
3. Select when the constraint must be checked (Flow/Static).
4. Click the Check button to perform a synchronous check of the key.

Number of duplicate rows

6-12

Note Synchronous Check Limitations


Important! Synchronous checks only work on SQL-based systems. The result of the check is not saved.
6-13

Creating a Reference

1. Select the Constraints node under the datastore.
2. Right-click, select Insert Reference.
3. Fill in the Name.
4. Select the reference type: User Reference or Complex Reference.
5. Select a Parent Model and Table. Set the model and table to <undefined> to manually enter the catalog, schema and table name.

6-14

Creating a User Reference

1. Go to the Columns tab.
2. Click the Add button.
3. Select the column from the Foreign Key table.
4. Select the corresponding column from the Primary Key table.
5. Repeat for all column pairs in the reference.

6-15

Creating a Complex Reference

1. Go to the Expression tab.
2. Set the Alias for the Primary Key table.
3. Code the Expression: prefix columns with the tables' aliases, and use the Expression Editor.

6-16

Checking Existing Data with a New Reference

1. Go to the Control tab.
2. Choose when the constraint should be checked (Flow/Static).
3. Click the Check button to immediately check the reference. Not possible for heterogeneous references.

6-17

Creating a Condition

1. Right-click the Constraints node, select Insert Condition.
2. Fill in the Name.
3. Select the ODI Condition type.
4. Edit the condition clause, using the Expression Editor.
5. Type in the error message for the condition.

6-18

Checking Existing Data with a New Condition

1. Go to the Control tab.
2. Select when the constraint must be checked (Flow/Static).
3. Click the Check button to perform a synchronous check of the condition.

6-19

Data Quality in the Interfaces

6-20


How to Enforce Data Quality in an Interface


The general process:
1. Enable Static/Flow Control.
2. Set the options.
3. Select the constraints to enforce: table constraints, not-null columns.
4. Review the erroneous records.

6-21

How to Enable Static/Flow Control


1. 2. 3. Go to the interfaces Flow tab. Select the target datastore.
The IKM properties panel appears.

4.

Set the FLOW_CONTROL and/or STATIC_CONTROL IKM options to Yes. Set the RECYCLE_ERRORS to Yes, if you want to recycle errors from previous runs

6-22


How to Set the Options


1. 2. 3. 4. Select the interfaces Controls tab. Select a CKM. Set up the CKM Options. Set the Maximum Number of Errors Allowed.
Leave blank to allow an unlimited number of errors. To specify a percentage of the total number of integrated records, check the % option.

6-23

How to Select Which Constraints to Enforce


For flow control:

For most constraints:
1. Select the interface's Controls tab.
2. For each constraint you wish to enforce, select Yes.

For Not Null constraints:
1. Select the interface's Diagram tab.
2. Select the Target datastore column that you wish to check for nulls.
3. In the column properties panel, select Check Not Null.
6-24


Differences Between Control Types


Launched via: a CKM, in both cases.
Static control: defined on the Model; options defined on the Model; constraints defined on the Model; invalid rows deleted (default KM behavior): Never, though deleting them from the datastore is Possible via KM options.
Flow control: defined on the Interface; options defined on the Interface; constraints defined on the Model; invalid rows deleted from the flow: Always.

6-25

How to Review Erroneous Records


First, execute your interface.
To see the number of records:
1. Select the Execution tab.
2. Find the most recent execution. The number of errors encountered by the interface is displayed.
To see which records were rejected:
1. Select the target datastore in the Models view.
2. Right-click > Control > Errors.
3. Review the erroneous rows.

6-26


Lesson summary
Enabling Quality Control

Manually creating constraints

Data quality business rules

How to enforce data quality

Setting Options

6-27

6-28


Oracle Data Integrator Metadata Management

7
7-1

Objectives
After completing this lesson, you should understand:

Why Metadata are important in ODI Where to find your database metadata in ODI How to import Metadata from your databases How to use ODI to generate your models

7-2

Why Metadata?
ODI is strongly based on the relational paradigm. In ODI, data are handled through tabular structures defined as datastores. Datastores are used for all types of real data structures: database tables, flat files, XML files, JMS messages, LDAP trees, etc. The definition of these datastores (the metadata) is used in the tool to design the data integration processes. Defining the datastores is the starting point of any data integration project.

7-3

Models

7-4

Model Description
Models are the objects that store the metadata in ODI. A model contains the description of a relational data model: a group of datastores stored in a given schema on a given technology. A model typically contains metadata reverse-engineered from the real data model (database, flat file, XML file, COBOL copybook, LDAP structure). Database models can also be designed in ODI; the appropriate DDL can then be generated by ODI for all necessary environments (development, QA, production).

7-5

Terminology
All the components of relational models are described in the ODI metadata (relational model concept, then its description in ODI):
- Table; Column → Datastore; Column
- Not Null; Default value → Not Null / Mandatory; Default value
- Primary keys; Alternate keys → Primary keys; Alternate keys
- Indexes; Unique indexes → Not unique indexes; Alternate keys
- Foreign key → Reference
- Check constraint → Condition

7-6

Additional Metadata
Filters
Apply when data is loaded from a datastore.

Heterogeneous references
Link datastores from different models/technologies

Additional technical/functional metadata


OLAP type on datastores Slowly changing dimension behavior on columns Read-only data types/columns User-defined metadata (FlexFields)

7-7

Importing Metadata: The Reverse Engineering Process

7-8

Two Methods for Reverse Engineering


Standard reverse-engineering
Uses JDBC connectivity features to retrieve metadata, then writes it to the ODI repository. Requires a suitable driver

Customized reverse-engineering
Reads metadata from the application/database system repository, then writes it to the ODI repository. Uses a technology-specific strategy, implemented in a Reverse-engineering Knowledge Module (RKM).

7-9

Standard vs. Customized Reverse-Engineering


File-specific reverse-engineering
Fixed format
COBOL copybooks

ODI Repository
Model (Metadata)

Oracle Data Integrator

Delimited format

MS SQL Server

JDBC Driver
Standard Reverse-engineering

Data Model

System tables

Customized Reverse-engineering

7-10

Other Methods for Reverse-Engineering

Delimited format reverse-engineering


File parsing built into ODI.

Fixed format reverse-engineering


Graphical wizard, or through COBOL copybook for Mainframe files.

XML file reverse-engineering (Standard)


Uses Sunopsis JDBC driver for XML.

LDAP directory reverse-engineering (Standard)


Uses Sunopsis JDBC driver for LDAP.

7-11

Note
Reverse-engineering is incremental. New metadata is added, but old metadata is not removed.

7-12

Reverse Engineering In Action

7-13

Create and Name the New Model

1. Go to the Models view.
2. Select Insert Model.
3. Fill in the Name (and Code).
4. Select the model Technology.
5. Select the Logical Schema where the model is found.
6. Fill in the Description (optional).

7-14

Note
A model is always defined in a given technology. If you change a technology, you must check every object related to that model.
7-15

How to Define a Reverse-Engineering Strategy


1. Go to the Reverse tab.
2. Select the Reverse-engineering type.
3. Select the Context for reverse-engineering.
4. Select the Object Type (optional).
5. Type in the object name Mask and Characters to Remove for the Table Alias (optional).
6. If customized: select the RKM and the Logical Agent.

7-16

Optional: Selective Reverse-Engineering

1. Go to the Selective Reverse tab (Standard reverse only).
2. Check the Selective Reverse option.
3. Select New Datastores and/or Existing Datastores.
4. Click Objects to Reverse.
5. Select the datastores to reverse-engineer.
6. Click the Reverse button.

7-17

How to Start the Process


If using customized reverse-engineering:
1. Click the Reverse button.
2. Choose a log level, then click OK.
3. Use Operator to see the results.

If using standard reverse-engineering:
1. Optionally, set up Selective Reverse.
2. Click the Reverse button.
3. Follow the progress in the status bar.

7-18

Generating Metadata: The Common Format Designer

7-19

Add Elements Missing From Models


Some metadata cannot be reverse-engineered
JDBC driver limitations

Some metadata cannot exist in the data servers


No constraints or keys on files or JMS messages
Heterogeneous joins
OLAP, SCD, etc.
User-defined metadata

Some business rules are not implemented in the data servers.


Models implemented with no constraints Certain constraints are implemented only at the application level.

7-20


Fleshing Out Models


ODI enables you to add, remove or edit any model element manually.
You do this in Designer.

The model Diagram is a graphical tool to edit models.


Requires the Common Format Designer component. You can update the database with your changes.

7-21

Lesson summary

Relational models

Reverse-engineering

Fleshing out models: why and how

7-22


7-23


Oracle Data Integrator Topology: Connecting to the World

8
8-1

Objectives
After completing this course, you will:
Understand the basic concepts behind the Topology interface. Understand logical and physical architecture. Know how to plan a Topology. Have learnt current best practices for setting up a Topology.

8-2

What is the Topology?
The Topology is the representation of the information system in ODI:
- Technologies: Oracle, DB2, File, etc.
- Datatypes for each technology
- Data servers for each technology
- Physical schemas under each data server
- ODI agents (run-time modules)
- Definition of languages and actions

8-3

The Physical Architecture

8-4

Properties of Physical Schemas


An ODI physical schema always consists of 2 data server schemas:
The Data Schema, which contains the datastores The Work Schema, which stores temporary objects

A data server schema is technology-dependent.


Catalog Name and/or Schema Name Example: Database and Owner, Schema

A data server has:


One or more physical schemas One default physical schema for server-level temporary objects

8-5

Concepts in Reality

Technology → Data server → Schema:
- Oracle → Instance → Schema
- Microsoft SQL Server → Server → Database/Owner
- Sybase ASE → Server → Database/Owner
- DB2/400 → Server → Library
- Teradata → Server → Schema
- Microsoft Access → Database → (N/A)
- JMS Topic → Router → Topic
- File → File Server → Directory

8-6

Important Notes
It is strongly recommended that for each data server you create a dedicated area for ODI's temporary objects and use it as the Work Schema. Under each data server, define a physical schema for each sub-division of the server that will be used.

8-7

Example Infrastructure
Production site: Boston Windows
MS SQL Server

Linux
Oracle 9i
ACCOUNTING

db_dwh db_purchase

Oracle 10g
SALES

Production site: Tokyo Windows


MS SQL Server A

Windows
MS SQL Server B

Linux
Oracle

db_dwh db_purchase

ACCT SAL

8-8

The Physical Architecture in ODI


MSSQL-Boston
db_dwh db_purchase

Oracle-Boston9
ACCOUNTING

Oracle-Boston10
SALES

Legend
Data server
Physical schema

MSSQL-TokyoA
dwh

MSSQL-TokyoB

Oracle-Tokyo
ACCT

purchase

SAL

8-9

Prerequisites to Connect to a Server


Drivers (JDBC, JMS)
Drivers must be installed in /oracledi/drivers This should be done on all machines connecting to the data server.

Connection settings (server dependant)


Machine name (IP Address), port User/Password Instance/Database Name,

8-10

Important Note
The user name is used to access all the underlying schemas, databases or libraries in the data server. Make sure this user account has sufficient privileges.

8-11

Creating a Data Server


1. Right-click the technology of your data server.
2. Select Insert Data Server.
3. Fill in the Name.
4. Fill in the connection settings: Data Server User and Password, or (optional) a JNDI Connection.
8-12

Creating a Data Server - JDBC


1. Select the JDBC tab.
2. Fill in the JDBC driver (use the select button to choose it).
3. Fill in the JDBC URL (use the select button to choose a template).
4. Test the connection.
5. Click OK.

8-13

The JDBC URL


The JDBC driver uses a URL to connect to a database system.
The URL describes how to connect to the database system. The URL may also contain driver-specific parameters

Use the select button to choose the driver class name and URL template.
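The placeholder substitution can be sketched as plain string replacement (Python for illustration; `fill_url` is our own helper, not an ODI function):

```python
def fill_url(template: str, **params: str) -> str:
    """Fill a JDBC URL template such as jdbc:oracle:thin:@<host>:<port>:<sid>
    by replacing each <name> placeholder with the given value."""
    for name, value in params.items():
        template = template.replace(f"<{name}>", value)
    return template

url = fill_url("jdbc:oracle:thin:@<host>:<port>:<sid>",
               host="dbserver", port="1521", sid="ORCL")
print(url)  # jdbc:oracle:thin:@dbserver:1521:ORCL
```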

8-14

Testing a Data Server connection


1. Click the Test button.
2. Select the Agent to test this connection. Local (No Agent) performs the test with the Topology Manager GUI.
3. Click Test. The driver must be installed.

8-15

Note Test the Connection

Always test the connection to check that the data server is correctly configured.

8-16

Creating a Physical Schema


1. Right-click the data server and select Insert Physical Schema.
2. Select or fill in the Data Schema and the Work Schema.
3. Select whether this is the Default schema.
4. Click OK. A warning appears.

8-17

The Logical Architecture

8-18

What is a Logical Schema? Developers should not have to worry about the actual location of the data servers, or about updates to user names, IP addresses, passwords, etc. To isolate them from the actual physical layer, the administrator creates a Logical Schema, which is simply an alias for the physical layer.

8-19

Alias vs. Physical Connection


Datawarehouse
(Logical Schema)

Logical Architecture: the Alias

Physical Architecture: the Physical Connection


Windows
MS SQL Server db_dwh

User: Srv_dev Password: 12456 IP:10.1.3.195 Database: db_dwh

Development site: New York, NY

8-20


Modifications of the Physical Connection


Datawarehouse
(Logical Schema)

Logical Architecture: the Alias


Changes in the actual physical information have no impact on the developers, who always refer to the same logical alias

Physical Architecture: the Physical Connection


Windows
MS SQL Server db_dwh

User: Srv_prod Password: 654321 IP:10.1.2.221 Database: db_dwh

Production Server: Houston, TX

8-21

Mapping Logical and Physical Resources


Datawarehouse
(Logical Schema)

Logical Architecture
But changing the connectivity from one server to the other can become painful

Physical Architecture
Windows
MS SQL Server db_dwh

Windows
MS SQL Server A dwh

Windows
MS SQL Server db_dwh db_purchase

Development site: New York, NY

QA: New York

Production site: Houston, TX

8-22


Mapping Logical and Physical Resources


Datawarehouse
(Logical Schema)

Logical Architecture Contexts


Contexts: Development, QA, Production

For that purpose, the definition of Contexts allows you to attach more than one physical definition to a Logical Schema

Physical Architecture
Windows
MS SQL Server db_dwh


Windows
MS SQL Server A dwh

Windows
MS SQL Server db_dwh db_purchase

Development site: New York

Production site: Tokyo

Production site: Boston

8-23

Mapping Logical and Physical Resources


CRM
(Logical Schema)

Datawarehouse
(Logical Schema)

Purchase
(Logical Schema)

Logical Architecture

Contexts Physical Architecture

Production

Production

Production

Unix

Windows
MS SQL Server db_dwh db_purchase

Of course, a given context will map all physical connections

MS SQL Server

CRM

Production site: Boston

8-24


Note Design-Time vs. Run-Time


In ODI, the design of data integration processes is done with logical resources. At run-time, execution is started in a particular context, and ODI will select the physical resources associated with that context.
8-25

Notes
Logical resources may remain unmapped to any physical resources in a given context. Unmapped resources cannot be used in that context. A single physical resource may be mapped in several contexts. In a given context, a logical resource is mapped to at most one physical resource.
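The mapping rule (at most one physical resource per logical schema per context) can be sketched as a lookup table (the topology entries below are illustrative, not from the slides):

```python
# Hypothetical topology: each (logical schema, context) pair maps to
# at most one physical schema.
topology = {
    ("Datawarehouse", "Development"): "MSSQL-NY.db_dwh",
    ("Datawarehouse", "Production"):  "MSSQL-Houston.db_dwh",
    ("Purchase", "Production"):       "MSSQL-Houston.db_purchase",
}

def resolve(logical_schema: str, context: str) -> str:
    """Return the physical schema behind a logical schema in a context;
    unmapped resources cannot be used and raise an error."""
    try:
        return topology[(logical_schema, context)]
    except KeyError:
        raise LookupError(f"{logical_schema} is unmapped in context {context}")

print(resolve("Datawarehouse", "Production"))  # MSSQL-Houston.db_dwh
```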

8-26


Logical Architecture/Context views

Technology

Logical Schema Context Logical Agent

The same technologies are displayed in Physical and Logical Architecture views. You can reduce the number of technologies displayed
Windows > Hide Unused Technologies

8-27

Linking Logical and Physical Architecture

1. Double-click the context.
2. Go to the Agents tab.
3. For each logical agent, select the corresponding physical agent in the context.
4. Go to the Schemas tab.
5. For each logical schema, select the corresponding physical schema in the context.
6. Click OK.

8-28


Planning Ahead for Topology

8-29

Planning the Topology


1. Identify the physical architecture: all data servers, all physical schemas, required physical agents.
2. Identify the contexts.
3. Define the logical architecture: name the logical schemas and the logical agents.
4. On paper, write out a matrix of logical/physical mappings. This matrix helps you plan your topology.

8-30


Matrix of Logical/Physical Mappings

Logical Schemas (rows) by Contexts (columns):
- Accounting: Development maps to ACCOUNTING in Oracle on Windows; Tokyo maps to ACCT in Oracle on Linux.
- Sales: Development maps to SALES in Oracle on Windows.
8-31

Advanced Topology: More on JDBC

8-32


Creating a Data Server JNDI


1. Select the JNDI tab.
2. Set the JNDI parameters: Authentication, User/Password, Protocol, Driver, URL, Resource, plus any extra properties.
3. Run the connection Test.
4. Click OK.

8-33

JDBC Driver
A JDBC driver is a Java driver that provides access to a type of database.
Type 4: Direct access via TCP/IP. Type 3: Three-tier architecture. Type 2: Requires the database client layer. Type 1: Generic driver to connect to ODBC data sources.

Drivers are identified by a Java class name.

The class must be present on the classpath.

Drivers are distributed as .jar or .zip files


Should be copied to the /oracledi/drivers directory.

8-34


Some Examples of Drivers and URLs

Technology: Driver / URL
- Oracle: oracle.jdbc.driver.OracleDriver / jdbc:oracle:thin:@<host>:<port>:<sid>
- Microsoft SQL Server: com.inet.tds.TdsDriver / jdbc:inetdae7:<host>:<port>
- Sybase (ASE, ASA, IQ): com.sybase.jdbc2.jdbc.SybDriver / jdbc:sybase:Tds:<host>:<port>/[<db>]
- DB2/UDB (type 2): COM.ibm.db2.jdbc.app.DB2Driver / jdbc:db2:<database>
- DB2/400: com.ibm.as400.access.AS400JDBCDriver / jdbc:as400://<host>[;libraries=<library>]
- Teradata: com.ncr.teradata.TeraDriver / jdbc:teradata://<host>:<port>/<server>
- Microsoft Access (type 1): sun.jdbc.odbc.JdbcOdbcDriver / jdbc:odbc:<odbc_dsn_alias>
- File (Sunopsis driver): com.sunopsis.jdbc.driver.file.FileDriver / jdbc:snps:dbfile

8-35

Lesson summary
Defining your topology

Physical and logical agents

Overview of topology

Data servers & physical schemas

Logical schemas & contexts

8-36


8-37


Oracle Data Integrator Knowledge Modules

9
9-1

Objectives
After completing this lesson, you will:
Understand the structure and behavior of Knowledge Modules Be able to modify Knowledge Modules and create your own behavior

9-2

Definition
Knowledge Modules are templates of code that define integration patterns and their implementation. They are usually written to follow data integration best practices, but can be adapted and modified for project-specific requirements. Example:
When loading data from a heterogeneous environment, first create a staging table, then load the data into it. To load the data, use SQL*Loader. SQL*Loader needs a CTL file, so create the CTL file for it. When finished with the integration, remove the CTL file and the staging table.

9-3

Which KMs for What Flow?

When processing happens between two data servers, a data transfer KM is required.
- Before integration (Source to Staging Area): requires an LKM, which is always multi-technology.
- At integration (Staging Area to Target): requires a multi-technology IKM.

When processing happens within a data server, it is entirely performed by the server.
- A single-technology IKM is required. No data transfer is performed.

9-4

More on KMs

KMs can skip certain operations


Unnecessary temporary tables will not be created

Some KMs lack certain features


Multi-technology IKMs cannot perform Flow control. IKMs to File, JMS, etc. do not support Static control.

All KMs have configurable Options

9-5

Case #1
Using the Target as the Staging Area
Source (Sybase): ORDERS and LINES are loaded into C$_0 using the LKM SQL to Oracle; the CORRECTIONS file is loaded into C$_1 using the LKM File to Oracle (SQLLDR). On the target (Oracle), which hosts the Staging Area, the IKM Oracle Incremental Update builds I$_SALES and integrates the flow into SALES.

9-6

Case #2
Using a third server as the Staging Area
Source (Sybase): ORDERS and LINES are loaded into C$_0 using the LKM SQL to SQL; the CORRECTIONS file is loaded into C$_1 using the LKM File to SQL. On the Sunopsis Memory Engine (Staging Area), the flow is assembled in I$_SALES, and the IKM SQL to SQL Append writes it to SALES on the target (Oracle).

9-7

Case #3
Using the Source as the Staging Area

On the source (Sybase), which hosts the Staging Area, ORDERS and LINES feed C$_0 and C$_1 directly; the CORRECTIONS file is loaded into C$_1 using the LKM File to SQL. The flow is assembled in I$_SALES on the source, and the IKM SQL to SQL Append writes it to SALES on the target (Oracle).

9-8

KM Types
There are six types of knowledge modules:
- LKM (Loading): assembles data from source datastores to the staging area. Used by interfaces.
- IKM (Integration): uses a given strategy to populate the target datastore from the staging area. Used by interfaces.
- CKM (Check): checks data in a datastore or during an integration process. Used by interfaces and models.
- RKM (Reverse-engineering): retrieves the structure of a data model from a database; only needed for customized reverse-engineering. Used by models.
- JKM (Journalizing): sets up a system for Changed Data Capture to reduce the amount of data that needs to be processed. Used by models.
- SKM (Web Services): defines the code that will be generated to create Data Web Services (exposing data as a web service). Used by models.

9-9

Which KMs for What Flow?


Four possible arrangements:
- Staging area on source: no LKM needed for the loading phase; a multi-technology IKM handles integration.
- Staging area on target: a multi-technology LKM loads the data; a single-technology IKM handles integration.
- Staging area on a third server: a multi-technology LKM loads the data; a multi-technology IKM handles integration.
- Source, staging area and target in the same location: no LKM needed; a single-technology IKM handles everything.

9-10

Importing a New Knowledge Module


1. Right-click the project.
2. Select Import > Import Knowledge Module.
3. Browse for the import directory. ODI KMs are found in the impexp subdirectory.
4. Select one or more knowledge modules. Hold CTRL/SHIFT for multiple selection.
5. Click OK.

9-11

Description
A Knowledge Module is made of steps. Each step has a name and a template for the code to be generated. These steps are listed in the Details tab. The code that will be generated by ODI will list the same step names

9-12

Details of the Steps


Details of the steps are generic: the source and target tables are not known, only the technologies are known Substitution Methods are used as placeholders for the table names and column names Parameters of the substitution methods let you select which tables or columns are used in the KM

9-13

Options
KMs have options that will
Allow users to turn options on or off Let users specify or modify values used by the KM

Options are defined in the Projects tree, under the KM.
Options are used in the KM code with the substitution method <%=odiRef.getOption("OPTION_NAME")%>.
On/Off options are set in the Options tab of each step of the KM.

9-14

Most Common Methods


getInfo
Returns general information on the current task.

getColList
Returns a list of columns and expressions. The result will depend on the current phase (Loading, integration, control).

getTargetTable
Returns general information on the current target table.

getTable
Returns the full name of the temporary or permanent tables handled by ODI.

getObjectName
Returns the full name of a physical object, including its catalog and schema.

9-15

getInfo Method
Syntax in a KM or Procedure
<%=snpRef.getInfo("pPropertyName")%>

A sample of the possible values for pPropertyName:
- SRC_CATALOG: name of the data catalog in the source environment
- DEST_USER_NAME: user name of the destination connection
- CT_ERR_TYPE: error type (F: Flow, S: Static)

Example: The current source connection is: <%=odiRef.getInfo("SRC_CON_NAME")%> on server: <%=odiRef.getInfo("SRC_DSERV_NAME")%>

9-16

getColList Method
Values returned according to the phase:
Loading (in an LKM)
- To build loading tables
- To feed loading tables
Integration (in an IKM)
- To build the integration table
- To feed the integration table
Control (in a CKM)
- To build the integration table and feed it
- To control the constraints

9-17

getColList Method
Syntax
<%=snpRef.getColList("pStart", "pPattern", "pSeparator", "pEnd", "pSelector")%>

Where:
- pStart is the string to insert before the pattern
- pPattern is the string used to identify the returned values. Ex: [COL_NAME] returns a list of column names. Several patterns can be declared.
- pSeparator is the character to insert between the returned patterns
- pEnd is the string to insert at the end of the list
- pSelector is a string that defines a Boolean expression used to filter the elements of the initial list

9-18

getColList Examples
Retrieve a list of columns and their data types (loading phase):
<%=snpRef.getColList("(", "[COL_NAME] [SOURCE_CRE_DT] null", ",\n", ")", "")%>
Returns, for instance:

(CITY_ID numeric(10) null, CITY_NAME varchar(50) null, POPULATION numeric(10) null)

Retrieve the list of columns of the target to create the loading tables:
<%=snpRef.getColList("", "[CX_COL_NAME]\t[DEST_CRE_DT] " + snpRef.getInfo("DEST_DDL_NULL"), ",\n", "","")%>

9-19

getColList Examples
Retrieve the list of columns to be updated in the target (integration phase):
<%=snpRef.getColList("(", "[COL_NAME]", ",\n", ")", "INS OR UPD")%>

9-20

10

Modifying a KM
Very few KMs are ever created from scratch; most are extensions or modifications of existing KMs. To speed up development, duplicate existing steps and modify them: this will prevent typos in the syntax of the odiRef methods. If you modify a KM that is being used, all interfaces using that KM will inherit the new behavior. Remember to make a copy of the KM if you do not want to alter existing interfaces; then modify the copy, not the original. Modifying a KM that is already in use is a very efficient way to change the data flow and affect all existing developments at once.

9-21

Lesson summary

Understand KMs

Modify / Create KMs

9-22

11

9-23

12

Oracle Data Integrator Changed Data Capture

10
10-1

Objectives
After completing this lesson, you will:
Understand why CDC can be needed
Understand the CDC infrastructure in ODI
Know what types of CDC implementations are possible with ODI
Know how to set up CDC

10-2

Introduction
The purpose of Changed Data Capture is to allow applications to process changed data only Loads will only process changes since the last load The volume of data to be processed is dramatically reduced CDC is extremely useful for near real time implementations, synchronization, Master Data Management

10-3

CDC Techniques in General


Multiple techniques are available for CDC
Trigger-based: ODI will create and maintain triggers to keep track of the changes.
Log-based: for some technologies (Oracle, AS/400), ODI can retrieve changes from the database logs.
Timestamp-based: if the data is time-stamped, processes written with ODI can filter the data by comparing the timestamp value with the last load time. This approach is limited as it cannot process deletes; the data model must have been designed properly.
Sequence number: if the records are numbered in sequence, ODI can filter the data based on the last value loaded. This approach is limited as it cannot process updates and deletes; the data model must have been designed properly.
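As an illustration of the timestamp-based approach, the load boils down to a filter on the source table. The LAST_UPDATE column, the SRC_ORDERS table and the LAST_LOAD_DATE variable are hypothetical names:

```sql
-- Only pick up rows changed since the last load;
-- :PROJECT.LAST_LOAD_DATE is an ODI variable used in bind mode
SELECT ORDER_ID, STATUS, AMOUNT
FROM   SRC_ORDERS
WHERE  LAST_UPDATE > :PROJECT.LAST_LOAD_DATE
```

Note that deleted rows never match this filter, which is exactly the limitation described above.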

10-4

CDC in ODI
CDC in ODI is implemented through a family of KMs: the Journalizing KMs. These KMs are chosen and set in the model. Once the journals are in place, the developer can choose from the interface whether to use the full data set or only the changed data.

10-5

CDC Infrastructure in ODI


CDC in ODI relies on a journal table. This table is created by the KM and loaded by specific steps implemented by the KM. It has a very simple structure:
- Primary key of the table being checked for changes
- Timestamp to keep the change date
- A flag to allow for a logical lock of the records

A series of views is created to join this table with the actual data. When other KMs need to select data, they know to use the views instead of the tables.
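Concretely, assuming an Oracle target and a journalized CUSTOMER table with a numeric primary key, the journal table created by the JKM looks roughly like this (actual J$ layouts vary by KM version):

```sql
CREATE TABLE J$CUSTOMER (
  JRN_SUBSCRIBER VARCHAR2(400) NOT NULL,  -- which application consumes the change
  JRN_CONSUMED   VARCHAR2(1)   NOT NULL,  -- logical lock flag
  JRN_FLAG       VARCHAR2(1)   NOT NULL,  -- I (insert/update) or D (delete)
  JRN_DATE       DATE          NOT NULL,  -- change timestamp
  CUST_ID        NUMBER(10)    NOT NULL   -- primary key of the journalized table
)
```

The views generated alongside join J$CUSTOMER back to CUSTOMER on the primary key, so consumers see full rows for each recorded change.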
10-6

CDC Strategies and Infrastructure


Triggers directly update the journal table with the changes. Log-based CDC loads the journal table when the changed data is loaded to the target system:
- Update the journal table
- Use the views to extract from the data tables
- Proceed as usual
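For the trigger-based strategy, the JKM generates something along these lines (Oracle syntax; the T$/J$ names follow ODI conventions, but this sketch is simplified — the generated trigger also handles multiple subscribers, whereas this one assumes the default SUNOPSIS subscriber):

```sql
CREATE OR REPLACE TRIGGER T$CUSTOMER
AFTER INSERT OR UPDATE OR DELETE ON CUSTOMER
FOR EACH ROW
DECLARE
  V_FLAG VARCHAR2(1);
  V_ID   NUMBER(10);
BEGIN
  -- D for deletes, I for inserts and updates
  IF DELETING THEN
    V_FLAG := 'D';
    V_ID   := :OLD.CUST_ID;
  ELSE
    V_FLAG := 'I';
    V_ID   := :NEW.CUST_ID;
  END IF;
  INSERT INTO J$CUSTOMER (JRN_SUBSCRIBER, JRN_CONSUMED, JRN_FLAG, JRN_DATE, CUST_ID)
  VALUES ('SUNOPSIS', '0', V_FLAG, SYSDATE, V_ID);
END;
```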

10-7

Simple CDC Limitations


One issue with CDC is that as changed data gets processed, more changes occur in the source environment. As a result, data transferred to the target environment may be missing references. Example: processing changes for orders and order lines.
Load all the new orders into the target (11,000 to 25,000).
While we load these, 2 new orders come in: 25,001 and 25,002. These two orders are not processed as part of this load; they will be processed with the next load.
Then load the order lines: by default, all order lines are loaded, including the order lines for orders 25,001 and 25,002.
The order lines for 25,001 and 25,002 are rejected by the target database (invalid foreign keys).

10-8

Consistent CDC
The mechanisms put in place by Consistent CDC solve the issues faced with simple CDC. The difference is that child records are locked before the parent records are processed. As new parent and child records come in, both are ignored.

10-9

Consistent CDC: Infrastructure


Processing Consistent Set CDC consists of four phases:
Extend Window: compute the consistent parent/child sets and assign a sequence number to these sets.
Lock Subscriber: for the application processing the changes, record the boundaries of the records to be processed (between sequence number xxx and sequence number yyy). Note that changes keep happening in the source environment; other subscribers can be extending the window while we are processing the data.
Unlock Subscriber: after processing the changes, unlock the subscriber (i.e. record the value of the last sequence number processed).
Purge Journal: remove from the journal all the records that have been processed by all subscribers.

Note: all these steps can either be implemented in the Knowledge Modules or done separately, as part of the Workflow management.

10-10

Using CDC
Set a JKM in your model For all the following steps, right-click on a table to process just that table, or right-click on the model to process all tables of the model:
1. Add the table to the CDC infrastructure: right-click a table and select Changed Data Capture > Add to CDC.
2. For consistent CDC, arrange the datastores in the appropriate order (parent/child relationship): in the model definition, select the Journalized Tables tab and click the Reorganize button.
3. Add the subscriber (the default subscriber is SUNOPSIS): right-click a table and select Changed Data Capture > Add Subscribers.
4. Start the journals: right-click a table and select Changed Data Capture > Start Journal.

10-11

View Data / Changed Data


Data and changed data can be viewed from the model and from the interfaces In the model, right click on the table name and select Data to view the data or Changed Data Capture / Journal Data to view the changes From the interface, click on the caption of the journalized source table and select or unselect Journalized data only to view only the changes or all the data.

10-12

Using Journalized Tables


Keep in mind that only one journalized table can be used per interface. If you were to use two journalized tables, there is a very high likelihood that the data sets would be disjoint; no data would be loaded as a result.

10-13

Lesson summary
Implement CDC

Why CDC?

Types of CDC implementations

CDC Infrastructure

10-14

10-15

Oracle Data Integrator Workflow Management: The Packages

11
11-1

Objectives
In this lesson, you will:
Learn how ODI Packages are used to create a complete workflow. See how to create several different kinds of package steps. Learn how to execute a package.

11-2

What Is a Package?

Package: An organized sequence of steps that makes up a workflow. Each step performs a small task, and they are combined together to make the package.

11-3

How to Create a Package

1. Create and name a blank package.
2. Create the steps that make up the package:
- Drag interfaces from the Projects view onto the Diagram tab
- Insert ODI tools from the toolbox
3. Arrange the steps in order:
- Define the first step
- Define the success path
- Set up error handling


11-4

The Package Diagram


The package diagram window contains: the toolbar, the diagram itself, the toolbox for ODI tools, the steps (interface steps and ODI tool steps), and the properties of the selected step.

11-5

Package Diagram Toolbar


Toolbar buttons include: Execute package, Execute selected step, Edit selected step, Hide/show toolbox, Hide/show properties, Hide/show success links, Hide/show failure links, Print package, Page setup, Select, Next step on success, Next step on failure, Duplicate selection, Delete selection, and Rearrange selection. The Error button shows errors in the diagram.

11-6

How to Create an Interface Step


1. Expand the project and folder containing the interface. Expand the Interfaces node.
2. Drag the interface to the package diagram. The new step appears.
3. Optionally, change the Step Name in the Properties panel.

11-7

Note Interfaces Are Reusable


Interfaces can be reused many times in the same package or in different packages.

11-8

Note Interfaces Are Reusable


The interface is not duplicated, but referenced by the packages. Changes made in the interface will affect the execution of all packages using it. Interfaces can be reused many times in the same package or in different packages.

11-9

What Is an ODI Tool?

ODI tools Macros that provide useful functions to handle files, send emails, use web services, etc. Tools can be used as steps in packages.

11-10

How to Create an ODI Tool Step


1. In the Toolbox, expand the group containing the tool you want to add.
2. Click the tool.
3. Click the diagram. A step named after the tool appears.
4. Change the Step Name in the Properties panel.
5. Set the tool's Properties.
6. Click Apply to save.

11-11

Note Tool Steps Are Not Reusable


Tool steps cannot be reused, but can be duplicated. To create a reusable sequence of tool commands, you must create a Procedure.
11-12

Note Other Step Types


Other step types are available, such as variables, procedures, scenarios, or metadata. We will only be using tools and interfaces in this section of the training.
11-13

A Simple Package
First step Step on success

The first step must be defined


Right click > First Step

After each step the flow splits in two directions:


Success: ok (return code 0) Failure: ko (return code not 0)

Step on failure

This package executes two interfaces then archives some files. If one of the three steps fails, an email is sent to the administrator.

11-14

Note Error Button


Packages that are incorrectly sequenced appear with the Error button highlighted in the toolbar. Click it to see the details.
11-15

Executing a Package
1. Click the Execute button in the package window.
2. Open Operator:
- The package is executed as a session
- Each package step is a step
- Tool steps appear with a single task
- Interface steps show each command as a separate task

11-16

Note Atomic Testing


Test steps individually first! It is possible to execute a single step from the diagram.

11-17

Lesson Summary
Executing a package and viewing the log

Sequencing steps with error handling

Creating a package

Creating interface steps

Creating tool steps

11-18

11-19

10

Oracle Data Integrator Metadata Navigator

12
12-1

Objectives
After completing this lesson, you will:
Understand Metadata Navigator. Know how to use Metadata Navigator. Be able to explain the features in Metadata Navigator.

12-2

Purpose
Metadata Navigator gives access to the metadata repository from a Web interface. It is a read-only interface (see Lightweight Designer for an interactive interface). It can build graphical flow maps and data lineage based on the metadata.

12-3

Login to Metadata Navigator


The same user names and passwords can be used to log in to Metadata Navigator, as long as the user has sufficient privileges.
These privileges are set in the security interface

12-4

Overview
By default, Metadata Navigator shows the projects available in the repository. Each user's menu is customized based on their privileges.

12-5

Repository Objects
All objects in the repositories can be viewed. Hyperlinks let you jump from one object to the other.

12-6

Data Lineage
For data lineage, MN will list the source datastores and target datastores for any element. You can click on any icon in the graph to get further lineage

12-7

Details on the Data Lineage


The option Show Interfaces in the Lineage will show all the interfaces where the datastores are used as source or targets

12-8

Details of an Interface in the Lineage


If you click on an interface, you can see the detailed mappings

12-9

Flow Maps
Flow maps will show the dependencies between models (or datastores) and projects (or interfaces) You can choose the level of details that you want

12-10

Flow Map Details


This flow map shows that all TRG_* tables are used as targets in the Oracle Target project. It also shows that TRG_CITY and TRG_CUSTOMER are also used as sources in that same project.

12-11

Execution of a Scenario

Select the values from the drop-down menus Set the value for the parameters Execute!

12-12

Lesson summary
How to Use Metadata Navigator

What Is Metadata Navigator

How to Describe Metadata Navigator

12-13

12-14

Oracle Data Integrator Web Services

13
13-1

Objectives
After completing this lesson, you will:
Understand why Web Services matter
Understand the different types of Web Services
Know how to set up these web services

13-2

Environment
In this presentation, Apache Tomcat 5.5 or Oracle Containers for J2EE (OC4J) is used as the application server, with Apache Axis2 as the Web Services container. Examples may need to be adapted for other Web Services containers.

13-3

Types of Web Services


The Oracle Data Integrator Public Web Services are web services that enable users to leverage Oracle Data Integrator features in a service-oriented architecture (SOA). It provides operations such as starting a scenario. Data Services are specialized Web Services that provide access to data in datastores, and to captured changes in these datastores. These Web Services are automatically generated by Oracle Data Integrator and deployed to a Web Services container - normally a Java application server.

13-4

Public Web Services


To install the Oracle Data Integrator Public Web Services on Axis2:
1. In Axis2, go to the Administration page.
2. Select the Upload Service link.
3. Browse for the Oracle Data Integrator Web Services .aar file. It is located in the /tools/web_services/ sub-directory of the Oracle Data Integrator installation directory.
4. Click the Upload button. Axis2 uploads the Oracle Data Integrator Web Services.
You can now see Data Integrator Public Web Services in the Axis2 services list.

13-5

Usage For Public Web Services


Add Bulk Data Transformation to a BPEL Process:

Oracle SOA Suite: BPEL Process Manager for business process orchestration, with Business Activity Monitoring, Web Services Manager, Rules Engine and Enterprise Service Bus.

Oracle Data Integrator (E-LT Agent and E-LT Metadata): efficient bulk data processing as part of a business process, interacting via Data Services and Transformation Services.

13-6

Data Services: Environment Setup


ODI lets you generate and deploy web services directly from the Designer interface. Carefully set up your environment to enable this feature:
- Topology must be properly set up (definition of the Axis2 server)
- META-INF/context.xml and WEB-INF/web.xml must be updated in the Axis2 directories (see the next slides)
- The database drivers must be installed in the appropriate directory: /common/lib for Tomcat, ORACLE_HOME/j2ee/home/applib for OC4J

Restart your server to take these changes into account

13-7

Context.xml
Add the following entry in the file
Resource name will be re-used in the web.xml file and in the Model in Designer. driverClassName, url, username and password explicitly point to the data source.

<Context>
  <Resource name="jdbc/Oracle/Win"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@SRV1:1521:ORA10"
            username="mydatabaseuser"
            password="mydatabasepassword"
            maxIdle="2" maxWait="-1" maxActive="4"/>
</Context>

13-8

OC4J
Update the file <oc4j_home>\j2ee\home\config\data-sources.xml with the appropriate connection information:
<!-- The following is an example of a data source whose connection factory emulates XA behavior. -->
<managed-data-source name="OracleDS"
    connection-pool-name="Example Connection Pool"
    jndi-name="jdbc/OracleDS"/>
<connection-pool name="Example Connection Pool">
  <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
      user="system" password="system"
      url="jdbc:oracle:thin:@//localhost:1521/XE">
  </connection-factory>
</connection-pool>

13-9

Tomcat
Add the following entry in the context.xml file
Resource name will be re-used in the web.xml file and in the Model in Designer. driverClassName, url, username and password explicitly point to the data source.

Update the web.xml file with the resource name of the context.xml file (here res-ref-name)
<resource-ref>
  <description>Data Integrator Data Services on Oracle_SRV1</description>
  <res-ref-name>jdbc/Oracle/Win</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

<Context>
  <Resource name="jdbc/Oracle/Win"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@SRV1:1521:ORA10"
            username="mydatabaseuser"
            password="mydatabasepassword"
            maxIdle="2" maxWait="-1" maxActive="4"/>
</Context>

13-10

Topology for Data Services


One entry per Web Container. Make sure that you define a logical schema for the server as well. Note: this entry defines the access to the container, not to the data!

13-11

Data Services: Model Setup


Enter the appropriate information in the Service tab of the model
Select the logical schema name for the web service container. Set the values for the service name. Note that the name of the data source must be consistent with the entries in: data-sources.xml for OC4J; context.xml and web.xml for Tomcat (prefixed with java:/comp/env/ for Tomcat).

Select the appropriate SKM for the operation

13-12

Generate and Deploy


1. Select the datastores to be deployed.
2. Click Generate and Deploy.
3. Select the actions you want to perform (all by default).
Your web services are ready to be used.

13-13

Checking for Web Services


List the services on Axis2: http://myserver:8080/axis2/services should list your service. Right-click on the tables that you have exposed as web services and select Test Web Services. A list of ports will be available: one port per method available on the table.

Select a method, enter the mandatory parameters, and click the test button to invoke the web service. The results of the call are displayed in a grid.

13-14

Changed Data Through Web Services


If you enable CDC on the table, simply re-generate and deploy the service to see the new methods that are available
getChangedData consumeChangedData

To modify the behavior of the web services, you can edit the SKM like any other Knowledge Module

13-15

Lesson summary
Different Types of Web Services

Setup Data Web Services

Why Web Services?

Setup Transformations Web Services

13-16

13-17

Oracle Data Integrator User Functions, Variables and Advanced Mappings

14
14-1

Objectives
After completing this lesson, you will know how to use:
Variables Sequences User Functions Advanced Mappings

14-2

Variables

14-3

What Is a Variable?

Variable An ODI object which stores a typed value, such as a number, string or date. Variables are used to customize transformations as well as to implement control structures, such as if-then statements and loops, into packages.

14-4

Variable Scope
Scope defines where the variable can be used.
Global variables are accessible everywhere (defined in the Others tab of the Designer tree). Project variables are accessible within the project (and defined under the project in the Designer tree).

The scope is defined by the location of the variable:


Global variables: in the Others view, under Global variables Project variables: in the Projects view, under Variables

To refer to a variable, prefix its name according to its scope:


Global variables: GLOBAL.<variable_name> Project variables: <project_code>.<variable_name>

14-5

Note Variables at Run Time


Variables are run-time objects. They are processed at run time. Their values are not displayed inline in the execution log.

14-6

Defining Variables

14-7

Variable Configuration: The Definition


Historize: keep all values taken by the variable. Last value: only keep the most recent value. Not persistent: discard the value. Data type: Alphanumeric (250 chars), Date (Java format), or Numeric (10 digits).

Value given if no stored value

Description of variable

14-8

Variable Configuration: The Query


Logical Schema used for refreshing

SQL Refresh instruction

The result of a SQL query can be stored into a variable:


The query must return a single value of the correct type. The query is executed by a Refresh Variable package step. It can also be refreshed manually.
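For example, a numeric variable holding the highest order identifier loaded so far (the variable, table and column names here are hypothetical) could use this refresh query, executed against the selected logical schema:

```sql
-- Must return exactly one numeric value
SELECT MAX(ORDER_ID) FROM ORDERS
```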

14-9

Using Variables

14-10

Using a Variable in The Mappings


When used, a variable is prefixed according to its scope:
Global variable: GLOBAL.<variable_name> Project variable: <project_code>.<variable_name>

Tip: use the Expression Editor to avoid mistakes in variable names. Variables are used either by string substitution or by parameter binding. Substitution: #<project_code>.<variable_name> Binding: :<project_code>.<variable_name>
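As an illustration (the project code DEMO, the variable CUST_TYPE and the column are hypothetical), the two modes behave differently in a filter: substitution pastes the value into the code before the statement is sent, binding passes it as a statement parameter:

```sql
-- Substitution: ODI replaces the reference with the literal value,
-- e.g. this becomes CUST_TYPE_CODE = 'RETAIL' before execution
CUST_TYPE_CODE = '#DEMO.CUST_TYPE'

-- Binding: the value is passed as a bind parameter at execution time
CUST_TYPE_CODE = :DEMO.CUST_TYPE
```

Note the quotes around the substituted reference: they are part of the generated SQL, not of the variable value.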

14-11

Note Dont Forget the Quotes!


When using substitution, the variable reference is replaced by its value. Make sure you put the variable in quotes when required.
14-12

How to Create a Variable Step in a Package


1. Select the variable from the Projects or Others view.
2. Drag and drop it into the package.
3. Select the type of operation to perform: evaluate, set/increment, refresh or declare.
4. Set the options for the operation to perform.

14-13

Variable Steps
Declare Variable step type:
Forces a variable to be taken into account. Use this step for variables used in transformations, or in the topology.

Set Variable step type:


Assigns a value to a variable or increments the numeric variable of the value.

14-14

Variable Steps (cont.)


Refresh Variable step type:
Refreshes the value of the variable by executing the defined SQL query.

Evaluate Variable step type:


Compares the variable value with a given value, according to an operator. You can use a variable in the value.

14-15

ODI Sequences vs Database Sequences

14-16

Note Sequences Updated by Agent


An ODI sequence is incremented by the Agent each time it processes a row affected by this sequence. When a mapping is processed entirely within a DBMS, any ODI sequences in this expression are not incremented.
14-17

Note
ODI sequences are not as fast as DBMS sequences. DBMS sequences should be used wherever possible.

14-18

User Functions

14-19

What Is a User Function?

User Function A cross-technology macro defined in a lightweight syntax used to create an alias for a recurrent piece of code or encapsulate a customized transformation.

14-20

10

Simple Example #1
A simple formula:
If <param1> is null then <param2> else <param1> end if

Can be implemented differently in different technologies:


Oracle
nvl(<param1>, <param2>)

Other technologies:
case when <param1> is null then <param2> else <param1> end

And could be aliased to:


NullValue(<param1>, <param2>)

14-21

Simple Example #2
A commonly used formula:
If <param1> = 1 then Mr else if <param1> = 2 then Ms else if <param1> = 3 then Mrs else if <param2> = 77 then Dr else if <Param2> = 78 then Prof else end if

Could be aliased to:


NumberToTitle(<param1>, <param2>)
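A possible implementation of this alias, using the $(…) argument syntax and a generic SQL CASE expression (the title codes are the ones from the formula above; this is a sketch, not the only way to write it):

```sql
case when $(param1) = 1 then 'Mr'
     when $(param1) = 2 then 'Ms'
     when $(param1) = 3 then 'Mrs'
     when $(param2) = 77 then 'Dr'
     when $(param2) = 78 then 'Prof'
     else ''
end
```

The matching syntax declaration would be NumberToTitle($(param1), $(param2)).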

14-22

11

Properties of User Functions


A user function always has:
A syntax: defines how the function is called by other objects in ODI Several implementations specific to different technologies A scope Global functions can be used anywhere. Project functions can be used within their project.

User functions are organized into groups.

14-23

Note Functions in Execution Log


Functions are design-time objects. In the execution log, only their implementation in the language of the technology will appear.
14-24

12

How to Create a User Function


1. Select the Functions node in a project, or in the Others view.
2. Right-click > Insert Function.
3. Fill in the Name, Syntax and Description.
4. Select the Group, or type in the name of a new group.

14-25

Defining the Implementations

1. Select the Implementations tab.
2. Click the Add button.
3. Enter the code for the implementation.
4. Select the applicable technologies.

14-26

13

Syntax of the Implementations


The function's syntax and implementation arguments are specified in one of the following formats:
$(<arg_name>) $(<arg_name>)<arg_type>

If <arg_type> is supplied it must be s (string), n (numeric) or d (date). For example: $(title)s

Argument names in the syntax and implementation should match exactly. Examples:
Syntax:
NullValue($(myvariable), $(mydefault))

Implementation:
case when $(myvariable) is null then $(mydefault) else $(myvariable) end

14-27

Using User Functions

At design-time, you can refer to user functions like any regular database function
You can use them in mappings, joins, filters, procedures, etc. They are available in the expression editor.

When the code is generated:


Project functions are identified first, then global functions. If a function is recognized, it is turned into the implementation corresponding to the technology of the generated code. If the function is not recognized, the generated code remains unchanged.

14-28

14

Advanced Mappings

14-29

Using Substitution Methods

Methods are used to write generic code.


Table names in a context-dependent format. Information about the session. Source and target metadata.

Use the method with the following syntax:


<%=snpRef.method_name(parameters)%>

Refer to the Substitution Methods reference manual.

14-30

15

Examples of Substitution Methods


Mapping a target column to its default value defined in ODI:
'<%=snpRef.getColDefaultValue()%>'

Mapping a column using the system date:


'The year is: <%=snpRef.getSysDate("yyyy")%>'

Writing a generic select subquery in a filter:


ORDER_DATE >= (SELECT MAX(ORDER_DATE)-7 FROM <%=snpRef.getObjectName("SRC_ORDERS")%> )

14-31

Lesson summary
Defining and using sequences

Defining and using variables

Defining and using User Functions

Using substitution methods

14-32

16

14-33

17

Oracle Data Integrator Procedures, Advanced Workflows

15
15-1

Objectives
After completing this lesson, you will know how to:
Create simple reusable procedures. Add commands. Provide options on your commands. Run your procedures. Use a procedure in a package.

15-2

What Is a Procedure?

Procedure A sequence of commands executed by database engines, the operating system, or using ODI Tools. A procedure can define options that control its behavior. Procedures are reusable components that can be inserted into packages.

15-3

Procedure Examples
Email Administrator procedure
1. Uses the SnpsSendMail ODI tool to send an administrative email to a user. The email address is an option.

Clean Environment procedure
1. Deletes the contents of the /temp directory using the SnpsFileDelete tool.
2. Runs DELETE statements on these tables in order: CUSTOMER, CITY, REGION, COUNTRY.
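The first step of the Email Administrator procedure could use an ODI tool command along these lines. The host and addresses are placeholders, and the exact parameter list should be checked against the ODI tools reference:

```
SnpsSendMail "-MAILHOST=mail.example.com" "-FROM=odi@example.com"
    "-TO=admin@example.com" "-SUBJECT=ODI load report"
The nightly load has completed. Please check the Operator log for details.
```

The message body follows the parameters; making -TO an option lets the same procedure notify different users from different packages.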

15-4

Procedure Examples (cont.)


Initialize Drive procedure
1. Mount a network drive using an OS command (depending on an option).
2. Create a /work directory on this drive.

Identify Changes, then send an email procedure
1. Wait for 10 rows to be inserted into the INCOMING table.
2. Transfer all the data from INCOMING to the OUTGOING table.
3. Dump the content of the OUTGOING table to a text file.
4. Email this text file to a user.

15-5

How to Create a New Procedure

1. Right-click the Procedures node under a project.
2. Select Insert Procedure.
3. Fill in the Name and Description.
4. Optionally, define the default Source Technology and Target Technology.

15-6

Creating a New Command


1. Select the procedure's Details tab.
2. Click the Add Command button.
3. Fill in the Name.
4. Set Ignore Errors as appropriate.
5. For the Command on Target, select: Technology, Context, Logical Schema, and the command code (using the Expression Editor).
6. Repeat step 5 for the Command on Source (optional).
7. Click OK.

15-7

Arranging Steps in Order


The Details tab shows the steps of your procedure. Steps are executed top to bottom
In this example, Wait for data in INCOMING is executed last. We need it to be executed first

To rearrange steps, use the up and down buttons


Now the procedure will wait for data before attempting the transfer.

Make sure the order of your steps is correct.

15-8

Which Parameters Should Be Set?


The following parameters should be set in the command:
Technology: if different from the one defined at the procedure level.
Logical Schema: for DBMS technologies (Jython, OS and ODI Tools do not require a schema).
Context: if you want to ignore the execution context.
Ignore Errors: if the command must not stop the procedure. A warning is issued only if the command fails.

15-9

Valid Types of Commands


Some examples of the types of commands that can be used in ODI procedures:
SQL statements: executed on any DBMS technology. DELETE, INSERT, SELECT statements.
OS commands: executed on the Operating System technology, in OS-specific syntax, using shell commands or binary programs.
ODI Tools: executed on the Sunopsis API technology. Any ODI tool command call.
Jython programs: executed on the Jython technology. Jython code interpreted by the agent (Extensibility Framework).

15-10

More Elements
In addition, the following ODI-specific elements can be used within the commands:

- Variables: specified either in substitution mode (#variable) or in bind mode (:variable).
- Sequences: specified either in substitution mode (#sequence) or in bind mode (:sequence).
- User Functions: used like DBMS functions; they are replaced at code generation time by their implementation.

15-11

Why Use a Source Command?


The command sent to the source should return a result set, which is then consumed by the command sent to the target. Example: transferring data from one table to another.

- Source command: a SELECT statement
- Target command: an INSERT statement with the source columns in bind mode
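The mechanism can be sketched outside ODI with sqlite3: the source command produces a result set, and the target command is executed once per row with the source columns bound. Table names follow the INCOMING/OUTGOING example from earlier slides; this illustrates the pattern, not ODI's actual engine:

```python
import sqlite3

# In-memory stand-ins for the source and target tables.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE INCOMING (ID INTEGER, NAME TEXT)')
con.execute('CREATE TABLE OUTGOING (ID INTEGER, NAME TEXT)')
con.executemany('INSERT INTO INCOMING VALUES (?, ?)', [(1, 'A'), (2, 'B')])

# Command on Source: returns a result set.
source_rows = con.execute('SELECT ID, NAME FROM INCOMING').fetchall()

# Command on Target: executed once per source row, columns in bind mode.
for row in source_rows:
    con.execute('INSERT INTO OUTGOING VALUES (?, ?)', row)

print(con.execute('SELECT COUNT(*) FROM OUTGOING').fetchone()[0])  # 2
```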

15-12

Types of Options
Options are typed procedure parameters: checkbox, value, or text.

Options are used to control procedures:


Commands may or may not be executed depending on the values of checkbox options. Value or text options can be used within the command to parameterize the code.

Options have a default value. The value can also be specified when the procedure is used.

15-13

How to Create a New Option

1. Right-click the name of the procedure.
2. Select Insert Option.
3. Fill in the Name, Description and Help.
4. Select a Default Value, a Type, and the Position to display in.

15-14

How to Make a Command Optional

1. Open the command, then select the Options tab.
2. Check an option's box if it should trigger the command's execution.
3. If the command should run regardless of any option values, check the Always Execute option.

15-15

Using an Option Value in a Command

In the command code, add the following code:


<%=snpRef.getOption("option_name")%>

This code is replaced by the value of the option when the procedure is executed.


Use quotation marks as appropriate.

The value of the option is specified when the procedure is used within a package. Otherwise, the default value is used.
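The effect of the substitution can be sketched as plain string templating. The regular expression and option map below are illustrative only; ODI performs this expansion itself when it generates the session code:

```python
import re

# Hypothetical option values, as they would be set on the procedure step.
options = {'TABLE_NAME': 'OUTGOING'}

template = 'DELETE FROM <%=snpRef.getOption("TABLE_NAME")%>'

# Replace each getOption(...) expression with the option's value.
generated = re.sub(r'<%=snpRef\.getOption\("(\w+)"\)%>',
                   lambda m: options[m.group(1)],
                   template)
print(generated)  # DELETE FROM OUTGOING
```

Note that the expansion happens before the code is sent to the server, which is why quotation marks must be added by hand when the option value is a string literal.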

15-16

Procedure Execution
A procedure can be executed manually for testing.
Default option values are used.

It is usually run from a package.


Add a procedure step by dragging and dropping. Option values can be overridden at that time.

15-17

Using Operator to View the Results

The procedure is executed as a session with one step, containing one task for each command.

Icon legend:
- Warning: the error was ignored
- Task completed successfully
- Error that was not ignored
- Task waiting to be run

15-18

Note Procedure Steps Are References


A procedure used in a package is referenced by the package, not copied. Changes to the original procedure also apply to the package.
15-19

Packages, Procedures and Advanced Workflows

15-20


Advanced Step Types


You may not be familiar with the following step types. Drag and drop an object into the package to create them. This creates references, not copies.
- Model steps: reverse-engineering, static control or journalizing operations
- Sub-model steps: static control operations
- Datastore steps: static control or journalizing operations
- Variable steps: declare, set, increment, refresh or evaluate the value of a variable

15-21

How to Create a Procedure Step

1. Under a project, select the procedure you want to work with.
2. Drag and drop it into the package.
3. Set the step Name. There is only one type of procedure step.
4. Override any options on the Options tab.

15-22


How to Create Model, Sub-Model and Datastore Steps

1. In the Models view, select the model, sub-model or datastore you want to work with.
2. Drag and drop it into the package.
3. Select the type of operation to perform.
4. Set the Options for the chosen operation.

15-23

Models, Sub-models and Datastore Steps


Reverse-engineer step type:
- The reverse-engineering method defined for the model is used. If using customized reverse-engineering, you must set the RKM options on the Options tab.

Check step type:
- The static control strategy (CKM) set for the model and the datastores is used. Options for the CKM are set on the Options tab. Select Delete Errors from the Checked Tables to remove erroneous records.

15-24


Models, Sub-models and Datastore Steps (cont.)


Journalize step type:
- The journalizing type and strategy (JKM) set for the model is used in the step. The journalizing mode (simple / consistent set) determines which options are available. JKM-specific options are set on the Options tab.

15-25

Note Beware of the Model Steps


Important! Model, sub-model and datastore steps in a package can modify the model or its configuration at run time. Use them with great caution.

15-26


Controlling Execution

Each step may have two possible next steps:


Next step upon success Next step upon failure

If no next step is specified, the package stops. Execution can branch:


- as the result of a step (success/failure)
- because of an Evaluate Variable step

Examples of control structures follow.

15-27

Error Handling
Interfaces fail if a fatal error occurs or if the number of allowed errors is exceeded. Procedures and other steps fail if a fatal error occurs. Try to take possible errors into account.

Simple error handling

15-28


How to Create a Loop


Loops need a counter variable.

1. Set the counter to an initial value.
2. Execute the step or steps to be repeated.
3. Increment the counter.
4. Evaluate the counter value, and loop back to step 2 if the goal has not been reached.
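The four steps above behave like an ordinary counter loop. A sketch, where the loop body and the goal value are placeholders:

```python
counter = 0                    # 1. set the counter to an initial value
processed = []
while True:
    processed.append(counter)  # 2. the step(s) to be repeated (placeholder work)
    counter += 1               # 3. increment the counter
    if counter >= 3:           # 4. evaluate the counter; stop once the goal is reached
        break
print(counter)  # 3
```

In a package, step 4 is an Evaluate Variable step whose failure branch loops back to the repeated steps.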

15-29

The Advanced Tab

Each step's Advanced tab allows you to specify how the next step is determined. You can also specify a number of automatic retries on failure:

- Upon success: choose, from the list of all package steps, the one to be executed next if this step succeeds.
- Retries: how many times this step should be re-attempted if it fails, and the time interval (in seconds) between attempts.
- Upon failure: choose, from the list of all package steps, the one to be executed next if this step fails.
- Log: specifies whether the step's execution report is to be kept in the log.

15-30


Lesson summary
- Creating procedures with options
- Running a procedure
- Complex workflows with branching and loops
- Procedure steps

15-31

15-32


Oracle Data Integrator Installation

16
16-1

Objectives
After completing this course, you will know:
- How to create the Master Repository
- How to create the Work Repository
- How to connect to the repositories

16-2

Installation Process
Once you have downloaded ODI, the installation requires the following steps:

1. Install or unzip ODI on your computer. Note that if you simply unzip ODI, it requires a Java Virtual Machine version 1.5 or above.
2. Create two databases or schemas to host your repositories.
3. Use the provided wizard to create the Master Repository.
4. Connect to the Master Repository with the Topology interface.
5. Create the Work Repository from the Topology interface.
6. Connect to the Work Repository with the Designer interface.

16-3

Documentation of the Process


The installation process is entirely documented. You can follow step-by-step instructions in the user's manual.

16-4

Requirements for the Repositories

The database users used to connect must have the following privileges:
- CONNECT
- RESOURCE

Each user's default database must be the one that you created for the repository.

16-5

The Master Repository Creation Wizard


1. Start the wizard (from the Windows shortcuts, or from ODI/bin/repcreate.bat or ODI/bin/repcreate.sh).
2. Enter the connection parameters for the Master Repository.
3. Click Test to validate the connection parameters.
4. Select your technology.
5. Click OK to start the creation of the repository.

16-6

Tip!
Save the connection parameters (driver, JDBC URL) in a text editor. Most people create their initial Master and Work Repositories in the same database; you will have to enter those values several times.
16-7

The First Connection to Topology


1. Start the Topology interface.
2. When prompted for your username and password, create a new profile.
3. Enter the connection parameters (copy them if you saved them in a text editor).
4. Click Test to validate the connection parameters.
5. Make this your default connection.

16-8

Create The Work Repository


In Topology:
1. Select the Repositories tab.
2. Right-click Work Repositories and select Insert Work Repository.

16-9

Set the Server Parameters


1. Enter the username and password for the Work Repository connection.
2. Select the technology.
3. Click the JDBC tab.
4. Enter the JDBC connection parameters (copy them if you saved them).

16-10

Name the Work Repository


1. Choose an ID for the repository (this ID must be unique in your environment).
2. Select the type of repository. Production repositories are typically of type Execution; others are typically Development.
3. Name your repository.
4. Click OK. ODI creates the repository.

16-11

The First Connection to Designer


1. Start the Designer interface.
2. When prompted for your username and password, create a new profile.
3. Enter the connection parameters for the Master Repository (copy them if you saved them in a text editor).
4. Select the Work Repository name from the list.
5. Click Test to validate the connection parameters.
6. Make this your default connection.

16-12

Lesson summary

- Create the Master Repository
- Create the Work Repository
- Connect to the repositories with the GUI

16-13

16-14

Oracle Data Integrator Agents

17
8-1

Understanding Agents

8-2

Local (No Agent) vs. Agent


The GUI acts as a local agent.

Pros:
- No need to start an agent
- All in one component

Cons:
- Does not take advantage of the distributed architecture
- When you stop the GUI, you lose the agent

8-3

Purpose of the Agent


The agent is the component that orchestrates the processes. The agent:

- Finishes the code generation (based on the context selected for the execution of the scenarios)
- Establishes the connection with the different databases
- Sends the generated code to the different databases
- Retrieves the return codes from the databases or operating system
- Updates the repository with the return codes and processing statistics

8-4

Different Types of Agents


You can start the agent in different modes with different behaviors:

- Listener only: started with the bin\agent.bat script (or agent.sh on UNIX).
- Listener and scheduler: started with bin\agentscheduler.bat (or agentscheduler.sh on UNIX). Note: scheduler agents have to connect to the repository to retrieve the schedule. See later in this presentation how to update the odiparams.bat (or .sh) file to establish this connection.

8-5

Agents Installation and Configuration

8-6

Agent Location
The agent has to be in a central location so that it can access:

- All databases (source and target)
- All database utilities (to load/unload large volumes of data)
- The ODI repositories (Master and Work)

The typical location is on the target server. It is possible to install several agents, either on the same machine (on different ports) or on different machines.

8-7

Agent Installation
Agents can be installed with the graphical setup, or manually by simply copying over the \bin, \drivers and \lib directories. Once installed, an agent can be set up as a service on a Windows machine by running the agentservice.bat script:

- Installation as a listener only: agentservice -i -a AgentName AgentPort
- Installation as a listener and scheduler: agentservice -i -s AgentName AgentPort

AgentName is only mandatory for scheduler agents or agents used for load balancing. AgentPort is only mandatory if the port is different from the default (20910).

8-8

Agent Configuration
For scheduler agents only, you have to update the odiparams.bat (or .sh) file in the bin directory:

- Update the parameters to connect to the Master Repository.
- Encrypt passwords with the following command (from a DOS or UNIX prompt): agent encode MyPassword (replace MyPassword with your password).

Define the agents in Topology (see following section)

8-9

Sample ODIPARAMS file


JDBC driver to access the Master Repository:
set ODI_SECU_DRIVER=oracle.jdbc.driver.OracleDriver

JDBC URL to access the Master Repository:
set ODI_SECU_URL=jdbc:oracle:thin:@1.128.5.52:1521:ORADB

Database username to access the Master Repository:
set ODI_SECU_USER=odi1013m

Database password (encoded) to access the Master Repository:
set ODI_SECU_ENCODED_PASS=b9yHYSNunqZvoreC6aoF0Vhef

Name of the Work Repository:
set ODI_SECU_WORK_REP=WORKREP1

ODI username for ODI security:
set ODI_USER=SUPERVISOR

ODI password (encoded) for ODI security:
set ODI_ENCODED_PASS=LELKIELGLJMDLKMGHEHJDBGBGFDGGH

8-10

Agents Definition in ODI Topology

8-11

Creating a Physical Agent


1. Right-click Agents in the Physical Architecture view.
2. Select Insert Agent.
3. Fill in the agent's parameters:
   - Agent Name (MACHINE_PORT)
   - Host name or IP address
   - Agent Port
4. Click the Test button if the agent is already running.
5. Optional load balancing: set the maximum number of sessions, and define the linked agents on the Load Balancing tab.

8-12

Important Note
Always test the connection to check that the agent is correctly configured.

8-13

Important Note
One physical agent per started agent process.

8-14

Creating a Logical Agent


1. Go to the Logical Architecture view.
2. Right-click the Agents node.
3. Click Insert Logical Agent.
4. Fill in the agent Name.
5. Click OK. You can associate this logical agent with physical agents here.

8-15

Lesson summary

- Understanding agents
- Agent installation
- Agent configuration

8-16

8-17

Oracle Data Profiling

18

Co-Developed with Trillium


Oracle Data Profiling is a rebranding of the Trillium product. It is composed of a client interface and a server (the Metabase). Data is typically loaded into the server for profiling:

- All of the data
- Or a sample of the data

Tutorial On OTN
PM has put together a tutorial that will take you through the profiling operations and the different features of the product. The tutorial comes with sample data to be profiled and cleansed. We recommend that you do the tutorial to familiarize yourselves with the product.

Purpose of Data Profiling


A data investigation and quality monitoring tool:

- Assess the quality of data through metrics
- Discover or infer rules based on this data
- Monitor the evolution of data quality over time

Requirements
Connections for profiling are defined in the Metabase and are made of:

- A loader connection that defines where the resources are located
- Several entities that are loaded into the Metabase so that the profiling operations can be performed

Entities are flat files or ODBC connections. Not all records have to be loaded:

- All rows
- First x rows
- Random x% of the rows
- Skip the first x rows
- Dynamic (data not loaded into the Metabase)

Loading the Metabase


When you define an entity, you are given the option to load the Metabase immediately (Run Now) or later (Run Later). The background tasks icon in the toolbar can be used to check on the status of the running jobs.

Profiling Project
The first step in profiling data is to create a profiling project in which you include the entities to be profiled. This allows you to analyze dependencies between the different entities of the project.

Metadata and Entities


Within a project, you can explore Metadata (attributes of your entities) as well as permanent joins between entities.

Metadata contains statistics about the entities:
- Number of rows
- Min/max size of the records
- Keys discovered
- Joins discovered
- Business rules

Attributes contain the result of the analysis of the different fields that have been profiled:
- Compliance with pre-defined rules
- Uniqueness level of the column
- Patterns (phone numbers, SSNs, etc.)
- Min/max values, min/max lengths, etc.

Metadata

Double-click on any element to drill down in the details and ultimately view the records

Attributes

Double-click on any element to drill down in the details and ultimately view the records

Adding to the Discovered Metadata


You can define your own keys, joins and business rules These can then be used to re-analyze the data and see which records comply or dont.

Adding a Key or a Reference

Create a key (or a dependency) and double-click on it to see duplicate (or orphan) records.

Key Analysis

Features for Dependencies


Venn Diagrams

Entity Relationship Diagrams

Data Compliance Check


You can edit any attribute in a column (right-click > Edit DSD) to add specific compliance checks:

- Non-nullable columns
- Pattern check
- Values check
- Range check
- Length check
- Uniqueness check

Once new rules have been entered, the data has to be re-analyzed.

DSD Examples

Oracle Data Quality

19

Co-Developed with Trillium


Oracle Data Quality is a rebranding of the Trillium product. It is composed of a client interface and a server (the Metabase). Data is typically loaded into the server for profiling:

- All of the data
- Or a sample of the data

Tutorial On OTN
PM has put together a tutorial that will take you through the profiling operations and the different features of the product. The tutorial comes with sample data to be profiled and cleansed. We recommend that you do the tutorial to familiarize yourselves with the product.

Purpose of Data Quality


- Cleanse data (name and address cleansing); business rules can be added for business data cleansing (not out of the box)
- Match and merge inconsistent entries in the database
- Validate data entries (addresses in particular)

Strength of the Solution


Oracle Data Quality is very strong in its ability to:

- Process data from separate countries
- Provide strong quality dictionaries for data cleansing (available from Trillium)

Creating a New DQ Project


When you start a project, select:

- The type of project you want to work on
- The entity on which you want to work
- The countries that will be covered

ODQ automatically generates the required steps. You customize these steps to define your DQ project.

Auto-Generated Project
Processes are represented by the arrows: double-click on any arrow to specify the options for each step. Books represent intermediate entities (data) in the cleansing process.

Cleansing steps
- Transformer: non-country-specific transformations and filtering of the data
- Global Router: routes country-specific data to a country-specific cleansing process (rules will be country-specific)
- Country-Specific Transformer: at this level, you specify which fields will be cleansed
- Customer Data Parser: identifies and parses names and addresses (country-specific)
- Sort for Postal Matcher: improves performance of the Postal Matcher (next step)
- Postal Matcher: enhances data with dictionaries from the postal office
- Window Key Generator: prepares records to identify duplicates
- Relationship Linker: matches duplicates
- Commonizer: enriches duplicates with values from similar records and selects the best surviving record
- Data Reconstructor: constructs the data for the output, the cleansed data result

Run ODQ from ODI


- An ODQ project can be exported as a batch file.
- Generated scripts have to be checked and cleansed in the current release (see the tutorial for details).
- In ODI, the OdiDataQuality tool (selected in ODI packages) is used to graphically invoke the DQ project.

Oracle Data Integrator Versions, Solutions and Repository Migration

20
20-1

Objectives
After completing this session, you should:
- Understand what ODI offers by way of versioning
- Understand the use of solutions
- Understand how to move objects between repositories

20-2

Versions in ODI
Objects in ODI can be versioned:

- Projects, folders, packages, interfaces, procedures, sequences, variables, user functions
- Model folders, models

Creating a version creates the XML definition of the object, and then stores it in compressed form in the Master Repository version tables. Versions can then be restored into any connected Work Repository. ODI provides a Version Browser.

20-3

Creating a version
- Right-click the object, then select Version > Create.
- This does NOT version dependent objects, but DOES include them in this version.
- The version number is allocated automatically (it can be overridden).
- You are prompted for a version description.

20-4

ODI Version Browser

20-5

ODI Version Browser Artefacts

- Object type selector
- Specific object selector
- Restore version
- Export object to XML
- Refresh view
- Delete version(s)

20-6

Browse a specific object's versions

20-7

Version Browser
The current version is stored at the object level. You may restore older versions, which replaces the whole object tree. This may delete objects that did not exist in the restored version; if they were not versioned, you will not be able to get them back.
20-8

Version Visual Notifications


Each object under version management shows markers on its icon indicating:

- Inserted, but not yet versioned
- Updated since last versioned

20-9

Exporting Objects
- An object's XML may be exported to an XML file.
- This can be used for storage in external source-code-control systems.
- Exports can also be imported into other repositories.

20-10

Importing Exported Objects


Objects may be imported into repositories where they were not created. The import mode is crucial:

- Synonym mode preserves the object's original ID.
- Duplication creates a new object not related to the original.

20-11

Object Import Types


- Duplication: creates a new object not related to the original.
- Synonym Mode INSERT: inserts the object; if it already exists, the import fails.
- Synonym Mode UPDATE: updates the object; if it does not already exist, the import fails.
- Synonym Mode INSERT_UPDATE: inserts the object if it does not exist, otherwise updates it.
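The four import behaviors can be sketched as operations on a map keyed by the object's internal ID. This is a deliberate simplification (a repository stores far more than this); the IDs and object names are illustrative:

```python
# Toy repository keyed by object ID; string values stand in for object definitions.
repo = {101: 'Interface_v1'}

def duplication(repo, obj):
    new_id = max(repo) + 1          # a brand-new ID, unrelated to the original
    repo[new_id] = obj
    return new_id

def synonym_insert(repo, oid, obj):
    if oid in repo:
        raise ValueError('object %d already exists' % oid)
    repo[oid] = obj

def synonym_update(repo, oid, obj):
    if oid not in repo:
        raise ValueError('object %d does not exist' % oid)
    repo[oid] = obj

def synonym_insert_update(repo, oid, obj):
    repo[oid] = obj                 # insert if absent, otherwise update

synonym_insert_update(repo, 101, 'Interface_v2')  # overwrites in place, same ID
new_id = duplication(repo, 'Interface_v2')        # new, unrelated ID
print(sorted(repo))  # [101, 102]
```

This is why synonym mode is the right choice when moving objects between development and production repositories: the object keeps its identity across repositories.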

20-12

ODI Solutions
A solution is a comprehensive and consistent set of interdependent versions of objects. Like other objects, it can be versioned, and may be restored at a later date. Solutions are saved into the master repository. A solution assembles versions of the solution's elements.

20-13

Solution
A solution is made up of principal elements. Principal elements imply the required elements, which are automatically linked. Pressing the Synchronize button brings the solution up to date:

- adds required elements
- removes unused elements

The solution itself may be versioned.

20-14

What to do with a solution?


Solutions may be used to move a complete, consistent set of objects between repositories:

1. Create a solution and drag the principal objects in.
2. Save the solution.
3. Version the solution.
4. In the alternative repository, restore the version of the solution.

20-15

Repository Migration Master


Should you be doing this? In Topology:

- File > Export Master Repository: exports the repository as a zip file.
- Import it with the Master Repository Import Wizard (mimport).
- Give the repository a different ID from the original.
- It does not have to be in the same technology as the original.

20-16

Repository Migration - Work


Create a new Work Repository from your Master Repository. From Designer:

- If working with an existing Master, you can restore a solution.
- If using a new Master, then in the original Work Repository, export the solution, or export the entire Work Repository.
- Start Designer on the new Work Repository; you can import either the whole repository export (File > Import > Work Repository) or the solution (Solutions > Import Solution).
- Take care which mode of import you use: Duplication vs. Synonym.

20-17

Lesson summary

- Creating versions
- Restoring versions
- Creating solutions
- Exporting objects
- Importing objects
- Utilizing solutions

20-18

20-19

