
GOLDENGATE SIMPLIFIED EXPLORE THE POSSIBILITIES

Frank Bommarito, Chief Technology Officer, DBAK

Table of Contents

Introduction
Overview
Installation Options
    Database Setup
    Configuring the Manager
Simple Setup
    Step 1 Configure Source Database
    Step 2 Configure Extract
    Step 3 Configure Target Database
    Step 4 Configure Replicat
    Step 5 Start Extract
    Step 6 Initial Load
    Step 7 Start Replicat
    Step 8 Complete the Exercise
Mapping and Manipulating Data
    Step 1 Setup Database
    Step 2 Modify Extract
    Step 3 Modify Replicat
    Step 4 Complete the Exercise
Active-Active Implementation
    Step 1 Configure Extract
    Step 2 Configure Replicat
    Step 3 Complete the Exercise
Warm D/R
    Step 1 - Switch
    Create Database Copy
DDL
Troubleshooting
Veridata
GG vs Logical Standby and Active DG
Conclusion

Introduction
This document covers the installation and configuration of Oracle GoldenGate. It is designed to simplify what GoldenGate does best and to summarize the steps for the everyday DBA. Items covered include:
1. Overview of GoldenGate
2. Installation options
3. Data warehousing (some tables modified with transformations)
4. Active-Active database
5. Warm D/R
6. DDL synchronization
7. Troubleshooting common problems
8. Suggested operational procedures
9. Differences between GoldenGate, Streams, and Logical Standby

Overview
Oracle GoldenGate is an efficient data replication tool that is easy to use and offers a great deal of flexibility. It supports many database vendors (not just Oracle); however, this paper is limited to Oracle. The paper is not limited to a single release of Oracle, or even to a single byte format. In summary, GoldenGate copies data from one database to another and keeps the data in sync. The copy can be sent to more than one database, and any single database can receive data from multiple GoldenGate processes. Transformations can be applied to the data received. GoldenGate reads from Oracle, but it transforms the data into a GoldenGate format that is version and platform independent; it is this transformed data that is transferred and applied. GoldenGate has many implementation options. Only a few of the many implementation options must be mastered, and once mastered they can be adapted as far as the imagination can go. Best practices and base skills for those few components are all covered within this paper. The largest limiting factor with GoldenGate is getting past the "too good to be true" suspicion that many DBAs apply to this tool. This is one tool that helps to close a large gap that exists within the database world.

GoldenGate has only these components:

Component - Description

Extract - Grabs data from the source, either all of it (initial load) or changes. All changes are captured, but only committed transactions are sent. There can be many extracts from the same database. Runs on the source system.

Data Pump (secondary extract group) - Allows extracted data to be stored in files rather than just memory. Best practice. Runs on the source system.

Replicat - Writes the changes into the database. Can use bulk loads for the initial setup. Can add a delay to the write. Can use sequences and can write DDL. Runs on the target system.

Extract Files (trails) - Files holding data. Can exist on the source, target, or another system. If local, GoldenGate calls the file a trail; if not local, a remote trail. One trail per extract process (there can be many extracts, each with its own trail). The data pump writes to trails; the replicat reads from them. File names use a two-letter prefix, with a 10MB default file size. Cleanup is automatic and/or via the command line. An initial load produces a different file than a transactional file.

Checkpoints - Current read/write positions for a process. Allow for fault tolerance. Different from batch processing, where restarting is often a complete do-over.

Manager - One manager per system running GoldenGate. In charge of reporting and operational duties.

Collector - Runs on the target system. Receives the files from the source nodes. Best practice is one collector per extract sending to the target.

The following picture is from the GoldenGate Administrator Guide:

Installation Options
Installation is required on the source system, the target system, and optionally an intermediate system. Installation itself is simple. Gather the set of executables for the target operating system from http://edelivery.oracle.com and execute the steps shown in this example:

-- ##############################
-- Install is simply unzipping and untarring the correct file - no "installer"
-- ##############################
$ unzip /app/oracle/software/V18156-01.zip
$ mkdir /app/oracle/GG
$ cd /app/oracle/GG
$ tar xf /app/oracle/software/ggs_redhatAS40_x86_ora10g_32bit_v10.4.0.19_002.tar
$ export LD_LIBRARY_PATH=/app/oracle/GG:$ORACLE_HOME/lib:$LD_LIBRARY_PATH
$ export PATH=$PATH:/app/oracle/GG
$ export GGATE=/app/oracle/GG

For 11g only, a symbolic link is required:
$ ln -s $ORACLE_HOME/lib/libnnz11.so $ORACLE_HOME/lib/libnnz10.so

-- ##############################
-- Test by running
-- ##############################
$ ggsci     (type exit to leave)

-- Expect something similar to:
Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x86, 32bit (optimized), Oracle 10 on Sep 17 2009 23:49:42
Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

GGSCI (host.dbaknow.com) 1> create subdirs
GGSCI (host.dbaknow.com) 2> exit
$ mkdir $GGATE/discard

The installation steps above are repeated on each node participating in replication. After the installation is complete, the GoldenGate components need to be configured. Most configuration is shown in the example scenarios within this paper. What is shown here are a few common components, such as the manager process.

Database Setup
In a database that will be used as a source or target, a database account is required. This account is used by the manager process. This paper uses gg_manager as the Oracle Schema name.
$ sqlplus / as sysdba
create user gg_manager identified by gg_manager;
grant create session, alter session, resource, connect to gg_manager;
grant execute on utl_file to gg_manager;

$ cd $GGATE
$ sqlplus gg_manager/gg_manager
@marker_setup.sql     (Enter gg_manager when prompted)

$ sqlplus / as sysdba
@ddl_setup.sql        (gg_manager is the schema, INITIALSETUP is the mode, yes to purge)

$ sqlplus / as sysdba
@role_setup.sql       (gg_manager is the schema)

$ sqlplus / as sysdba
GRANT GGS_GGSUSER_ROLE TO gg_manager;
@ddl_enable

The database itself has a few required configuration settings. These are (note this may change based on the Oracle and GoldenGate release; check the documentation):

alter database archivelog;
alter database force logging;
alter system set recyclebin=off scope=both;
alter database add supplemental log data;

Configuring the Manager


The GoldenGate Manager process has a few common options. The administrator guide overviews the entire process. What is shown here are common settings needed to make the examples in this paper function.
$ ggsci
GGSCI> edit params mgr

PORT 7809
USERID gg_manager, PASSWORD gg_manager
-- Local file system - files created are prefixed with "ex"
PURGEOLDEXTRACTS /app/oracle/GG/dirdat/ex, USECHECKPOINTS

GGSCI> start manager

Other parameters that are useful and not shown here (details in the administrator guide) are AUTOSTART and AUTORESTART.

Parameters used above are:

PURGEOLDEXTRACTS - Removes trail files after they are no longer needed. Using this at the manager level allows one trail to be used for many extracts or replicats.
USECHECKPOINTS - Allows the usage of the checkpoint table.
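As an illustration only, a manager parameter file that also uses the AUTOSTART and AUTORESTART options mentioned above might look like the sketch below; the RETRIES and WAITMINUTES values are illustrative, not recommendations.

```text
-- Sketch: manager parameters with auto-start options added
PORT 7809
USERID gg_manager, PASSWORD gg_manager
-- Start all extract and replicat groups when the manager starts
AUTOSTART ER *
-- Restart abended groups up to 3 times, waiting 5 minutes between attempts
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5
PURGEOLDEXTRACTS /app/oracle/GG/dirdat/ex, USECHECKPOINTS
```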

Simple Setup
In this example, a simple setup is completed with the following components:

Component - Description
Source Database - SRCDB (10.2.0.4)
Target Database - TRGDB (10.2.0.4)
Source Schema - GG_TEST
Target Schema - GG_TEST
Source Table - GG_TABLE
Target Table - GG_TABLE
Source OS - Redhat ES4 Linux x86 (32bit) (Host=GGSRC)
Target OS - Redhat ES4 Linux x86 (32bit) (Host=GGTRG)

An environment file is suggested. Here is the sample used:


$ cat ~/GG.env
export LD_LIBRARY_PATH=/app/oracle/GG:$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=$PATH:/app/oracle/GG
export GGATE=/app/oracle/GG
export ORACLE_SID=GGSRC
export ORAENV_ASK=NO
. oraenv
unset ORAENV_ASK

The goal is to take the source table, execute an initial load to the target, and then synchronize changes made to the source table. GoldenGate is installed one time on each host machine as shown above. The manager is running on each machine.

Step 1 Configure Source Database


The following commands are executed:
$ hostname
GGSRC
$ . ~/GG.env
$ sqlplus / as sysdba
GGSRC> create user gg_test identified by gg_test;
User created.
GGSRC> grant create session, create sequence, create table, unlimited tablespace to gg_test;
Grant succeeded.
GGSRC> connect gg_test/gg_test
GGSRC> create table gg_table (gg_number number, gg_varchar varchar2(200));
Table created.
GGSRC> alter table gg_table add primary key(gg_number);
Table altered.
GGSRC> create sequence gg_seq;
Sequence created.
GGSRC> insert into gg_table values (gg_seq.nextval,'TEST');
1 row created.
GGSRC> insert into gg_table select gg_seq.nextval,'New' from gg_table;
1 row created.
-- Repeat this same command until 131072 rows created.
GGSRC> commit;
GGSRC> select count(*) from gg_table;

  COUNT(*)
----------
    262144

GGSRC> create table gg_table2 as select * from gg_table;
Table created.

Step 2 Configure Extract


The following commands are executed:
$ hostname
GGSRC
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> add extract e_test1, tranlog, begin now
EXTRACT added.

GGSCI> add exttrail /app/oracle/GG/dirdat/et, extract e_test1
EXTTRAIL added.
GGSCI> add rmttrail /app/oracle/GG/dirdat/et, extract e_test1
RMTTRAIL added.
GGSCI> edit params e_test1

EXTRACT e_test1
USERID gg_manager, PASSWORD gg_manager
RMTHOST ggtrg.dbaknow.com, MGRPORT 7809
RMTTRAIL /app/oracle/GG/dirdat/et
TABLE gg_test.gg_table;

A second extract is created here to demonstrate an alternate method for a table sync.

GGSCI> add extract e_test2, sourceistable
EXTRACT added.
GGSCI> edit params e_test2

EXTRACT e_test2
USERID gg_manager, PASSWORD gg_manager
RMTHOST ggtrg.dbaknow.com, MGRPORT 7809
RMTTASK replicat, GROUP r_test2
TABLE gg_test.gg_table2;

Keywords used:

EXTRACT - The name of the extract; used for starting and stopping.
EXTTRAIL - The name of the data pump (local trail) file location.
RMTTRAIL - The name of the remote trail files.
RMTHOST - The hostname of the remote/target machine.
TABLE - The table that will be extracted (source data).
SOURCEISTABLE - Uses a table as the source for an initial load, as opposed to a manual method. No trail file.

Step 3 Configure Target Database


The following commands are executed:
$ hostname
GGTRG
$ sqlplus / as sysdba
GGTRG> create user gg_test identified by gg_test;
User created.
GGTRG> grant create session, create sequence, create table, unlimited tablespace to gg_test;
Grant succeeded.

Step 4 Configure Replicat


The following commands are executed:
$ hostname
GGTRG
$ . ~/GG.env
$ cd $GGATE

$ ggsci
GGSCI> edit params ./GLOBALS

GGSCHEMA gg_manager
CHECKPOINTTABLE gg_manager.checkpoint_table

GGSCI> dblogin userid gg_manager, password gg_manager
Successfully logged into database.
GGSCI> add checkpointtable gg_manager.checkpoint_table
Successfully created checkpoint table GG_MANAGER.CHECKPOINT_TABLE.
GGSCI> add replicat r_test1, exttrail /app/oracle/GG/dirdat/et, checkpointtable gg_manager.checkpoint_table
REPLICAT added.
GGSCI> edit params r_test1

REPLICAT r_test1
ASSUMETARGETDEFS
USERID gg_manager, PASSWORD gg_manager
DISCARDFILE /app/oracle/GG/discard/r_test1.dis
MAP gg_test.*, target gg_test.*;

Below is the SOURCEISTABLE counterpart method.

GGSCI> add replicat r_test2, specialrun
REPLICAT added.
GGSCI> edit params r_test2

REPLICAT r_test2
ASSUMETARGETDEFS
USERID gg_manager, PASSWORD gg_manager
MAP gg_test.gg_table2, target gg_test.gg_table2;

Keywords used:

GGSCHEMA - The name of the GoldenGate schema set up previously.
CHECKPOINTTABLE - The name of the GoldenGate checkpoint table.
DBLOGIN - Authentication information; note that security and encryption can be applied.
ASSUMETARGETDEFS - States that the source table definition matches the target table.
USERID - Authentication information; note that security and encryption can be applied.
MAP - Translation between source and target.
SPECIALRUN - Identifies this as a one-time task.

Step 5 Start Extract


The extract is started prior to the initial load. The initial load will be done manually with exp/imp. Starting the extract before the initial load ensures that transactions made in the meantime are not lost, as shown in this example.
$ hostname
GGSRC
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> start manager
Manager started.
GGSCI> start extract e_test1
Sending START request to MANAGER ...
EXTRACT E_TEST1 starting
GGSCI> info all

Program     Status      Group       Lag         Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E_TEST1     00:00:00    00:58:08

GGSCI> start extract e_test2
Sending START request to MANAGER ...
EXTRACT E_TEST2 starting
-- Note: the remote table must exist (as an empty table) before running this
-- command; creation of this empty table is not shown.
GGSCI> view report e_test2     (Repeat until the job is done)

Keywords used:

INFO ALL - Gives a quick status of all jobs running.

Step 6 Initial Load


The table is manually loaded using exp/imp. This could be any method: CTAS, RMAN, Oracle Data Pump, etc. The following commands are executed:
$ hostname
GGSRC
$ . ~/GG.env
$ exp tables=gg_test.gg_table file=gg_test

Export: Release 10.2.0.4.0 - Production on Mon Apr 5 10:24:06 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in UTF8 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
Current user changed to GG_TEST
. . exporting table                    GG_TABLE     262144 rows exported
Export terminated successfully without warnings.

$ sqlplus gg_test/gg_test
GGSRC> insert into gg_table values (gg_seq.nextval,'Post Export');
1 row created.
GGSRC> commit;

$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> info all

Program     Status      Group       Lag         Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     E_TEST1     00:00:00    01:05:19

GGSCI> exit
$ tail -5 ggserr.log
2010-04-05 10:21:08  GGS WARNING  150  Oracle GoldenGate Capture for Oracle, e_test1.prm:  TCP/IP error 111 (Connection refused).
2010-04-05 10:21:18  GGS WARNING  150  Oracle GoldenGate Capture for Oracle, e_test1.prm:  TCP/IP error 111 (Connection refused).
2010-04-05 10:21:28  GGS WARNING  150  Oracle GoldenGate Capture for Oracle, e_test1.prm:  TCP/IP error 111 (Connection refused).
2010-04-05 10:21:38  GGS ERROR    150  Oracle GoldenGate Capture for Oracle, e_test1.prm:  TCP/IP error 111 (Connection refused); retries exceeded.
2010-04-05 10:21:38  GGS ERROR    190  Oracle GoldenGate Capture for Oracle, e_test1.prm:  PROCESS ABENDING.

The extract abends because the replicat is not yet working and the data cannot be transferred. The section on operational procedures covers this in more detail. The key here is that this is currently expected. The following is used to complete the initial data load:
$ hostname
GGSRC
$ scp gg_test.dmp GGTRG:
oracle@GGTRG's password:
gg_test.dmp     100%     3328KB     3.3MB/s     00:01

$ hostname
GGTRG
$ imp file=gg_test.dmp full=y

Import: Release 10.2.0.4.0 - Production on Wed Apr 7 03:59:08 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
export client uses UTF8 character set (possible charset conversion)
. importing SYS's objects into SYS
. importing GG_TEST's objects into GG_TEST
. . importing table                  "GG_TABLE"     262144 rows imported
Import terminated successfully without warnings.

Step 7 Start Replicat


The Replicat process runs on the target database and loads data.
$ hostname
GGTRG
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> start manager
Manager started.
GGSCI> start replicat r_test1
Sending START request to MANAGER ...
REPLICAT R_TEST1 starting
GGSCI> info all

Program     Status      Group       Lag         Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     R_TEST1     00:00:00    00:00:09

Step 8 Complete the exercise

The following commands are used to complete this exercise:


$ hostname
GGSRC
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> start extract e_test1
Sending START request to MANAGER ...
EXTRACT E_TEST1 starting
GGSCI> info all

Program     Status      Group       Lag         Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E_TEST1     00:00:00    00:00:09

$ hostname
GGSRC
$ sqlplus gg_test/gg_test
GGSRC> select count(*) from gg_table;

  COUNT(*)
----------
    262145

$ hostname
GGTRG
$ sqlplus gg_test/gg_test
GGTRG> select count(*) from gg_table;

  COUNT(*)
----------
    262145

Mapping and Manipulating Data


Oracle GoldenGate can be used for data warehousing. This means any or all of the following:
1. Multiple sources
2. Remapping into the target
3. Transformation
4. Other

This example builds on the simple setup from above. It now moves data into a new schema, remaps a column, and executes some data transformation. In this example, the following represents the changes only:

Component - Description
Source Schema - GG_TEST
Target Schema - GG_NEW
Source Table - GG_TABLE
Target Table - GG_TABLE and GG_NEW
Transformation - Data in a range is modified
Filter - Only certain data is transferred

Step 1 Setup Database


To move data to table GG_NEW, this table is pre-created as shown here:

$ hostname
GGSRC
$ sqlplus gg_test/gg_test
GGSRC> create table gg_new (GG_NUMBER number, GG_VARCHAR varchar2(200));
Table created.
GGSRC> insert into gg_new values (1,'Test');
1 row created.
GGSRC> commit;
Commit complete.

$ hostname
GGTRG
$ sqlplus / as sysdba
GGTRG> create user gg_new identified by gg_new;
User created.
GGTRG> grant create session, create sequence, create table, unlimited tablespace to gg_new;
Grant succeeded.
GGTRG> connect gg_new/gg_new
Connected.
GGTRG> create table gg_new (GG_NUMBER number, GG_VARCHAR varchar2(200));
Table created.
GGTRG> alter table gg_new add (gg_new_number number);
Table altered.
GGTRG> create sequence gg_seq;
Sequence created.
GGTRG> connect gg_test/gg_test
Connected.
GGTRG> create table gg_new (GG_NUMBER number, GG_VARCHAR varchar2(200));
Table created.

Step 2 Modify Extract


The extract is stopped and modified to capture the new table:
$ hostname
GGSRC
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> stop extract e_test1
Sending STOP request to EXTRACT E_TEST1 ...
Request processed.
GGSCI> edit params e_test1

EXTRACT e_test1
USERID gg_manager, PASSWORD gg_manager
RMTHOST ggtrg.dbaknow.com, MGRPORT 7809
RMTTRAIL /app/oracle/GG/dirdat/et
TABLE gg_test.gg_new;

GGSCI> start extract e_test1
Sending START request to MANAGER ...
EXTRACT E_TEST1 starting

Step 3 Modify Replicat


Notice the addition of filter and mapping clauses:
$ hostname
GGTRG
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> stop replicat r_test1
Sending STOP request to REPLICAT R_TEST1 ...
Request processed.
GGSCI> edit params r_test1

REPLICAT r_test1
ASSUMETARGETDEFS
USERID gg_manager, PASSWORD gg_manager
DISCARDFILE /app/oracle/GG/discard/r_test1.dis
MAP gg_test.gg_new, target gg_new.gg_new, COLMAP (USEDEFAULTS, GG_NEW_NUMBER = @IF (GG_NUMBER > 100, 1, 10));
MAP gg_test.gg_new, target gg_test.gg_new, filter (gg_number > 100);

GGSCI> start replicat r_test1
Sending START request to MANAGER ...
REPLICAT R_TEST1 starting

Step 4 Complete the exercise


The following commands are used to complete this exercise:
$ hostname
GGSRC
$ sqlplus gg_test/gg_test
GGSRC> insert into gg_new values (1,'Test');
1 row created.
GGSRC> insert into gg_new values (1000,'New Test');
1 row created.
GGSRC> commit;
Commit complete.

$ hostname
GGTRG
$ sqlplus gg_test/gg_test
GGTRG> select * from gg_new;

 GG_NUMBER GG_VARCHAR
---------- ----------
      1000 New Test

$ sqlplus gg_new/gg_new
GGTRG> select * from gg_new;

 GG_NUMBER GG_VARCHAR GG_NEW_NUMBER
---------- ---------- -------------
      1000 New Test               1
         1 Test                  10

Active-Active Implementation
An Active-Active implementation is one in which two databases both serve as an extract source and a replicat target for the same objects. Here, the simple example above is used on both nodes. The only differences are noted below. In the extract parameter file add:

TRANLOGOPTIONS EXCLUDEUSER gg_manager

Doing so prevents transactions made by GG_MANAGER from being replicated back.

Note: if sequences are used as part of a primary key, it is the application's responsibility to ensure that both sides do not generate the same keys. The SEQUENCE and MAP keywords can be used by GoldenGate; however, different ranges or settings are best. The steps below are summarized from the simple setup.
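One common way to meet that responsibility, sketched here under the assumption that a sequence such as gg_seq supplies the primary keys, is to give each site a disjoint range by interleaving the sequences (odd values on one side, even on the other):

```sql
-- Sketch only: disjoint surrogate-key ranges for an active-active pair.
-- On GGSRC: generate odd values
create sequence gg_seq start with 1 increment by 2;
-- On GGTRG: generate even values
create sequence gg_seq start with 2 increment by 2;
```

With this arrangement neither site can ever produce a key the other site has already used, so replicated inserts cannot collide on the primary key.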

Step 1 Configure Extract


The following commands are executed:
$ hostname
GGSRC     (run on GGTRG too; there the RMTHOST points back at the other node)
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> add extract e_test1, tranlog, begin now
EXTRACT added.
GGSCI> add exttrail /app/oracle/GG/dirdat/et, extract e_test1
EXTTRAIL added.
GGSCI> add rmttrail /app/oracle/GG/dirdat/et, extract e_test1
RMTTRAIL added.
GGSCI> edit params e_test1

EXTRACT e_test1
USERID gg_manager, PASSWORD gg_manager
TRANLOGOPTIONS EXCLUDEUSER gg_manager
RMTHOST ggtrg.dbaknow.com, MGRPORT 7809
RMTTRAIL /app/oracle/GG/dirdat/et
TABLE gg_test.*;

GGSCI> start extract e_test1

Step 2 Configure Replicat


The following commands are executed:
$ hostname
GGTRG     (on GGSRC too)
$ . ~/GG.env
$ cd $GGATE
$ ggsci
GGSCI> edit params ./GLOBALS

GGSCHEMA gg_manager
CHECKPOINTTABLE gg_manager.checkpoint_table

GGSCI> dblogin userid gg_manager, password gg_manager
Successfully logged into database.
GGSCI> add checkpointtable gg_manager.checkpoint_table
Successfully created checkpoint table GG_MANAGER.CHECKPOINT_TABLE.
GGSCI> add replicat r_test1, exttrail /app/oracle/GG/dirdat/et, checkpointtable gg_manager.checkpoint_table
REPLICAT added.
GGSCI> edit params r_test1

REPLICAT r_test1
ASSUMETARGETDEFS
USERID gg_manager, PASSWORD gg_manager
HANDLECOLLISIONS
DISCARDFILE /app/oracle/GG/discard/r_test1.dis
MAP gg_test.*, target gg_test.*;

GGSCI> start replicat r_test1

Step 3 Complete the exercise


The following commands are used to complete this exercise:
$ hostname
GGSRC
$ sqlplus gg_test/gg_test
GGSRC> select count(*) from gg_table;

  COUNT(*)
----------
    262145

GGSRC> delete gg_table where rownum < 2;
GGSRC> commit;
GGSRC> select count(*) from gg_table;

  COUNT(*)
----------
    262144

GGSRC> insert into gg_table values (-1,'Try');
GGSRC> commit;
GGSRC> select * from gg_table where gg_number = -1;

 GG_NUMBER GG_VARCHAR
---------- ----------
        -1 Try

$ hostname
GGTRG
$ sqlplus gg_test/gg_test
GGTRG> select count(*) from gg_table;

  COUNT(*)
----------
    262144

GGTRG> select * from gg_table where gg_number = -1;

 GG_NUMBER GG_VARCHAR
---------- ----------
        -1 Try

GGTRG> insert into gg_table values (-2,'Try');
1 row created.
GGTRG> delete gg_table where gg_number = -1;
1 row deleted.
GGTRG> commit;
GGTRG> select * from gg_table where gg_number < 0;

 GG_NUMBER GG_VARCHAR
---------- ----------
        -2 Try

$ hostname
GGSRC
$ sqlplus gg_test/gg_test
GGSRC> select * from gg_table where gg_number < 0;

 GG_NUMBER GG_VARCHAR
---------- ----------
        -2 Try

Warm D/R
For the purposes of this paper, most of Warm D/R has already been covered. What is different is starting with a target database that is a restored copy from production. Here, the standby is used as read-only while the active database is read-write. Processes for use after the standby is activated are in place but not active. This setup is similar to active-active; the difference is that the return processes are not active.

Step 1 - Switch
The setup is identical to active/active. If activation of the standby is required:
LAG EXTRACT e_test1     (repeat until the message "At EOF, no more records to process" is returned)
STOP EXTRACT e_test1

On Standby
STATUS REPLICAT r_test1     (ensure the EOF message)
STOP REPLICAT r_test1
ALTER EXTRACT e_test2, BEGIN NOW
START EXTRACT e_test2

On Primary
START REPLICAT r_test2

Repeat above to switch back.

Create Database Copy


To copy the database from the standby to the primary system:
1. On the primary system, run scripts to disable triggers and cascade delete constraints.
2. On the standby system, start making a hot copy of the database.
3. On the standby system, record the time at which the copy finishes.
4. On the standby system, stop user access to the applications. Allow all open transactions to be completed.

To propagate data changes made during the copy:
1. On the primary system, start Replicat.
START REPLICAT <rep_2>

2. On the live standby system, start the data pump. This begins transmission of the accumulated user transactions from the standby to the trail on the primary system.
START EXTRACT <pump_2>

3. On the primary system, issue the INFO REPLICAT command until you see that it has posted all of the data changes that users generated on the standby system during the initial load. Refer to the time that you recorded previously. For example, if the copy stopped at 12:05, make sure that change replication has posted data up to that point.
INFO REPLICAT <rep_2>

4. On the primary system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <rep_2>, NOHANDLECOLLISIONS

5. On the primary system, issue the STATUS REPLICAT command until it returns At EOF (end of file) to confirm that Replicat applied all of the data from the trail to the database.
STATUS REPLICAT <rep_2>

6. On the live standby system, stop the data pump. This stops transmission of any user transactions from the standby to the trail on the primary system.
STOP EXTRACT <pump_2>

7. On the primary system, stop the Replicat process.


STOP REPLICAT <rep_2>

At this point in time, the primary and standby databases should be in a state of synchronization again.

(Optional) To verify synchronization:
1. Use a compare tool, such as GoldenGate Veridata, to compare the source and standby databases for parity.
2. Use a repair tool, such as GoldenGate Veridata, to repair any out-of-sync conditions.

To switch users to the primary system:
1. On the primary system, run the script that grants insert, update, and delete permissions to the users of the business applications.
2. On the primary system, run the script that enables triggers and cascade delete constraints.
3. On the primary system, run the scripts that fail over the application server, start applications, and copy essential files that are not part of the replication environment.
4. On the primary system, start the primary Extract process.
START EXTRACT <ext_1>

5. On the primary system, allow users to access the applications.

DDL
All examples so far have been with DML. DDL can also be replicated. This sample simply modifies the Simple Setup from this paper to add DDL. Note that DDL cannot be set up in a bi-directional manner; this can be done with a warm standby. The simple setup above already enabled DDL support in the database. The only addition is a keyword added to the extract and replicat parameter files.

Extract:
DDL include mapped objname gg_test.*;

Replicat:
DDL
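Combining the pieces already shown, a sketch of the Simple Setup extract parameter file with DDL capture added would look like this (the replicat file gains just the DDL keyword):

```text
EXTRACT e_test1
USERID gg_manager, PASSWORD gg_manager
RMTHOST ggtrg.dbaknow.com, MGRPORT 7809
RMTTRAIL /app/oracle/GG/dirdat/et
-- Capture DDL for the mapped objects as well as DML
DDL include mapped objname gg_test.*;
TABLE gg_test.gg_table;
```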

Troubleshooting
Common commands are required for normal procedures. These are the most popular questions:
1. One DML error occurred; how can I skip this transaction and proceed?
2. A DDL statement failed; how do I fix it?
3. Replicat keeps abending; how do I fix it?
4. A table is no longer in sync; what can I do? (See the Veridata tool.)

Keywords exist that are added to the parameter files. These keywords dictate the actions to take when problems occur.

For DDL: DDLERROR (e.g., in Replicat - DDLERROR 942 IGNORE)

For DML: REPERROR

Parameter - Description
ABEND - Stop and abend the replicat.
DISCARD - Move the error to the discard file and continue processing.
EXCEPTION - Send the error to an exception handler.
IGNORE - As if the error never occurred.
RETRYOP - Retry up to a set number of max times.
TRANSABORT - Can add delays before retries.
RESET - Clear any rules and put back to ABEND.

The following is a common exception handler that puts the bad row into a table:

REPERROR (DEFAULT, EXCEPTION)
MAP gg_test.gg_table, TARGET gg_new.gg_new, &
  COLMAP (USEDEFAULTS);
MAP gg_test.gg_table, TARGET gg_new.gg_new_exception, &
  EXCEPTIONSONLY, &
  INSERTALLRECORDS, &
  COLMAP (USEDEFAULTS, &
    DML_DATE = @DATENOW(), &
    OPTYPE = @GETENV("LASTERR", "OPTYPE"), &
    DBERRNUM = @GETENV("LASTERR", "DBERRNUM"));

Other Options within parameter files


REPERROR DEFAULT, DISCARD (sends failed records to the discard file)
END <date>, e.g. END 2011-02-11 02:52:00 (stop processing when the timestamp is reached)

Useful Commands
To view current status:
GGSRC> INFO <extract_name>, SHOWCH

To restart a Replicat from a point in time:
GGTRG> ALTER REPLICAT <replicat_name>, BEGIN 2011-02-21 23:00:00

To restart from a specific RBA (note that EXTSEQNO is the sequence number, which correlates to the trail file number):
GGTRG> ALTER REPLICAT <replicat_name>, EXTSEQNO 4524, EXTRBA 14871661

Logdump options
pos 0 - positions the cursor at the beginning of the file
count - returns the number of records in the file
n - skip to the next record
sfh prev - go back one record
skip 1000 - skip the next 1,000 records
filter include filename SPECIFIC_TABLE_NAME - filter on only records for that table (make sure the filter is on)
count detail - gives record counts for each table, broken down by transaction type
count start 2010-04-30 00:30:00, end 2010-04-30 01:00:00 - count the number of records between these times
count file SPECIFIC_TABLE_NAME - provides detailed counts for just the table specified

To find the quiet time in a trail file:
1. Determine the actual downtime from the logs.
2. Locate the trail file that spans the quiet time.
3. count start 2010-04-28 20:58:36, end 2010-04-28 23:00:00
   LogTrail <trail name> has 636602 records
4. pos 0
5. skip 636601 (note that this is the record count minus 1)
6. n
7. n (these two records will be just before and just after the quiet time)

Note: a truncate operation will have a type of 100 - GGSPurgedata.
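Putting the quiet-time steps together, a Logdump session might look like the following. The timestamps and record count are taken from the example above; the trail file name and the annotations are illustrative, and the annotation lines are commentary, not Logdump input:

```
Logdump> open ./dirdat/aa000004
Logdump> count start 2010-04-28 20:58:36, end 2010-04-28 23:00:00
LogTrail ./dirdat/aa000004 has 636602 records
Logdump> pos 0
Logdump> skip 636601
-- 636601 is the record count minus 1, so the cursor sits on the last record
Logdump> n
-- this record is just before the quiet time
Logdump> n
-- this record is just after the quiet time
```

The gap between the timestamps of those last two records is the quiet window in the trail, which is useful when deciding where to reposition a Replicat.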

Veridata
Veridata is a tool that is installed and licensed separately but is part of the Oracle GoldenGate product set. Once the databases are running, Veridata is used to verify that synchronization is correct, and it can apply fixes if required. GoldenGate Veridata compares one set of data to another and identifies data that is out of synchronization. It supports high-volume, 24x7 replication environments where downtime to compare data sets is not an option. By accounting for data that is being replicated while a comparison takes place, GoldenGate Veridata can run concurrently with data transactions and replication while still producing an accurate comparison report. GoldenGate Veridata can compare data across different database platforms, including Oracle. It maps column data types automatically, or you can map columns manually in cases where the automatic mapping is not sufficient to accommodate format differences in a heterogeneous environment. For detailed information about this feature, see the DBAK paper on Veridata.

GG vs Logical Standby and Active DG


Oracle has other tools: Logical Standby, Streams, and Active Data Guard. These other tools seem to do what GoldenGate already does, so why GoldenGate? GoldenGate does these things that are not done by the other products:
1. Works across database versions, platforms, and operating systems
2. Supports active/active configurations
3. Creates an intermediate transfer method that can be parallelized (not just redo logs)
4. Can compare databases to check for synchronization (i.e., Veridata)
5. Is a logical copy, not a physical copy
6. Provides a series of features, such as table mapping, transformations, and multiple sources
7. Veridata can be used to validate databases

Logical Standby and Data Guard:
1. Use Oracle archived log files
2. Require like Oracle environments
3. Data Guard is a physical copy

Streams:
1. Data is queued in the database
2. Slower than GoldenGate
3. Increased administration
4. Only Oracle to Oracle

Conclusion
Oracle GoldenGate is an asynchronous, log-based, real-time data replication product that moves high volumes of transactional data between heterogeneous databases with very low latency. A typical environment includes a capture, pump, and delivery process. Each of these processes can run on most of the popular operating systems and databases, both Oracle and non-Oracle. All or a portion of the data may be replicated, and the data within any of these processes may be manipulated, supporting not only heterogeneous environments but also different database schemas. Oracle GoldenGate supports multi-master replication, hub-and-spoke deployment, and data transformation, giving customers very flexible options to address the complete range of replication requirements. Oracle GoldenGate is also an excellent product for minimizing downtime during planned maintenance, including application and database upgrades, in addition to platform migrations. Oracle GoldenGate is an Oracle product sold independently of the Oracle Database, for Oracle and third-party database management systems. It is available for both Oracle Database Enterprise Edition and Oracle Database Standard Edition.
