
GoldenGate Tutorial 2 – Installation (Oracle 11g on Linux)


This example illustrates the installation of Oracle GoldenGate on an RHEL 5 platform. In an earlier post we discussed the architecture and the various components of a GoldenGate environment.

GoldenGate software is also available on OTN but for our platform we need to download the required software from the Oracle E-Delivery web site.

Select the Product Pack “Oracle Fusion Middleware” and the platform Linux X86-64.

Then select "Oracle GoldenGate on Oracle Media Pack for Linux x86-64" and, since we are installing this for an Oracle 11g database, download "Oracle GoldenGate V10.4.0.x for Oracle 11g 64bit on RedHat 5.0".

$ unzip V18159-01.zip
Archive: V18159-01.zip
inflating: ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar

$ tar -xvof ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar

$ export PATH=$PATH:/u01/oracle/ggs

$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/oracle/ggs

$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 17 2009 23:51:28

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

GGSCI (redhat346.localdomain) 1>

GGSCI (redhat346.localdomain) 1> CREATE SUBDIRS

Creating subdirectories under current directory /u01/app/oracle/product/11.2.0/dbhome_1

Parameter files /u01/oracle/ggs/dirprm: created
Report files /u01/oracle/ggs/dirrpt: created
Checkpoint files /u01/oracle/ggs/dirchk: created
Process status files /u01/oracle/ggs/dirpcs: created
SQL script files /u01/oracle/ggs/dirsql: created
Database definitions files /u01/oracle/ggs/dirdef: created
Extract data files /u01/oracle/ggs/dirdat: created
Temporary files /u01/oracle/ggs/dirtmp: created
Veridata files /u01/oracle/ggs/dirver: created
Veridata Lock files /u01/oracle/ggs/dirver/lock: created
Veridata Out-Of-Sync files /u01/oracle/ggs/dirver/oos: created
Veridata Out-Of-Sync XML files /u01/oracle/ggs/dirver/oosxml: created
Veridata Parameter files /u01/oracle/ggs/dirver/params: created
Veridata Report files /u01/oracle/ggs/dirver/report: created
Veridata Status files /u01/oracle/ggs/dirver/status: created
Veridata Trace files /u01/oracle/ggs/dirver/trace: created
Stdout files /u01/oracle/ggs/dirout: created

We then need to create a database user which will be used by the GoldenGate Manager, Extract and Replicat processes. We can create individual users for each process or configure just a common user – in our case we will create the one user GGS_OWNER and grant it the required privileges.

SQL> create tablespace ggs_data
2 datafile '/u02/oradata/gavin/ggs_data01.dbf' size 200m;

SQL> create user ggs_owner identified by ggs_owner
2 default tablespace ggs_data
3 temporary tablespace temp;

User created.

SQL> grant connect,resource to ggs_owner;

Grant succeeded.

SQL> grant select any dictionary, select any table to ggs_owner;

Grant succeeded.

SQL> grant create table to ggs_owner;

Grant succeeded.

SQL> grant flashback any table to ggs_owner;

Grant succeeded.

SQL> grant execute on dbms_flashback to ggs_owner;

Grant succeeded.

SQL> grant execute on utl_file to ggs_owner;

Grant succeeded.

We can then confirm that the GoldenGate user we have just created is able to connect to the Oracle database

$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
AIX 5L, ppc, 64bit (optimized), Oracle 11 on Sep 17 2009 23:54:16

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

GGSCI (devu007) 1> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
Successfully logged into database.

We also need to enable supplemental logging at the database level; otherwise we will get this error when we try to start the Extract process:

2010-02-08 13:51:21 GGS ERROR 190 No minimum supplemental logging is enabled. This may cause extract process to handle key update incorrectly if key
column is not in first row piece.

2010-02-08 13:51:21 GGS ERROR 190 PROCESS ABENDING.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Database altered

Coming Next! – configuring the Manager, Extract and Replicat processes and setting up online Change Synchronization


GoldenGate Tutorial 3 – Configuring the Manager process


The Oracle GoldenGate Manager performs a number of functions like starting the other GoldenGate processes, trail log file management and reporting.

The Manager process needs to be configured on both the source as well as target systems and configuration is carried out via a parameter file just as in the case of the other GoldenGate processes like Extract and Replicat.

After installation of the software, we launch the GoldenGate Software Command Interface (GGSCI) and issue the following command to edit the Manager parameter file

EDIT PARAMS MGR

The only mandatory parameter that we need to specify is PORT, which defines the port on the local system where the Manager process runs. The default port is 7809 and we can either accept the default or specify another port, provided the port is available and not restricted in any way.

Another recommended optional parameter is AUTOSTART, which automatically starts the Extract and Replicat processes when the Manager starts.

The USERID and PASSWORD parameters are required if you enable GoldenGate DDL support; this is the Oracle user account that we created for the Manager (and Extract/Replicat) as described in the earlier tutorial.

The Manager process can also clean up trail files from disk when GoldenGate has finished processing them via the PURGEOLDEXTRACTS parameter. Used with the USECHECKPOINTS clause, it ensures that the trail files are not deleted until all processes have finished using the data contained in them.

The following is an example of a manager parameter file

[oracle@redhat346 ggs]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 17 2009 23:51:28

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

GGSCI 2> EDIT PARAMS MGR

PORT 7809
USERID ggs_owner, PASSWORD ggs_owner
PURGEOLDEXTRACTS /u01/oracle/ggs/dirdat/ex, USECHECKPOINTS
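
If we also want the Manager to start our Extract and Replicat groups automatically, an AUTOSTART entry can be added. The following is only a sketch of how the parameter file might then look (the wildcard assumes we want every group started):

PORT 7809
USERID ggs_owner, PASSWORD ggs_owner
AUTOSTART ER *
PURGEOLDEXTRACTS /u01/oracle/ggs/dirdat/ex, USECHECKPOINTS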

The Manager can be stopped and started via the GGSCI commands START MANAGER and STOP MANAGER.

Information on the status of the Manager can be obtained via the INFO MANAGER command

GGSCI (devu007) 4> info manager

Manager is running (IP port devu007.7809).

Coming Next – Configuring an Online Extract and Replicat Group …..

Oracle GoldenGate Tutorial 4 – performing initial data load


This example illustrates using the GoldenGate direct load method to extract records from an Oracle 11g database on Red Hat Linux platform and load the same into an Oracle 11g target database on an AIX platform.

The table PRODUCTS in the SH schema on the source has 72 rows and on the target database the same table is present only in structure without any data. We will be loading the 72 rows in this example from the source database to the target database using GoldenGate Direct Load method.

On Source

1) Create the Initial data extract process ‘load1′

GGSCI (redhat346.localdomain) 5> ADD EXTRACT load1, SOURCEISTABLE
EXTRACT added.

Since this is a one-time data extract task, the source of the data is not the transaction log files of the RDBMS (in this case the online and archived redo log files) but the table data itself, which is why the keyword SOURCEISTABLE is used.

2) Create the parameter file for the extract group load1

EXTRACT: name of the extract group
USERID/PASSWORD: the database user which has been configured earlier for Extract (this user is created in the source database)
RMTHOST: This will be the IP address or hostname of the target system
MGRPORT: the port where the Manager process is running
TABLE: specify the table which is being extracted and replicated. This can be specified in a number of ways using wildcard characters to include or exclude tables as well as entire schemas.
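
For illustration only (the table names here are hypothetical and TABLEEXCLUDE is assumed to be available in this release), a wildcard specification with an exclusion might look like this:

TABLE sh.*;
TABLEEXCLUDE sh.sales_history;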

GGSCI (redhat346.localdomain) 6> EDIT PARAMS load1

EXTRACT load1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTASK replicat, GROUP load2
TABLE sh.products;

On Target

3) Create the initial data load task ‘load2′

Since this is a one time data load task, we are using the keyword SPECIALRUN

GGSCI (devu007) 1> ADD REPLICAT load2, SPECIALRUN
REPLICAT added.

4) Create the parameter file for the Replicat group, load2

REPLICAT: name of the Replicat group created for the initial data load
USERID/PASSWORD: database credentials for the Replicat user (this user is created in the target database)
ASSUMETARGETDEFS: this means that the source table structure exactly matches the target database table structure
MAP: with GoldenGate the target database structure can differ entirely from that of the source in terms of table names as well as the column definitions of the tables. This parameter provides the mapping between the source and target tables, which is the same in this case

GGSCI (devu007) 2> EDIT PARAMS load2

“/u01/oracle/software/goldengate/dirprm/rep4.prm” [New file]

REPLICAT load2
USERID ggs_owner, PASSWORD ggs_owner
ASSUMETARGETDEFS
MAP sh.products, TARGET sh.products;

On Source

SQL> select count(*) from products;

COUNT(*)
----------
72

On Target

SQL> select count(*) from products;

COUNT(*)
----------
0

On Source

5) Start the initial load data extract task on the source system

We now start the initial data load task load1 on the source. Since this is a one-time task, we will initially see that the Extract process is running, and after the data load is complete it will be stopped. We do not have to manually start the Replicat process on the target, as that is done when the Extract task is started on the source system.

On Source

GGSCI (redhat346.localdomain) 16> START EXTRACT load1

Sending START request to MANAGER …
EXTRACT LOAD1 starting

GGSCI (redhat346.localdomain) 28> info extract load1

EXTRACT LOAD1 Last Started 2010-02-11 11:33 Status RUNNING
Checkpoint Lag Not Available
Log Read Checkpoint Table SH.PRODUCTS
2010-02-11 11:33:16 Record 72
Task SOURCEISTABLE

GGSCI (redhat346.localdomain) 29> info extract load1

EXTRACT LOAD1 Last Started 2010-02-11 11:33 Status STOPPED
Checkpoint Lag Not Available
Log Read Checkpoint Table SH.PRODUCTS
2010-02-11 11:33:16 Record 72
Task SOURCEISTABLE

On Target

SQL> select count(*) from products;

COUNT(*)
----------
72

Coming Soon! – Creating an Online Extract and Replicat Group for Change Synchronization …..

Oracle GoldenGate Tutorial 5 – configuring online change synchronization


In our earlier tutorial, we examined how to create a GoldenGate environment for initial data capture and load.

In this tutorial, we will see how by using GoldenGate change synchronization, changes that occur on the source (Oracle 11g on Linux) are applied near real time on the target (Oracle 11g on AIX). The table on the source is the EMP table in SCOTT schema which is being replicated to the EMP table in the target database SH schema.

These are the steps that we will take:

Create a GoldenGate Checkpoint table
Create an Extract group
Create a parameter file for the online Extract group
Create a Trail
Create a Replicat group
Create a parameter file for the online Replicat group

Create the GoldenGate Checkpoint table

GoldenGate maintains its own checkpoints, which record a known position in the trail file from which the Replicat process will resume processing after any kind of error or shutdown. This ensures data integrity, and a record of these checkpoints is maintained either in files stored on disk or in a table in the database, which is the preferred option.

We can also create a single checkpoint table which can be used by all Replicat groups from one or many GoldenGate instances.

In one of the earlier tutorials we created the GLOBALS file. We now need to edit that GLOBALS file via the EDIT PARAMS command and add a CHECKPOINTTABLE entry specifying the checkpoint table name, which will then be available to all Replicat processes.

GGSCI (devu007) 2> EDIT PARAMS ./GLOBALS

GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB

GGSCI (devu007) 4> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
Successfully logged into database.

GGSCI (devu007) 6> ADD CHECKPOINTTABLE GGS_OWNER.CHKPTAB

Successfully created checkpoint table GGS_OWNER.CHKPTAB.

apex:/u01/oracle/software/goldengate> sqlplus ggs_owner/ggs_owner

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 8 09:02:19 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc chkptab

Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NAME                                NOT NULL VARCHAR2(8)
 GROUP_KEY                                 NOT NULL NUMBER(19)
 SEQNO                                              NUMBER(10)
 RBA                                       NOT NULL NUMBER(19)
 AUDIT_TS                                           VARCHAR2(29)
 CREATE_TS                                 NOT NULL DATE
 LAST_UPDATE_TS                            NOT NULL DATE
 CURRENT_DIR                               NOT NULL VARCHAR2(255)

Create the Online Extract Group

GGSCI (redhat346.localdomain) 1> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
EXTRACT added.

Create the Trail

We now create a trail. Note that this path pertains to the GoldenGate software location on the target system; this is where the trail files with the prefix 'rt' will be created, and they will be read by the Replicat process also running on the target system.

GGSCI (redhat346.localdomain) 2> ADD RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt, EXTRACT ext1
RMTTRAIL added.

Create a parameter file for the online Extract group ext1

GGSCI (redhat346.localdomain) 3> EDIT PARAMS ext1

EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt
TABLE scott.emp;

ON TARGET SYSTEM

Create the online Replicat group

GGSCI (devu007) 7> ADD REPLICAT rep1, EXTTRAIL /u01/oracle/software/goldengate/dirdat/rt
REPLICAT added.

Note that the EXTTRAIL location which is on the target local system conforms to the RMTTRAIL parameter which we used when we created the parameter file for the extract process on the source system.

Create a parameter file for the online Replicat group, rep1

GGSCI (devu007) 8> EDIT PARAMS rep1

REPLICAT rep1
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP scott.emp, TARGET sh.emp;

ON SOURCE

Start the Extract process

GGSCI (redhat346.localdomain) 16> START EXTRACT ext1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI (redhat346.localdomain) 17> STATUS EXTRACT ext1
EXTRACT EXT1: RUNNING

GGSCI (redhat346.localdomain) 16> INFO EXTRACT ext1

EXTRACT EXT1 Last Started 2010-02-08 14:27 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:09 ago)
Log Read Checkpoint Oracle Redo Logs
2010-02-08 14:27:48 Seqno 145, RBA 724480

ON TARGET

Start the Replicat process

GGSCI (devu007) 1> START REPLICAT rep1
Sending START request to MANAGER …
REPLICAT REP1 starting

GGSCI (devu007) 2> INFO REPLICAT rep1

REPLICAT REP1 Last Started 2010-02-08 14:55 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/rt000001
2010-02-08 14:27:57.600425 RBA 1045

Note: the trail file has a prefix of ‘rt’ (which we had defined earlier)

LET US NOW TEST …

ON SOURCE

SQL> conn scott/tiger
Connected.

SQL> UPDATE emp SET sal=9999 WHERE ename='KING';

1 row updated.

SQL> COMMIT;

Commit complete.

ON TARGET

SQL> SELECT SAL FROM emp WHERE ename='KING';

SAL
----------
9999

Coming Next! – configuring GoldenGate Data Pump …..

Oracle GoldenGate Tutorial 6 – configuring Data Pump process


The Data Pump (not to be confused with the Oracle Export Import Data Pump) is an optional secondary Extract group that is created on the source system. When Data Pump is not used, the Extract process writes to a remote trail that is located on the target system using TCP/IP. When Data Pump is configured, the Extract process writes to a local trail and from here Data Pump will read the trail and write the data over the network to the remote trail located on the target system.

The main advantage is protection against a network failure. When there is no local trail, the Extract process holds the captured data in memory before sending it over the network, so a network failure could cause the Extract process to abort (abend); with a Data Pump, the captured data is first persisted in the local trail on the source system. In addition, any complex data transformation or filtering can be performed by the Data Pump. It is also useful when we are consolidating data from several sources into one central target, where a Data Pump on each individual source system can write to one common trail file on the target.

Create the Extract process

GGSCI (devu007) 1> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
EXTRACT added.

Create a local trail

Using the ADD EXTTRAIL command we will now create a local trail on the source system, which the Extract process will write to and which is then read by the Data Pump process. We will link this local trail to the primary Extract group we just created, ext1.

GGSCI (devu007) 3> ADD EXTTRAIL /u01/oracle/software/goldengate/dirdat/lt, EXTRACT ext1
EXTTRAIL added.

Create the Data Pump group

On the source system create the Data Pump group and, using the EXTTRAILSOURCE keyword, specify the location of the local trail which will be read by the Data Pump process.

GGSCI (devu007) 4> ADD EXTRACT dpump, EXTTRAILSOURCE /u01/oracle/software/goldengate/dirdat/lt
EXTRACT added.

Create the parameter file for the Primary Extract group

GGSCI (devu007) 5> EDIT PARAMS ext1

“/u01/oracle/software/goldengate/dirprm/ext1.prm” [New file]

EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
EXTTRAIL /u01/oracle/software/goldengate/dirdat/lt
TABLE MONITOR.WORK_PLAN;

Specify the location of the remote trail on the target system

Use the RMTTRAIL parameter to specify the location of the remote trail and associate it with the Data Pump group, as it will be written to over the network by the Data Pump process.

GGSCI (devu007) 6> ADD RMTTRAIL /u01/oracle/ggs/dirdat/rt, EXTRACT dpump
RMTTRAIL added.

Create the parameter file for the Data Pump group

Note: the PASSTHRU parameter signifies the mode being used for the Data Pump, which means that the names of the source and target objects are identical and no column mapping or filtering is being performed here.

GGSCI (devu007) 2> EDIT PARAMS dpump

“/u01/oracle/software/goldengate/dirprm/dpump.prm” [New file]

EXTRACT dpump
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST redhat346, MGRPORT 7809
RMTTRAIL /u01/oracle/ggs/dirdat/rt
PASSTHRU
TABLE MONITOR.WORK_PLAN;

ON TARGET SYSTEM

Create the Replicat group

The EXTTRAIL clause indicates the location of the remote trail and should be the same as the RMTTRAIL value that was used when creating the Data Pump process on the source system.

GGSCI (redhat346.localdomain) 2> ADD REPLICAT rep1, EXTTRAIL /u01/oracle/ggs/dirdat/rt
REPLICAT added.

Create the parameter file for the Replicat group

GGSCI (redhat346.localdomain) 3> EDIT PARAMS rep1

REPLICAT rep1
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP MONITOR.WORK_PLAN, TARGET MONITOR.WORK_PLAN;

ON SOURCE

On the source system, now start the Extract and Data Pump processes.

GGSCI (devu007) 3> START EXTRACT ext1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI (devu007) 4> START EXTRACT dpump

Sending START request to MANAGER …
EXTRACT DPUMP starting

GGSCI (devu007) 5> info extract ext1

EXTRACT EXT1 Last Started 2010-02-18 11:23 Status RUNNING
Checkpoint Lag 00:40:52 (updated 00:00:09 ago)
Log Read Checkpoint Oracle Redo Logs
2010-02-18 10:42:19 Seqno 761, RBA 15086096

GGSCI (devu007) 6> INFO EXTRACT dpump

EXTRACT DPUMP Last Started 2010-02-18 11:23 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/lt000000
2010-02-18 11:15:10.000000 RBA 5403

Note: the Data Pump process is reading from the local trail file /u01/oracle/software/goldengate/dirdat/lt000000

ON TARGET SYSTEM

Start the Replicat process

GGSCI (redhat346.localdomain) 4> START REPLICAT rep1

Sending START request to MANAGER …
REPLICAT REP1 starting

GGSCI (redhat346.localdomain) 5> STATUS REPLICAT rep1
REPLICAT REP1: RUNNING

Coming Next! – DDL change synchronization …

Oracle GoldenGate Tutorial 7 – configuring DDL synchronization


In addition to providing replication support for all DML statements, we can also configure the GoldenGate environment to provide DDL support as well.

A number of prerequisite setup tasks need to be performed, which we will highlight here.

Run the following scripts from the directory where the GoldenGate software was installed.

The assumption here is that the database user GGS_OWNER has already been created and granted the required roles and privileges as discussed in our earlier tutorial.


Note - run the scripts as SYSDBA

SQL> @marker_setup

Marker setup script

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_OWNER


Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_OWNER

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK

Script complete.



SQL> alter session set recyclebin=OFF;
Session altered.


SQL> @ddl_setup

GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: On Oracle 10g and up, system recycle bin must be disabled.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_OWNER

You will be prompted for the mode of installation.
To install or reinstall DDL replication, enter INITIALSETUP
To upgrade DDL replication, enter NORMAL
Enter mode of installation:INITIALSETUP

Working, please wait ...
Spooling to file ddl_setup_spool.txt


Using GGS_OWNER as a GoldenGate schema name, INITIALSETUP as a mode of installation.

Working, please wait ...

RECYCLEBIN must be empty.
This installation will purge RECYCLEBIN for all users.
To proceed, enter yes. To stop installation, enter no.

Enter yes or no:yes


DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_OWNER

DDLORA_GETTABLESPACESIZE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CLEAR_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CREATE_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

TRACE_PUT_LINE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

INITIAL_SETUP STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
-----------------------------------
ENABLED

STAYMETADATA IN TRIGGER
-----------------------------------
OFF

DDL TRIGGER SQL TRACING
-----------------------------------
0

DDL TRIGGER TRACE LEVEL
-----------------------------------
0

LOCATION OF DDL TRACE FILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/gavin/gavin/trace/ggs_ddl_trace.log

Analyzing installation status...


STATUS OF DDL REPLICATION
--------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.
SQL>




SQL> @role_setup

GGS Role setup script

This script will drop and recreate the role GGS_GGSUSER_ROLE
To use a different role name, quit this script and then edit the params.sql script to change
the gg_role parameter to the preferred name. (Do not run the script.)

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_OWNER
Wrote file role_setup_set.txt

PL/SQL procedure successfully completed.


Role setup script complete

Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <user>

where <user> is the user assigned to the GoldenGate processes.


SQL> grant ggs_ggsuser_role to ggs_owner;

Grant succeeded.


SQL> @ddl_enable

Trigger altered.



SQL> @ddl_pin GGS_OWNER

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Turn Recyclebin OFF

We need to set the RECYCLEBIN parameter to OFF via the ALTER SYSTEM SET RECYCLEBIN=OFF command in order to prevent the following error, which we will see if we try to configure DDL support and then start the Extract process:

2010-02-19 11:13:30 GGS ERROR 2003 RECYCLEBIN must be turned off. For 10gr2 and up, set RECYCLEBIN in parameter file to OFF. For 10gr1, set _RECYCLEBI
N in parameter file to FALSE. Then restart database and extract.
2010-02-19 11:13:30 GGS ERROR 190 PROCESS ABENDING.
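
As a hedged illustration only (the exact scope clause depends on the database version, and as the message above indicates a restart of the database and the Extract process is needed for the change to take effect), the parameter can be set along these lines:

SQL> ALTER SYSTEM SET RECYCLEBIN=OFF SCOPE=SPFILE;
-- then restart the database and restart the Extract process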

Enable additional logging at the table level

Note: we had earlier enabled supplemental logging at the database level. Using the ADD TRANDATA command we now enable it at the table level as well, as this is required by GoldenGate for DDL support.

GGSCI (redhat346.localdomain) 5> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
Successfully logged into database.

GGSCI (redhat346.localdomain) 6> ADD TRANDATA scott.emp

Logging of supplemental redo data enabled for table SCOTT.EMP.

Edit the parameter file for the Extract process to enable DDL synchronization

We had earlier created a parameter file for an Extract process ext1. We now edit that parameter file and add the entry
DDL INCLUDE MAPPED

This means that DDL support is now enabled for all tables which have been mapped, and in this case it will only apply to the SCOTT.EMP table as that is the only table being processed here. We can also use INCLUDE ALL, EXCLUDE ALL, or wildcard characters to specify which objects DDL support should be enabled for (see the sketch after the parameter file below).

GGSCI (redhat346.localdomain) 1> EDIT PARAM EXT1

EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST 10.53.100.100, MGRPORT 7809
RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt
DDL INCLUDE MAPPED
TABLE scott.emp;
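
For illustration only (the OBJNAME clause and the object names here are assumptions, not part of the configuration used in this tutorial), a wildcard-based DDL specification might look like this:

DDL INCLUDE OBJNAME "scott.*"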

Test the same

We will now alter the structure of the EMP table by adding a column and we can see that this new table structure is also reflected on the target system.

On Source

SQL> ALTER TABLE EMP ADD NEW_COL VARCHAR2(10);
Table altered.

On Target

SQL> desc emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(20)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)
 MYCOL                                              VARCHAR2(10)
 NEW_COL                                            VARCHAR2(10)

Coming Next! – Filtering Data and Data manipulation and transformation

Oracle GoldenGate Tutorial 8 – Filtering and Mapping data


Oracle GoldenGate not only provides us with a replication solution that is Oracle version independent as well as platform independent, but it can also be used for data transformation and data manipulation between the source and the target.

So we can use GoldenGate when the source and target databases differ in table structure, as well as use it as an ETL tool in a data warehouse type environment.

We will discuss below two examples to demonstrate this feature – column mapping and filtering of data.

In example 1, we will filter the records that are extracted on the source and applied on the target: only rows in the MYEMP table where the JOB column value equals 'MANAGER' will be considered for extraction.

In example 2, we will deal with a case where the table structure is different between the source database and the target database and see how column mapping is performed in such cases.

Example 1

Initial load of all rows which match the filter from source to target. The target database MYEMP table will only be populated with rows from the source MYEMP table where the filter criterion JOB='MANAGER' is met.

On Source

GGSCI (redhat346.localdomain) 4> add extract myload1, sourceistable
EXTRACT added.

GGSCI (redhat346.localdomain) 5> edit params myload1

EXTRACT myload1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTASK replicat, GROUP myload1
TABLE scott.myemp, FILTER (@STRFIND (job, "MANAGER") > 0);

On Target

GGSCI (devu007) 2> add replicat myload1, specialrun
REPLICAT added.

GGSCI (devu007) 3> edit params myload1

“/u01/oracle/software/goldengate/dirprm/myload1.prm” [New file]
REPLICAT myload1
USERID ggs_owner, PASSWORD ggs_owner
ASSUMETARGETDEFS
MAP scott.myemp, TARGET sh.myemp;

On Source – start the initial load extract

GGSCI (redhat346.localdomain) 6> start extract myload1

Sending START request to MANAGER …
EXTRACT MYLOAD1 starting

On SOURCE

SQL> select count(*) from myemp;

COUNT(*)
----------
14

SQL> select count(*) from myemp where job='MANAGER';

COUNT(*)
----------
9

On TARGET

SQL> select count(*) from myemp where job='MANAGER';

COUNT(*)
----------
9

Create an online change extract and replicat group using a Filter

GGSCI (redhat346.localdomain) 10> add extract myload2, tranlog, begin now
EXTRACT added.

GGSCI (redhat346.localdomain) 11> add rmttrail /u01/oracle/software/goldengate/dirdat/bb, extract myload2
RMTTRAIL added.

GGSCI (redhat346.localdomain) 11> edit params myload2

EXTRACT myload2
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST 10.53.200.225, MGRPORT 7809
RMTTRAIL /u01/oracle/software/goldengate/dirdat/bb
TABLE scott.myemp, FILTER (@STRFIND (job, "MANAGER") > 0);

On Target

GGSCI (devu007) 2> add replicat myload2, exttrail /u01/oracle/software/goldengate/dirdat/bb
REPLICAT added.

GGSCI (devu007) 3> edit params myload2

“/u01/oracle/software/goldengate/dirprm/myload2.prm” [New file]
REPLICAT myload2
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP scott.myemp, TARGET sh.myemp;

On Source – start the online extract group

GGSCI (redhat346.localdomain) 13> start extract myload2

Sending START request to MANAGER …
EXTRACT MYLOAD2 starting

GGSCI (redhat346.localdomain) 14> info extract myload2

EXTRACT MYLOAD2 Last Started 2010-02-23 11:04 Status RUNNING
Checkpoint Lag 00:27:39 (updated 00:00:08 ago)
Log Read Checkpoint Oracle Redo Logs
2010-02-23 10:36:51 Seqno 214, RBA 103988

On Target

GGSCI (devu007) 4> start replicat myload2

Sending START request to MANAGER …
REPLICAT MYLOAD2 starting

GGSCI (devu007) 5> info replicat myload2

REPLICAT MYLOAD2 Last Started 2010-02-23 11:05 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/bb000000
First Record RBA 989

On Source we now insert two rows into the MYEMP table – one which has the JOB value of ‘MANAGER’ and the other row which has the job value of ‘SALESMAN’


On SOURCE

SQL> INSERT INTO MYEMP
2 (empno,ename,job,sal)
3 VALUES
4 (1234,'GAVIN','MANAGER',10000);

1 row created.

SQL> commit;

Commit complete.

SQL> INSERT INTO MYEMP
2 (empno,ename,job,sal)
3 VALUES
4 (1235,'BOB','SALESMAN',1000);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from myemp;
COUNT(*)
----------
16

SQL> select count(*) from myemp where job='MANAGER';

COUNT(*)
----------
10

On Target, we will see that even though two rows have been inserted into the source MYEMP table, on the target MYEMP table only one row is inserted because the filter has been applied which only includes the rows where the JOB value equals ‘MANAGER’.

SQL> select count(*) from myemp;

COUNT(*)
----------
10

Example 2 – source and target table differ in column structure

In the source MYEMP table we have a column named SAL whereas on the target, the same MYEMP table has the column defined as SALARY.

Create a definitions file on the source using DEFGEN utility and then copy that definitions file to the target system

GGSCI (redhat346.localdomain) > EDIT PARAMS defgen

DEFSFILE /u01/oracle/ggs/dirsql/myemp.sql
USERID ggs_owner, PASSWORD ggs_owner
TABLE scott.myemp;

[oracle@redhat346 ggs]$ ./defgen paramfile /u01/oracle/ggs/dirprm/defgen.prm

***********************************************************************
Oracle GoldenGate Table Definition Generator for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 18 2009 00:09:13

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

Starting at 2010-02-23 11:22:17
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Wed Dec 17 11:41:38 EST 2008, Release 2.6.18-128.el5
Node: redhat346.localdomain
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited

Process id: 14175

***********************************************************************
** Running with the following parameters **
***********************************************************************
DEFSFILE /u01/oracle/ggs/dirsql/myemp.sql
USERID ggs_owner, PASSWORD *********
TABLE scott.myemp;
Retrieving definition for SCOTT.MYEMP

Definitions generated for 1 tables in /u01/oracle/ggs/dirsql/myemp.sql

If we were to run the Replicat process on the target without copying the definitions file, we would see an error as shown below, which reflects the fact that the column definitions of the source and target tables are different and GoldenGate is not able to resolve that.

2010-02-23 11:31:07 GGS WARNING 218 Aborted grouped transaction on ‘SH.MYEMP’, Database error 904 (ORA-00904: “SAL”: invalid identifier).

2010-02-23 11:31:07 GGS WARNING 218 SQL error 904 mapping SCOTT.MYEMP to SH.MYEMP OCI Error ORA-00904: “SAL”: invalid identifier (status = 904), SQL .

We then ftp the definitions file from the source to the target system – in this case to the dirsql directory located in the top level GoldenGate installed software directory
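
For example, the copy can be done with scp (the directories shown are the GoldenGate locations used earlier in this tutorial; adjust as required):

$ scp /u01/oracle/ggs/dirsql/myemp.sql oracle@devu007:/u01/oracle/software/goldengate/dirsql/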

We now edit the original Replicat parameter file and replace the ASSUMETARGETDEFS parameter with SOURCEDEFS, which provides GoldenGate with the location of the definitions file.

The other parameter which is included is COLMAP, which specifies how the column mapping is performed. The USEDEFAULTS keyword denotes that all the other columns in both tables are identical; only the SAL and SALARY columns differ, and we map the SAL column on the source to the SALARY column on the target.

REPLICAT myload2
SOURCEDEFS /u01/oracle/software/goldengate/dirsql/myemp.sql
USERID ggs_owner, PASSWORD ggs_owner
MAP scott.myemp, TARGET sh.myemp,
COLMAP (usedefaults,
salary = sal);

We now start the original Replicat process myload2, which had abended because of the column mismatch (now corrected via the parameter change), and we see that the process runs without any error.

GGSCI (devu007) 2> info replicat myload2

REPLICAT MYLOAD2 Last Started 2010-02-23 11:05 Status ABENDED
Checkpoint Lag 00:00:03 (updated 00:11:44 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/bb000000
2010-02-23 11:31:03.999504 RBA 1225

GGSCI (devu007) 3> start replicat myload2

Sending START request to MANAGER …
REPLICAT MYLOAD2 starting

GGSCI (devu007) 4> info replicat myload2

REPLICAT MYLOAD2 Last Started 2010-02-23 11:43 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/bb000000
2010-02-23 11:31:03.999504 RBA 1461

Coming Next! – Monitoring the GoldenGate environment …..

Oracle GoldenGate Tutorial 9 – Monitoring GoldenGate


The following tutorial will briefly discuss the different commands we can use to monitor the GoldenGate environment and get statistics and reports on various extract and replicat operations which are in progress.

More details can be obtained from Chapter 19 of the Oracle GoldenGate Windows and Unix Administration guide – Monitoring GoldenGate processing.

Information on all GoldenGate processes running on a system


GGSCI (devu007) 21> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     DPUMP       00:00:00      00:00:04
EXTRACT     RUNNING     EXT1        00:00:00      00:00:09
EXTRACT     RUNNING     EXT2        00:00:00      00:00:07
EXTRACT     ABENDED     GAVIN       00:00:00      73:29:25
EXTRACT     STOPPED     WORKPLAN    00:00:00      191:44:18
REPLICAT    RUNNING     MYLOAD2     00:00:00      00:00:09
REPLICAT    RUNNING     MYREP       00:00:00      00:00:08


Find the run status of a particular process

GGSCI (devu007) 23> status manager

Manager is running (IP port devu007.7809).

GGSCI (devu007) 24> status extract ext1
EXTRACT EXT1: RUNNING


Detailed information of a particular process


GGSCI (devu007) 6> info extract ext1, detail

EXTRACT    EXT1      Last Started 2010-02-19 11:19   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2010-02-26 10:45:18  Seqno 786, RBA 44710400

  Target Extract Trails:

  Remote Trail Name                                Seqno        RBA     Max MB

  /u01/oracle/software/goldengate/dirdat/lt            2      55644         10

  Extract Source                          Begin             End

  /u02/oradata/apex/redo03.log            2010-02-19 11:13  2010-02-26 10:45
  /u02/oradata/apex/redo02.log            2010-02-19 11:04  2010-02-19 11:13
  /u02/oradata/apex/redo02.log            2010-02-18 10:42  2010-02-19 11:04
  Not Available                           * Initialized *   2010-02-18 10:42


Current directory    /u01/oracle/software/goldengate

Report file          /u01/oracle/software/goldengate/dirrpt/EXT1.rpt
Parameter file       /u01/oracle/software/goldengate/dirprm/ext1.prm
Checkpoint file      /u01/oracle/software/goldengate/dirchk/EXT1.cpe
Process file         /u01/oracle/software/goldengate/dirpcs/EXT1.pce
Stdout file          /u01/oracle/software/goldengate/dirout/EXT1.out
Error log            /u01/oracle/software/goldengate/ggserr.log

Monitoring an Extract recovery 


GGSCI (devu007) 35> send extract ext1 status

Sending STATUS request to EXTRACT EXT1 ...


  EXTRACT EXT1 (PID 1925238)
  Current status: Recovery complete: At EOF
  Sequence #: 786
  RBA: 40549888
  Timestamp: 2010-02-26 09:59:57.000000

  Output trail #1
  Current write position:
  Sequence #: 2
  RBA: 55644
  Timestamp: 2010-02-26 09:59:54.337574
  Extract Trail: /u01/oracle/software/goldengate/dirdat/lt


Monitoring processing volume - Statistics of the operations processed 

GGSCI (devu007) 33> stats extract ext1

Sending STATS request to EXTRACT EXT1 ...

Start of Statistics at 2010-02-26 09:58:27.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                  19.00
        Mapped operations                            2.00
        Unmapped operations                          9.00
        Other operations                             8.00
        Excluded operations                         17.00

Output to /u01/oracle/software/goldengate/dirdat/lt:

Extracting from GGS_OWNER.GGS_MARKER to GGS_OWNER.GGS_MARKER:

*** Total statistics since 2010-02-19 11:21:03 ***

        No database operations have been performed.

*** Daily statistics since 2010-02-26 00:00:00 ***

        No database operations have been performed.

*** Hourly statistics since 2010-02-26 09:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-02-19 11:21:03 ***

        No database operations have been performed.

Extracting from MONITOR.WORK_PLAN to MONITOR.WORK_PLAN:

*** Total statistics since 2010-02-19 11:21:03 ***
        Total inserts                                4.00
        Total updates                               46.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                            50.00

*** Daily statistics since 2010-02-26 00:00:00 ***
        Total inserts                                0.00
        Total updates                               16.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                            16.00

*** Hourly statistics since 2010-02-26 09:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-02-19 11:21:03 ***
        Total inserts                                4.00
        Total updates                               46.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                            50.00

End of Statistics.


View processing rate - can use 'hr','min' or 'sec' as a parameter


GGSCI (devu007) 37> stats extract ext2 reportrate hr

Sending STATS request to EXTRACT EXT2 ...

Start of Statistics at 2010-02-26 10:04:46.

Output to /u01/oracle/ggs/dirdat/cc:

Extracting from SH.CUSTOMERS to SH.CUSTOMERS:

*** Total statistics since 2010-02-26 09:29:48 ***
        Total inserts/hour:                          0.00
        Total updates/hour:                      95258.62
        Total deletes/hour:                          0.00
        Total discards/hour:                         0.00
        Total operations/hour:                   95258.62

*** Daily statistics since 2010-02-26 09:29:48 ***
        Total inserts/hour:                          0.00
        Total updates/hour:                      95258.62
        Total deletes/hour:                          0.00
        Total discards/hour:                         0.00
        Total operations/hour:                   95258.62

*** Hourly statistics since 2010-02-26 10:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-02-26 09:29:48 ***
        Total inserts/hour:                          0.00
        Total updates/hour:                      95258.62
        Total deletes/hour:                          0.00
        Total discards/hour:                         0.00
        Total operations/hour:                   95258.62

End of Statistics.


View latency between the records processed by Goldengate and the timestamp in the data source


GGSCI (devu007) 13>  send extract ext2, getlag

Sending GETLAG request to EXTRACT EXT2 ...
Last record lag: 3 seconds.
At EOF, no more records to process.


GGSCI (devu007) 15> lag extract ext*

Sending GETLAG request to EXTRACT EXT1 ...
Last record lag: 1 seconds.
At EOF, no more records to process.

Sending GETLAG request to EXTRACT EXT2 ...
Last record lag: 1 seconds.
At EOF, no more records to process.

Viewing the GoldenGate error log as well as history of commands executed and other events

We can use an editor appropriate to the operating system (vi on Unix, for example) to view the ggserr.log file, which is located in the top-level GoldenGate software installation directory.

We can also use the GGSCI command VIEW GGSEVT as well.
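
For example (the error log path shown is the installation directory used on the target system in these tutorials):

$ vi /u01/oracle/software/goldengate/ggserr.log

GGSCI> VIEW GGSEVT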

View the process report

Every Manager, Extract and Replicat process generates a report file at the end of each run, and this report can be viewed to diagnose any problems or errors as well as to check the parameters used, the environment variables in use, memory consumption, etc.

For example:

GGSCI (devu007) 2> view report ext1

GGSCI (devu007) 2> view report rep1

GGSCI (devu007) 2> view report mgr

Information on Child processes started by the Manager


GGSCI (devu007) 8> send manager childstatus

Sending CHILDSTATUS request to MANAGER ...

Child Process Status - 6 Entries

ID     Group     Process    Retry Retry Time            Start Time
----  --------  ----------  ----- ------------------    -----------
   0     EXT1     1925238      0  None                 2010/02/19 11:07:54
   1    DPUMP     2195496      0  None                 2010/02/19 11:08:02
   2   MSSQL1      422034      0  None                 2010/02/22 13:54:59
   4    MYREP     1302702      0  None                 2010/02/23 09:08:34
   6  MYLOAD2     1200242      0  None                 2010/02/23 11:05:01
   7     EXT2     2076844      0  None                 2010/02/26 08:29:22

Coming Next! – using GoldenGate to perform a 10g to 11g Cross Platform database upgrade and platform migration ….


Oracle GoldenGate Tutorial 10 – performing a zero downtime cross platform migration and 11g database upgrade


This note briefly describes the steps required to perform a cross platform database migration (AIX to Red Hat Linux) together with a database upgrade from 10g to 11g Release 2, achieved with zero downtime using a combination of RMAN, cross platform transportable tablespaces (TTS) and GoldenGate.

This is the environment that we will be referring to in this note:

10.2.0.4 Database on AIX – DB10g
10.2.0.4 Duplicate database on AIX – Clonedb
11.2 database on Linux – DB11g

Steps

1) Create the GoldenGate Extract process on the source AIX database DB10g and start it. This Extract process will capture changes as they occur on the 10g AIX database and write them to remote trail files located on the Linux target system. Since the Replicat process is not running on the target at this time, the source database changes will accumulate in the extract trail files.

GGSCI (devu026) 12> add extract myext, tranlog, begin now
EXTRACT added.

GGSCI (devu026) 13> add rmttrail /u01/oracle/ggs/dirdat/my, extract myext
RMTTRAIL added.

GGSCI (devu026) 14> edit params myext

“/u01/rapmd2/ggs/dirprm/myext.prm” 7 lines, 143 characters
EXTRACT myext
USERID ggs_owner, PASSWORD ggs_owner
SETENV (ORACLE_HOME = "/u01/oracle/product/10.2/rapmd2")
SETENV (ORACLE_SID = "db10g")
RMTHOST 10.1.210.35, MGRPORT 7809
RMTTRAIL /u01/oracle/ggs/dirdat/my
DISCARDFILE discard.txt, APPEND
TABLE sh.*;
TABLE hr.*;
TABLE pm.*;
TABLE oe.*;
TABLE ix.*;

START THE EXTRACT PROCESS NOW

GGSCI (devu026) 16> START EXTRACT MYEXT

Sending START request to MANAGER …
EXTRACT MYEXT starting

GGSCI (devu026) 17> INFO EXTRACT MYEXT

EXTRACT MYEXT Last Started 2010-03-04 08:42 Status RUNNING
Checkpoint Lag 00:31:07 (updated 00:00:01 ago)
Log Read Checkpoint Oracle Redo Logs
2010-03-04 08:11:26 Seqno 8, RBA 2763280

2) Using RMAN create a duplicate database in the source AIX environment (Clonedb) – this database will be used as the source for the export of database structure (no rows export) and tablespace meta data

Follow this white paper to get all the steps involved.
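
As a rough sketch only (the connection details and auxiliary instance preparation are assumptions here and are covered in full in the white paper), the duplication itself comes down to an RMAN DUPLICATE command run against a prepared auxiliary instance:

RMAN> CONNECT TARGET sys/password@db10g
RMAN> CONNECT AUXILIARY sys/password@clonedb
RMAN> DUPLICATE TARGET DATABASE TO clonedb;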

***********ON SOURCE – UPDATE 1**********

SQL> conn sh/sh
Connected.
SQL> update mycustomers set cust_city='Singapore';

55500 rows updated.

SQL> commit;

Commit complete.

3) Create a skeleton database on the Linux platform in the 11g Release 2 environment – DB11g

Note: we will then set up the GoldenGate user GGS_OWNER in this database, grant it the required privileges and create the checkpoint table. Refer to the earlier tutorials which detail the setup of the GGS_OWNER user (a condensed recap follows).
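
As a hedged recap only (the user name, password and privileges mirror those used in the earlier tutorials), the setup on DB11g would look something like this:

SQL> create user ggs_owner identified by ggs_owner;
SQL> grant connect, resource to ggs_owner;
SQL> grant select any dictionary, select any table, create table to ggs_owner;
SQL> grant flashback any table to ggs_owner;
SQL> grant execute on dbms_flashback to ggs_owner;
SQL> grant execute on utl_file to ggs_owner;

GGSCI> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
GGSCI> ADD CHECKPOINTTABLE GGS_OWNER.CHKPTAB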

4) Take a full export of the database without any table data to get just the structure of the database – this is now taken from the clonedb duplicate database created in step 2

db10g:/u01/oracle> expdp dumpfile=full_norows.dmp directory =dumpdir content=metadata_only exclude=tables,index full=y

Export: Release 10.2.0.4.0 – 64bit Production on Thursday, 04 March, 2010 9:02:44

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting “SYS”.”SYS_EXPORT_FULL_01″: sys/******** AS SYSDBA dumpfile=full_norows.dmp directory =dumpdir content=metadata_only exclude=tables,index full=y
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
…………………
…………………….

5) Import the dumpfile into the 11g database DB11g which has the database structure without the table data – this will create all the users, roles, synonyms etc

We had to create a role and also create the directory before doing the full database import. Ignore the errors during the import, as they pertain to objects which already exist in the skeleton database.

SQL> create role xdbwebservices;

Role created.

SQL> create directory dumpdir as '/u01/oracle';

Directory created.

[oracle@redhat346 ~]$ impdp dumpfile=full_norows.dmp directory=dumpdir full=y

Import: Release 11.2.0.1.0 – Production on Mon Mar 8 13:09:16 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

…………
……….

6) On the clonedb database, we will now export the tablespace metadata after first making the required tablespaces read only (see below). Note that the original source 10g database remains in read write mode and is still being accessed by the users, so there is no downtime as yet.
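
For example, on clonedb (the tablespace names are the ones used in the transportable export that follows):

SQL> alter tablespace example read only;

Tablespace altered.

SQL> alter tablespace tts read only;

Tablespace altered.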

clonedb:/u01/rapmd2/ggs> expdp dumpfile=tts_meta.dmp directory =dumpdir transport_tablespaces=EXAMPLE,TTS

Export: Release 10.2.0.4.0 – 64bit Production on Monday, 08 March, 2010 13:01:38

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
Starting “SYS”.”SYS_EXPORT_TRANSPORTABLE_01″: sys/******** AS SYSDBA dumpfile=tts_meta.dmp directory =dumpdir transport_tablespaces=EXAMPLE,TTS
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table “SYS”.”SYS_EXPORT_TRANSPORTABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oracle/tts_meta.dmp
Job “SYS”.”SYS_EXPORT_TRANSPORTABLE_01″ successfully completed at 13:02:17

7) Copy the datafiles belonging to the read only tablespaces (from clonedb) to the target Linux system and, using RMAN, convert the datafiles from the AIX platform to the Linux platform (see the example copy below, followed by the RMAN conversion).
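
The copy can be done with any binary-safe transfer, for example scp (the source datafile locations on the clonedb host are assumptions here; the target directory matches the input file names used in the RMAN conversion below):

$ scp /u02/oradata/clonedb/example01.dbf oracle@redhat346:/u01/oracle/
$ scp /u02/oradata/clonedb/tts01.dbf oracle@redhat346:/u01/oracle/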

RMAN> CONVERT DATAFILE '/u01/oracle/example01.dbf'
2> FROM PLATFORM='AIX-Based Systems (64-bit)'
3> FORMAT '/u02/oradata/db11g/example01.dbf';

Starting conversion at target at 08-MAR-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input file name=/u01/oracle/example01.dbf
converted datafile=/u02/oradata/db11g/example01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished conversion at target at 08-MAR-10

RMAN> CONVERT DATAFILE '/u01/oracle/tts01.dbf'
2> FROM PLATFORM='AIX-Based Systems (64-bit)'
3> FORMAT '/u02/oradata/db11g/tts01.dbf';

Starting conversion at target at 08-MAR-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input file name=/u01/oracle/tts01.dbf
converted datafile=/u02/oradata/db11g/tts01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished conversion at target at 08-MAR-10

8) Import the tablespace metadata into the 11g database to plug in the tablespaces, then make the tablespaces read write.

[oracle@redhat346 ~]$ impdp dumpfile=tts_meta.dmp directory=dumpdir transport_datafiles="/u02/oradata/db11g/example01.dbf","/u02/oradata/db11g/tts01.dbf"

Import: Release 11.2.0.1.0 – Production on Mon Mar 8 13:21:37 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Partitioning and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_TRANSPORTABLE_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_TRANSPORTABLE_01″: sys/******** AS SYSDBA dumpfile=tts_meta.dmp directory=dumpdir transport_datafiles=/u02/oradata/db11g/example01.dbf,/u02/oradata/db11g/tts01.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
ORA-31684: Object type TYPE:”PM”.”ADHEADER_TYP” already exists
ORA-31684: Object type TYPE:”PM”.”TEXTDOC_TYP” already exists
ORA-31684: Object type TYPE:”IX”.”ORDER_EVENT_TYP” already exists
ORA-31684: Object type TYPE:”OE”.”PHONE_LIST_TYP” already exists
ORA-31684: Object type TYPE:”OE”.”CUST_ADDRESS_TYP” already exists
ORA-31684: Object type TYPE:”PM”.”TEXTDOC_TAB” already exists
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
…………………..
……………………………..

SQL> alter tablespace tts read write;

Tablespace altered.

SQL> alter tablespace example read write;

Tablespace altered.

***********ON SOURCE – UPDATE 2**********

SQL> conn sh/sh
Connected.
SQL> update mycustomers set cust_city=’Hong Kong’;

55500 rows updated.

SQL> commit;

Commit complete.

Note:

As we make changes in the source database, the trail files on the target start getting populated. These are located in the destination we specified when creating the RMTTRAIL.
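
If we are not sure where the trail files are being written, the trail attributes can also be checked from GGSCI on the system where the extract (or data pump) writing the trail was added. For example, the following command reports each remote trail name, the extract group writing to it and the current file sequence number:

GGSCI> info rmttrail *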

[oracle@redhat346 dirdat]$ pwd
/u01/oracle/ggs/dirdat

[oracle@redhat346 dirdat]$ ls -lrt

-rw-rw-rw- 1 oracle oinstall 9999950 Mar 8 09:41 gs000000
-rw-rw-rw- 1 oracle oinstall 9999641 Mar 8 09:41 gs000001
-rw-rw-rw- 1 oracle oinstall 9999629 Mar 8 10:00 gs000003
-rw-rw-rw- 1 oracle oinstall 9999724 Mar 8 10:00 gs000002
-rw-rw-rw- 1 oracle oinstall 9999741 Mar 8 10:00 gs000004
-rw-rw-rw- 1 oracle oinstall 2113226 Mar 8 10:00 gs000005
-rw-rw-rw- 1 oracle oinstall 9999791 Mar 8 10:35 rm000000
-rw-rw-rw- 1 oracle oinstall 9999721 Mar 8 10:35 rm000001
-rw-rw-rw- 1 oracle oinstall 9999249 Mar 8 10:49 rm000003
-rw-rw-rw- 1 oracle oinstall 9999309 Mar 8 10:49 rm000002
-rw-rw-rw- 1 oracle oinstall 9999818 Mar 8 10:49 rm000004
-rw-rw-rw- 1 oracle oinstall 9999430 Mar 8 10:49 rm000005
-rw-rw-rw- 1 oracle oinstall 9999412 Mar 8 10:49 rm000006
-rw-rw-rw- 1 oracle oinstall 9999588 Mar 8 10:54 rm000007
-rw-rw-rw- 1 oracle oinstall 9999481 Mar 8 10:54 rm000009
-rw-rw-rw- 1 oracle oinstall 9999399 Mar 8 10:54 rm000008
-rw-rw-rw- 1 oracle oinstall 9999787 Mar 8 10:54 rm000010
-rw-rw-rw- 1 oracle oinstall 9999770 Mar 8 10:57 rm000011
-rw-rw-rw- 1 oracle oinstall 9999941 Mar 8 10:57 rm000012
-rw-rw-rw- 1 oracle oinstall 9999913 Mar 8 10:57 rm000013
-rw-rw-rw- 1 oracle oinstall 9999429 Mar 8 11:09 rm000014
-rw-rw-rw- 1 oracle oinstall 9999812 Mar 8 11:09 rm000015
-rw-rw-rw- 1 oracle oinstall 9999240 Mar 8 11:09 rm000016
-rw-rw-rw- 1 oracle oinstall 9999454 Mar 8 11:09 rm000017
-rw-rw-rw- 1 oracle oinstall 9999914 Mar 8 11:09 rm000018
-rw-rw-rw- 1 oracle oinstall 9999820 Mar 8 11:16 rm000019
-rw-rw-rw- 1 oracle oinstall 9999766 Mar 8 11:16 rm000020
-rw-rw-rw- 1 oracle oinstall 9999706 Mar 8 12:56 rm000021
-rw-rw-rw- 1 oracle oinstall 9999577 Mar 8 12:56 rm000022
-rw-rw-rw- 1 oracle oinstall 9999841 Mar 8 12:56 rm000023
-rw-rw-rw- 1 oracle oinstall 9999890 Mar 8 13:26 rm000024
-rw-rw-rw- 1 oracle oinstall 9999604 Mar 8 13:26 rm000025
-rw-rw-rw- 1 oracle oinstall 9999536 Mar 8 13:26 rm000026
-rw-rw-rw- 1 oracle oinstall 918990 Mar 8 13:26 rm000027

9) On the target Linux environment, we now create and start the GoldenGate Replicat process (or processes). The Replicat will read the Extract trail files created earlier and apply the changes to the 11g database.

GGSCI (redhat346.localdomain) 1> add replicat myrep, exttrail /u01/oracle/ggs/dirdat/rm
REPLICAT added.

GGSCI (redhat346.localdomain) 6> edit params myrep

REPLICAT myrep
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/dbhome_1")
SETENV (ORACLE_SID = "db11g")
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP sh.*, TARGET sh.*;
MAP pm.*, TARGET pm.*;
MAP oe.*, TARGET oe.*;
MAP hr.*, TARGET hr.*;
MAP ix.*, TARGET ix.*;

10) Once all the changes in the trail files have been applied by the Replicat process and we have confirmed that both source and target are in sync (we can use another GoldenGate product called Veridata for this), we can point the users and the application to the 11g Linux database with minimal or no downtime, depending on the infrastructure.
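
If Veridata is not available, a quick (though far less thorough) sanity check is to simply compare row counts of the key tables on both sides once the Replicat has caught up. A minimal sketch, assuming a database link called SOURCE_DB has been created on the target pointing back to the source database:

SQL> select count(*) from sh.mycustomers;
SQL> select count(*) from sh.mycustomers@source_db;

If the counts match for all the replicated tables, we have at least a basic indication that the two sides are in sync before the cutover.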

We can see the Replicat process working through the trail files until it has completed processing all of them

GGSCI (redhat346.localdomain) 131> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:42 Status RUNNING
Checkpoint Lag 03:07:37 (updated 00:00:17 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000002
2010-03-08 10:35:27.001328 RBA 6056361
…….
………..

GGSCI (redhat346.localdomain) 156> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:42 Status RUNNING
Checkpoint Lag 02:53:49 (updated 00:00:00 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000007
2010-03-08 10:49:39.001103 RBA 2897635

………………
……………..

GGSCI (redhat346.localdomain) 133> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:48 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:07 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000027
2010-03-08 13:26:43.000861 RBA 918990

GGSCI (redhat346.localdomain) 134> lag replicat myrep

Sending GETLAG request to REPLICAT MYREP …
Last record lag: 1363 seconds.
At EOF, no more records to process.

TEST!

Now check and confirm from the database that the second update statement (UPDATE 2) run on the source database has been applied on the target

SQL> select distinct cust_city from mycustomers;

CUST_CITY
——————————
Hong Kong

We can now point our clients to the upgraded 11g database!

Coming next in the series! – Installing and configuring GoldenGate Director …..

Using GoldenGate for real time data integration – SQL Server to Oracle 11g


I would like to share a simple test case which explains how we can use GoldenGate to replicate data between a Microsoft SQL Server 2005 source and an Oracle 11g target database on a Red Hat Linux platform.

We can use a number of third party tools as well as Oracle’s SQL Developer to generate scripts to convert SQL Server DDL into Oracle compliant DDL – I will try and cover this conversion aspect in a future post.

In this case, I have created the table in the Oracle 11g database using the CREATE TABLE statement shown further below. Note that while the column names are the same, the data types are different, and to cater for this difference between the two databases we have to create a data definitions file.

Let us assume we have a DEPT table in the HumanResources schema of the AdventureWorks database.

The structure of the table in SQL Server 2005 is as follows:

CREATE TABLE [HumanResources].[dept](
[DepartmentID] [smallint] NOT NULL,
[Name] [dbo].[Name] NOT NULL,
[GroupName] [dbo].[Name] NOT NULL,
[ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_Department_ModifiedDate] DEFAULT (getdate()),
CONSTRAINT [PK_Dept_DepartmentID] PRIMARY KEY CLUSTERED
(
[DepartmentID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Let us now create the table in the GGS_OWNER schema in the target 11g database with the same structure.

Create the table in Oracle 11g database

SQL>CREATE TABLE dept
(departmentid number not null,
name varchar2(50),
groupname varchar2(50),
modifieddate date default sysdate)

SQL> /

Table created.

SQL> alter table dept add constraint
2 pk_dept primary key (departmentid);

Table altered.

We now enable additional logging for the DEPT table via the ADD TRANDATA command.

GGSCI (Dell-PC) 1> dblogin sourcedb sql2005
Successfully logged into database.

GGSCI (Dell-PC) 2> add trandata humanresources.dept

Logging of supplemental log data is enabled for table HumanResources.dept

Because the data types differ between SQL Server and Oracle, we need to create a data definitions file using the defgen utility as shown below.

GGSCI (Dell-PC)> edit params defgen

defsfile d:\goldengate\dirdef\dept.def
sourcedb sql2005
table humanresources.dept;

D:\goldengate>defgen paramfile d:\goldengate\dirprm\defgen.prm

***********************************************************************
Oracle GoldenGate Table Definition Generator for ODBC
Version 10.4.0.19 Build 002
Windows x64 (optimized), Microsoft SQL Server on Sep 21 2009 09:40:36

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

Starting at 2010-06-20 14:48:46
***********************************************************************

Operating System Version:
Microsoft Windows 7 , on x64
Version 6.1 (Build 7600: )

Process id: 7900

***********************************************************************
** Running with the following parameters **
***********************************************************************
defsfile d:\goldengate\dirdef\dept.def
sourcedb sql2005
table humanresources.dept;
Retrieving definition for HUMANRESOURCES.DEPT

Definitions generated for 1 tables in d:\goldengate\dirdef\dept.def

We will now FTP or SCP the data definitions file which was generated to the following GoldenGate directory on the target Linux machine:

/home/oracle/goldengate/dirdef
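
For example, the file could be copied across from the Windows machine with a command line secure copy client. This is only a sketch; it assumes the pscp utility is available on the Windows machine and that 192.168.10.94 is the target Linux server used later in this post:

D:\goldengate>pscp d:\goldengate\dirdef\dept.def oracle@192.168.10.94:/home/oracle/goldengate/dirdef/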

We will now create the initial data extract process – initext

GGSCI (Dell-PC) 3> edit params initext

SOURCEISTABLE
SOURCEDB SQL2005
RMTHOST 192.168.10.94, MGRPORT 7809
RMTFILE /home/oracle/goldengate/dirdat/ex
TABLE humanresources.dept;

On the target, we will create the initial data load process – initrep

Note that since this is a one time operation we are using the keyword SPECIALRUN. We also include the keyword SOURCEDEFS to specify the data definitions file location – this is the file we had generated on the Windows source and had copied to the Linux target.

GGSCI (linux01.oncalldba.com) 2> edit params initrep

SPECIALRUN
END RUNTIME
USERID ggs_owner, PASSWORD ggs_owner
EXTFILE /home/oracle/goldengate/dirdat/ex
sourcedefs /home/oracle/goldengate/dirdef/dept.def
MAP humanresources.dept, TARGET ggs_owner.dept ;

Start the initial load job from the Windows command line in the GoldenGate directory

D:\goldengate>extract paramfile dirprm\initext.prm reportfile dirrpt\initext.rpt

***********************************************************************
Oracle GoldenGate Capture for ODBC
Version 10.4.0.19 Build 002
Windows x64 (optimized), Microsoft SQL Server on Sep 21 2009 09:42:03

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

Starting at 2010-06-20 15:07:55
***********************************************************************

Operating System Version:
Microsoft Windows 7 , on x64
Version 6.1 (Build 7600: )

Process id: 7204

Description:

***********************************************************************
** Running with the following parameters **
***********************************************************************
SOURCEISTABLE

2010-06-20 15:07:55 GGS INFO 414 Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
SOURCEDB SQL2005
RMTHOST 192.168.10.94, MGRPORT 7809
RMTFILE /home/oracle/goldengate/dirdat/ex
TABLE humanresources.dept;
Using the following key columns for source table HUMANRESOURCES.DEPT: DepartmentID.

CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE: 64K
CACHESIZE: 4G
CACHEBUFFERSIZE (soft max): 4M
CACHEPAGEOUTSIZE (normal): 4M
PROCESS VM AVAIL FROM OS (min): 4.77G
CACHESIZEMAX (strict force to disk): 4.57G

Database Version:
Microsoft SQL Server
Version 09.00.4035
ODBC Version 03.80.0000

Driver Information:
SQLNCLI.DLL
Version 09.00.4035
ODBC Version 03.52

Database Language and Character Set:

Warning: Unable to determine the application and database codepage settings.
Please refer to user manual for more information.

2010-06-20 15:07:56 GGS INFO Z0-05M Output file /home/oracle/goldengate/dirdat/ex is using format RELEASE 10.4.

2010-06-20 15:08:01 GGS INFO 406 Socket buffer size set to 27985 (flush size 27985).

Processing table HUMANRESOURCES.DEPT

***********************************************************************
* ** Run Time Statistics ** *
***********************************************************************

Report at 2010-06-20 15:08:01 (activity since 2010-06-20 15:07:56)

Output to /home/oracle/goldengate/dirdat/ex:

From Table HUMANRESOURCES.DEPT:
# inserts: 5
# updates: 0
# deletes: 0
# discards: 0

Start the initial replicat process on the Linux machine

[oracle@linux01 goldengate]$ ./replicat paramfile dirprm/initrep.prm
***********************************************************************
Oracle GoldenGate Delivery for Oracle
Version 10.4.0.19 Build 002
Linux, x86, 32bit (optimized), Oracle 11 on Sep 29 2009 09:00:07

Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.

Starting at 2010-06-20 15:10:06
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Mon Mar 29 20:19:03 EDT 2010, Release 2.6.18-194.el5PAE
Node: linux01.oncalldba.com
Machine: i686
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited

Process id: 22748

Description:

***********************************************************************
** Running with the following parameters **
***********************************************************************
SPECIALRUN
END RUNTIME
USERID ggs_owner, PASSWORD *********
EXTFILE /home/oracle/goldengate/dirdat/ex
sourcedefs /home/oracle/goldengate/dirdef/dept.def
MAP humanresources.dept, TARGET ggs_owner.dept ;

CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE: 64K
CACHESIZE: 512M
CACHEBUFFERSIZE (soft max): 4M
CACHEPAGEOUTSIZE (normal): 4M
PROCESS VM AVAIL FROM OS (min): 1G
CACHESIZEMAX (strict force to disk): 881M

Database Version:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 – Production
PL/SQL Release 11.1.0.6.0 – Production
CORE 11.1.0.6.0 Production
TNS for Linux: Version 11.1.0.6.0 – Production
NLSRTL Version 11.1.0.6.0 – Production

Database Language and Character Set:
NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
NLS_LANGUAGE = "AMERICAN"
NLS_TERRITORY = "AMERICA"
NLS_CHARACTERSET = "AL32UTF8"

Warning: NLS_LANG is not set. Please refer to user manual for more information.
Opened trail file /home/oracle/goldengate/dirdat/ex at 2010-06-20 15:10:06

2010-06-20 15:10:06 GGS INFO 379 Positioning with begin time: Jan 1, 1970 12:00:00 AM, starting record time: Jun 20, 2010 3:20:57 PM at extrba 807.

***********************************************************************
** Run Time Messages **
***********************************************************************

Opened trail file /home/oracle/goldengate/dirdat/ex at 2010-06-20 15:10:06

MAP resolved (entry HUMANRESOURCES.DEPT):
MAP HUMANRESOURCES.DEPT, TARGET ggs_owner.dept ;
Using following columns in default map by name:
DEPARTMENTID, NAME, GROUPNAME, MODIFIEDDATE

Using the following key columns for target table GGS_OWNER.DEPT: DEPARTMENTID.

***********************************************************************
* ** Run Time Statistics ** *
***********************************************************************

Last record for the last committed transaction is the following:
___________________________________________________________________
Trail name : /home/oracle/goldengate/dirdat/ex
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 133 (x0085) IO Time : 2010-06-20 15:07:35.802212
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 0 AuditPos : 0
Continued : N (x00) RecCount : 1 (x01)

2010-06-20 15:07:35.802212 Insert Len 133 RBA 1517
Name: HUMANRESOURCES.DEPT
___________________________________________________________________

Reading /home/oracle/goldengate/dirdat/ex, current RBA 1720, 5 records

Report at 2010-06-20 15:10:07 (activity since 2010-06-20 15:10:07)

From Table HUMANRESOURCES.DEPT to GGS_OWNER.DEPT:
# inserts: 5
# updates: 0
# deletes: 0
# discards: 0

Last log location read:
FILE: /home/oracle/goldengate/dirdat/ex
RBA: 1720
TIMESTAMP: 2010-06-20 15:07:35.802212
EOF: NO
READERR: 400

We will now connect as ggs_owner in the target Oracle database, where we can see that there are now 5 rows in the DEPT table.

SQL> select * from dept;

DEPARTMENTID NAME                 GROUPNAME            MODIFIEDD
------------ -------------------- -------------------- ---------
           1 Sales                Marketing            20-JUN-10
           2 Networks             IT Infrastructure    20-JUN-10
           3 Help Desk            IT Support           20-JUN-10
           4 DBA Oracle           IT Infrastructure    20-JUN-10
           5 Unix System Admin    IT Infrastructure    20-JUN-10

Now that we have configured the initial data load, we can create the extract and replicat process to enable online change synchronization.

Create the Extract process on Source (Windows)

GGSCI (Dell-PC) 2> ADD EXTRACT myext, TRANLOG, BEGIN NOW
EXTRACT added.

GGSCI (Dell-PC) 3> ADD RMTTRAIL /home/oracle/goldengate/dirdat/my, EXTRACT myext

RMTTRAIL added.

GGSCI (Dell-PC) 6> edit params myext

EXTRACT myext
sourcedb sql2005
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
RMTHOST 192.168.10.94, MGRPORT 7809
RMTTRAIL /home/oracle/goldengate/dirdat/my
TABLE humanresources.dept;

Create the Replicat process on Target (Linux)

GGSCI (linux01.oncalldba.com) 1> ADD REPLICAT myrep, EXTTRAIL /home/oracle/goldengate/dirdat/my
REPLICAT added.

GGSCI (linux01.oncalldba.com) 4> edit params myrep

REPLICAT myrep
sourcedefs /home/oracle/goldengate/dirdef/dept.def
USERID ggs_owner, PASSWORD ggs_owner
MAP humanresources.dept, TARGET ggs_owner.dept ;

Start the Extract on Source

GGSCI (Dell-PC) 7> start extract myext

Sending START request to MANAGER (‘GGSMGR’) …
EXTRACT MYEXT starting

GGSCI (Dell-PC) 8> info extract myext
EXTRACT MYEXT Last Started 2010-06-20 16:17 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:05:49 ago)
VAM Read Checkpoint 2010-06-20 16:11:45.664000

Start the Replicat on Target

GGSCI (linux01.oncalldba.com) 5> start replicat myrep

Sending START request to MANAGER …
REPLICAT MYREP starting

GGSCI (linux01.oncalldba.com) 6> info replicat myrep

REPLICAT MYREP Last Started 2010-06-20 16:17 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint File /home/oracle/goldengate/dirdat/my000000
First Record RBA 825

On Source SQL Server insert two more rows

BEGIN TRAN
INSERT INTO HUMANRESOURCES.DEPT
(DEPARTMENTID,NAME,GROUPNAME,MODIFIEDDATE)
VALUES
(6,'Enterprise Monitoring','Operations','20-JUN-2010')
COMMIT tran

BEGIN TRAN
INSERT INTO HUMANRESOURCES.DEPT
(DEPARTMENTID,NAME,GROUPNAME,MODIFIEDDATE)
VALUES
(7,'PC Support','I.T Support','20-JUN-2010')
COMMIT tran

We now see that the Extract process on the Windows machine has extracted these two inserts as well

GGSCI (Dell-PC) 10> stats extract myext

Sending STATS request to EXTRACT MYEXT ...

Start of Statistics at 2010-06-20 16:22:08.

Output to /home/oracle/goldengate/dirdat/my:

Extracting from HUMANRESOURCES.DEPT to HUMANRESOURCES.DEPT:

*** Total statistics since 2010-06-20 16:19:50 ***
        Total inserts                                2.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             2.00

On Target Oracle 11g, we will see that these two records have been inserted in the table!

SQL> select count(*) from dept;

  COUNT(*)
----------
         7

SQL> select * from dept;

DEPARTMENTID NAME                 GROUPNAME            MODIFIEDD
------------ -------------------- -------------------- ---------
           1 Sales                Marketing            20-JUN-10
           2 Networks             IT Infrastructure    20-JUN-10
           3 Help Desk            IT Support           20-JUN-10
           4 DBA Oracle           IT Infrastructure    20-JUN-10
           5 Unix System Admin    IT Infrastructure    20-JUN-10
           6 Enterprise Monitorin Operations           20-JUN-10
             g

           7 PC Support           I.T Support          20-JUN-10

7 rows selected.

GoldenGate – using FILTER, COMPUTE and SQLEXEC commands


Some time back I had posted a note on column mapping and data transformation using GoldenGate.

Here are some more examples of column mapping and manipulating data using keywords like SQLPREDICATE, COMPUTE, FILTER and I will also introduce another powerful GoldenGate command called SQLEXEC – which we will discuss in detail at a later date.

SQLPREDICATE

Enables us to provide a WHERE clause to select rows for an initial load. This will be included in the Extract parameter file as part of the TABLE clause as shown below.

The GoldenGate reference guide has this to say ….

“SQLPREDICATE is a better selection method for initial loads than the WHERE or FILTER options. It is much faster because it affects the SQL statement directly and does not require GoldenGate to fetch all records before filtering them, like those other options do.”

TABLE ggs_owner.emp_details, SQLPREDICATE "where ename='Gavin'";

We can also perform the filtering on the Replicat side by selecting only a subset of the data which has been extracted, using the WHERE clause as part of the Replicat parameter file as shown below.

MAP ggs_owner.emp_details, TARGET ggs_owner.emp_details, WHERE (ename="Gavin");

FILTER

The FILTER clause offers us more functionality than the WHERE clause because you can employ any of GoldenGate’s column conversion functions to filter data, whereas the WHERE clause accepts basic WHERE operators.

For example we can use standard arithmetic operators like '+', '-', '/', '*' or comparison operators like '>', '<', '=' as well as GoldenGate functions like @COMPUTE, @DATE, @STRFIND, @STRNUM etc.

For example we can use the STRFIND function as part of the Extract parameter file to only extract records from the table that match a particular string value as shown below.

TABLE ggs_owner.emp_details, FILTER (@STRFIND (ename, "Gavin") > 0);

COMPUTE

In this example we will use the GoldenGate function @COMPUTE to derive the values for a column in a table based on values in some other column in the same table.

We will also see how a column mapping is performed on the target side where the target table EMP has an additional column COMM which is not present in the source table. We will derive the values for the COMM column by using an arithmetic calculation where COMM is the SAL value plus 10%.

Remember we have to first create a definitions file using the defgen command as the source and target tables differ in structure.

In this case we will generate the definitions file on the target GoldenGate environment as the target table has the additional column COMM which is not present in the source EMP table.

edit params defgen

DEFSFILE /home/oracle/goldengate/dirsql/emp.sql
USERID ggs_owner, PASSWORD ggs_owner
TABLE ggs_owner.emp;

We then run this on the Target goldengate location

[oracle@linux02 goldengate]$ ./defgen paramfile /home/oracle/goldengate/dirprm/defgen.prm

The replicat parameter file will have the following contents – note the combination of COLMAP and COMPUTE: one tells GoldenGate how to map the difference in the table structures and the other executes a computation on the SAL column to derive data for the COMM column. Remember, the USEDEFAULTS keyword means that all columns other than COMM are identically matched on both the source and target tables.

REPLICAT rep1
USERID ggs_owner, PASSWORD *********
SOURCEDEFS /home/oracle/goldengate/dirsql/emp.sql
MAP ggs_owner.emp_details, TARGET ggs_owner.emp_details,
COLMAP (usedefaults,
comm= @compute(sal +sal *.10));

After running the extract on the source, we will see that the EMP table has been populated on the target database and the column COMM has been derived as well from the SAL column.

SQL> select * from emp;

     EMPNO ENAME                    DEPTNO        SAL       COMM
---------- -------------------- ---------- ---------- ----------
      1001 Gavin                        10       1000       1100
      1002 Mark                         20       2000       2200
      1003 John                         30       3000       3300

SQLEXEC

SQLEXEC can be used as part of the Extract or Replicat process to make database calls which enables Goldengate to use the native SQL of the database to execute SQL queries, database commands as well as stored procedures and functions.

For example, as part of a large batch data load process, we would like to drop the indexes first and then rebuild them after the data load is complete. The replicat parameter file below shows how, using SQLEXEC, we can drop and rebuild an index with native SQL commands.

REPLICAT rep1
USERID ggs_owner, PASSWORD ggs_owner
ASSUMETARGETDEFS
sqlexec "drop index loc_ind";
MAP ggs_owner.emp_details, TARGET ggs_owner.emp_details, WHERE (location="Sydney");
sqlexec "create index loc_ind on emp_details(location)";
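
SQLEXEC can also be attached to an individual MAP or TABLE statement to call a stored procedure or function and use its output in a column mapping. The following is only an illustrative sketch: it assumes a procedure called LOOKUP_CITY exists in the GGS_OWNER schema with an IN parameter p_loc_id and an OUT parameter p_city_name, and that the target table has a CITY column to populate.

MAP ggs_owner.emp_details, TARGET ggs_owner.emp_details,
SQLEXEC (SPNAME lookup_city, PARAMS (p_loc_id = location)),
COLMAP (usedefaults, city = @GETVAL (lookup_city.p_city_name));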

GoldenGate – Online Change Synchronization with the initial data load


Recently a question was posed to me as to how we handle changes that are happening to the data while the initial data load extract process is in operation. Sometimes it may not be possible to have an application outage just to perform an initial data load, and in most cases we will need to perform the initial data load using GoldenGate while users are connected to the database and changes are being made to the database via the application.

In my earlier tutorials we have discussed how to perform an initial data load and how to perform subsequent change synchronization to keep the data in sync. In this case we will just combine both those procedures into one.

So broadly speaking the steps would be :

  • Create the initial data load extract process or group
  • Create the online change synchronization extract process or group
  • Create the initial replicat group
  • Create the online change synchronization replicat group
  • Start the online change synchronization extract process
  • Start the initial data load extract process
  • When the initial data load process has completed, start the online change replicat group
    Let us see a simple example where we will be performing an initial data load of the MYOBJECTS table (a copy of DBA_OBJECTS). While the initial data load extract process is running and loading the 70,000-odd rows, we will update the table from another session. We will then see how these changes are also replicated to the target.

    Source

    SQL> select count(*) from myobjects;
    
      COUNT(*)
    ----------
         71338
    

    Target

    SQL> select count(*) from myobjects;
    
      COUNT(*)
    ----------
             0
    

    Note – we will see that we have used the HANDLECOLLISIONS keyword in the replicat parameter file. If the source database will remain active during the initial load, include the HANDLECOLLISIONS parameter in the Replicat parameter file.

    HANDLECOLLISIONS accounts for collisions that occur during the overlap of time between the initial load and the ongoing change replication. It reconciles insert operations for which the row already exists, and it reconciles update and delete operations.

    We will turn this off after the initial data load has been completed.

    Create the Initial Data Load Extract and Replicat Processes

    Source

    GGSCI (linux01.oncalldba.com) 20>  ADD EXTRACT ext1, SOURCEISTABLE
    EXTRACT added.
    
    GGSCI (linux01.oncalldba.com) 21> edit params ext1
    
    EXTRACT ext1
    USERID ggs_owner, PASSWORD ggs_owner
    RMTHOST 192.168.10.194, MGRPORT 7809
    RMTTASK replicat, GROUP rep1
    TABLE ggs_owner.myobjects;
    

    Target

    GGSCI (linux02.oncalldba.com) 7> add replicat rep1, specialrun
    REPLICAT added.
    
    GGSCI (linux02.oncalldba.com) 8> edit params rep1
    
    REPLICAT rep1
    HANDLECOLLISIONS
    USERID ggs_owner, PASSWORD ggs_owner
    ASSUMETARGETDEFS
    MAP ggs_owner.myobjects, TARGET ggs_owner.myobjects;
    

    Create the Online Change Synchronization Extract and Replicat Processes

    Source

    GGSCI (linux01.oncalldba.com) 8> ADD EXTRACT ext2, TRANLOG, BEGIN NOW
    EXTRACT added.
    
    GGSCI (linux01.oncalldba.com) 13> ADD RMTTRAIL /home/oracle/goldengate/dirdat/zz, EXTRACT ext2
    RMTTRAIL added.
    
    GGSCI (linux01.oncalldba.com) 9> edit params ext2
    
    EXTRACT ext2
    USERID ggs_owner, PASSWORD  ggs_owner
    RMTHOST 192.168.10.194, MGRPORT 7809
    RMTTRAIL /home/oracle/goldengate/dirdat/zz
    TABLE ggs_owner.myobjects;
    

    Target

    GGSCI (linux02.oncalldba.com) 2> add replicat rep2, exttrail  /home/oracle/goldengate/dirdat/zz
    REPLICAT added.
    
    GGSCI (linux02.oncalldba.com) 4>  edit params rep2
    
    REPLICAT rep2
    HANDLECOLLISIONS
    ASSUMETARGETDEFS
    USERID ggs_owner, PASSWORD ggs_owner
    MAP ggs_owner.myobjects, TARGET ggs_owner.myobjects ;
    

    Start the Online Change Extract EXT2

    GGSCI (linux01.oncalldba.com) 14> start extract ext2
    
    Sending START request to MANAGER ...
    EXTRACT EXT2 starting
    
    GGSCI (linux01.oncalldba.com) 15> info extract ext2
    
    EXTRACT    EXT2      Last Started 2010-07-13 14:15   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:03:50 ago)
    Log Read Checkpoint  Oracle Redo Logs
                         2010-07-13 14:11:57  Seqno 149, RBA 19817488
    

    Start the Initial Load Extract

    GGSCI (linux01.oncalldba.com) 39> start extract ext1
    
    Sending START request to MANAGER ...
    EXTRACT EXT1 starting
    
    GGSCI (linux01.oncalldba.com) 41> info extract ext1
    
    EXTRACT    EXT1      Last Started 2010-07-13 14:34   Status RUNNING
    Checkpoint Lag       Not Available
    Log Read Checkpoint  Table GGS_OWNER.MYOBJECTS
                         2010-07-13 14:34:42  Record 2548
    
    

    While the Initial Load Extract is in progress make some changes in the database

    SQL> update myobjects set owner='GAVIN' where owner='SYS';
    
    30001 rows updated.
    
    SQL> commit;
    
    Commit complete.
    

    When the initial extract process has loaded all the rows, it will stop and so will the initial replicat process

    GGSCI (linux01.oncalldba.com) 39> info extract ext1
    
    EXTRACT    EXT1      Last Started 2010-07-13 15:01   Status RUNNING
    Checkpoint Lag       Not Available
    Log Read Checkpoint  Table GGS_OWNER.MYOBJECTS
                         2010-07-13 15:04:22  Record 61964
    Task                 SOURCEISTABLE
    
    
    GGSCI (linux01.oncalldba.com) 40>  info extract ext1
    
    EXTRACT    EXT1      Last Started 2010-07-13 15:01   Status STOPPED
    Checkpoint Lag       Not Available
    Log Read Checkpoint  Table GGS_OWNER.MYOBJECTS
                         2010-07-13 15:04:44  Record 71307
    Task                 SOURCEISTABLE
    

    On Target

    GGSCI (linux02.oncalldba.com) 15> send replicat rep1 getlag
    
    ERROR: REPLICAT REP1 not currently running.
    

    We will now start the online change replicat process. This will apply all the changes which have occurred during the initial data load. Note that once the replicat process has finished applying all the changes that are stored in the trail files (which have been written to by the extract process running on the source), we will see the "At EOF, no more records to process" message.

    GGSCI (linux02.oncalldba.com) 6> start replicat rep2
    
    Sending START request to MANAGER ...
    REPLICAT REP2 starting
    
    GGSCI (linux02.oncalldba.com) 7> info replicat rep2
    
    REPLICAT   REP2      Last Started 2010-07-13 15:06   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:05:47 ago)
    Log Read Checkpoint  File /home/oracle/goldengate/dirdat/zz000000
                         First Record  RBA 0
    
    GGSCI (linux02.oncalldba.com) 8> send replicat rep2 getlag
    
    Sending GETLAG request to REPLICAT REP2 ...
    Last record lag: 222 seconds.
    
    GGSCI (linux02.oncalldba.com) 9> send replicat rep2 getlag
    
    Sending GETLAG request to REPLICAT REP2 ...
    Last record lag: 233 seconds.
    
    GGSCI (linux02.oncalldba.com) 10> send replicat rep2 getlag
    
    Sending GETLAG request to REPLICAT REP2 ...
    Last record lag: 238 seconds.
    At EOF, no more records to process.
    

    Let us now check if both the initial data load and the updates have been propagated and applied on the target side.

    SQL> select count(*) from myobjects;
    
      COUNT(*)
    ----------
         71307
    
    SQL> select count(*) from myobjects where owner='GAVIN';
    
      COUNT(*)
    ----------
         30000
    

    Now remove the HANDLECOLLISIONS clause …

    GGSCI (linux02.oncalldba.com) 13> send replicat rep2,nohandlecollisions
    
    Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ...
    REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries
    

    Also remove the HANDLECOLLISIONS line from the replicat parameter file via the EDIT PARAMS rep2 command, so that the file looks as shown below.
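
    This is the same rep2 parameter file we created earlier, just without the HANDLECOLLISIONS line:

    REPLICAT rep2
    ASSUMETARGETDEFS
    USERID ggs_owner, PASSWORD ggs_owner
    MAP ggs_owner.myobjects, TARGET ggs_owner.myobjects ;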

    Oracle GoldenGate – how to connect to a particular database if there are multiple databases on the source or target server


    I have been asked quite often in this forum how we connect GoldenGate to a particular database if there are a number of databases running on the source server or on the target server. Or there could be a case where we have installed GoldenGate for Oracle 10g, we have both 10g as well as 11g Oracle Homes on the same machine, and we want to connect to the Oracle 10g environment in particular.

    A very valid question, and one with a very simple answer.

    1) Either set the right environment using .oraenv
    or
    2) Specify the right TNS alias in the Manager, Extract or Replicat parameter file where we have used the USERID keyword
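
    With option 1, we would simply set the environment for the database we want GoldenGate to connect to before launching GGSCI. For instance (the SID and Oracle Home shown here are purely illustrative):

    $ export ORACLE_SID=db10g
    $ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    $ export PATH=$ORACLE_HOME/bin:$PATH
    $ ggsci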

    To illustrate what happens when the environment points to the wrong database, I deliberately set the ORACLE_SID variable to a nonexistent value. The manager process will not start, since it tries to connect with the username and password used in the manager parameter file to the LOCAL database for which the environment has now been set up, and since the ORACLE_SID is wrong there is no database running on the server with that SID.

    $ export ORACLE_SID=xyz

    GGSCI (vixen) 2> start manager

    Manager started.

    GGSCI (vixen) 3> status manager

    Manager is DOWN!

    This is what we will see in the GoldenGate logs:

    2010-08-19 11:04:37 GGS ERROR 182 OCI Error beginning session (status = 1034-ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory).

    2010-08-19 11:04:37 GGS ERROR 190 PROCESS ABENDING.

    To use a TNS alias, make sure that we are able to do a tnsping as well as connect via SQL*PLUS using that alias before we launch GGSCI.
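
    For example, with a tnsnames.ora entry along the following lines (the host, port and service name here are purely illustrative; only the alias levengr2 is the one used below), we would first verify connectivity outside of GoldenGate:

    LEVENGR2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = levengr2))
      )

    $ tnsping levengr2
    $ sqlplus ggs_owner/ggs_owner@levengr2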

    In the relevant parameter file (in this example an extract parameter file) we just add the TNS alias to the USERID as shown below:

    USERID ggs_owner@levengr2, PASSWORD ggs_owner

    If the TNS connection is not defined properly, we can expect to see an error like this in the log file:

    2010-08-19 11:09:46 GGS ERROR 182 OCI Error during OCIServerAttach (status = 12154-ORA-12154: TNS:could not resolve the connect identifier specified)
    .
    2010-08-19 11:09:46 GGS ERROR 190 PROCESS ABENDING
    .

    We can also use the same TNS alias method when we use DBLOGIN keyword to establish a database connection via GoldenGate

    GGSCI (devastator) 2> dblogin userid ggs_owner@levengr2, password ggs_owner
    Successfully logged into database.

    Performing a GoldenGate Upgrade to version 11.2


    In one of my earlier posts I had described how to handle GoldenGate version differences between the source and target environments. In that case my source was version 11.2 and the target was version 11.1.  We had to use the FORMAT RELEASE parameter to handle such a version difference.

     

    http://gavinsoorma.com/2012/06/using-the-format-release-parameter-to-handle-goldengate-version-differences/

     

    Let us now look at an example of how to upgrade the existing  target 11.1 environment to GoldenGate version 11.2.1.0.

    Note – in my case, the source was already running on version 11.2 and we only had to upgrade the target from 11.1 to 11.2

    But the same process should apply if we are upgrading both the source as well as target GoldenGate environments.

     

    Take a backup of existing GoldenGate 11.1 software directory

    [oracle@pdemora062rhv app]$ cp -fR goldengate goldengate_11.1

     Create a new directory for GG 11.2 software

     [oracle@pdemora062rhv app]$ mkdir goldengate_11.2

     

     Check the extract status

     

    Ensure that all our extract processes on the source have been completed.

    In GGSCI on the source system, issue the SEND EXTRACT command with the LOGEND option until it shows there is no more redo data to capture.

    In this case we see that some transactions are still being processed:

    GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 9> send extract testext logend

     Sending LOGEND request to EXTRACT TESTEXT …

    We run the same command again and we now see that the extract process has finished processing records.

    GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 11> send extract testext logend

     Sending LOGEND request to EXTRACT TESTEXT …

    YES.

     

    A good practice is to make a note of the redo log file currently being read from. You may need archive logs from this point if you receive any error on extract startup after upgrade.

     

    GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 10> send extract testext showtrans

     Sending SHOWTRANS request to EXTRACT TESTEXT …

    No transactions found

     Oldest redo log file necessary to restart Extract is:

     Redo Log Sequence Number 235, RBA 10052624.

     

    On the target site, check the Replicat status.

    Ensure that all the Replicat groups have completed processing. In the case below, we see that one of the replicat processes is still active and not yet complete.

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 4> send replicat testrep status

     Sending STATUS request to REPLICAT TESTREP …

      Current status: Processing data

      Sequence #: 1

      RBA: 4722168

      39491 records in current transaction

     

    Now we see that the process has completed …

     

     GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 5>  send replicat testrep status

     Sending STATUS request to REPLICAT TESTREP …

      Current status: At EOF

      Sequence #: 1

      RBA: 6927789

      0 records in current transaction

     

    Stop the extract and manager processes on source

     

     GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 12> stop extract testext

     Sending STOP request to EXTRACT TESTEXT …

    Request processed.

    GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 13> stop manager

    Manager process is required by other GGS processes.

    Are you sure you want to stop it (y/n)? y

     Sending STOP request to MANAGER …

    Request processed.

    Manager stopped.

     

     Stop the replicat and manager processes on target

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 6> stop replicat testrep

     Sending STOP request to REPLICAT TESTREP …

    Request processed.

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 7> stop manager

    Manager process is required by other GGS processes.

    Are you sure you want to stop it (y/n)? y

     Sending STOP request to MANAGER …

    Request processed.

    Manager stopped.

     

    Note – if upgrading the source and if we have configured DDL support, we need to disable the DDL trigger by running the ddl_disable script from the Oracle GoldenGate directory on the source system.

     

    SQL> conn sys as sysdba

    Enter password:

    Connected.

    SQL> @ddl_disable

     Trigger altered.

     

    Now unzip the 11.2 GoldenGate software.

     

    [oracle@pdemora062rhv goldengate]$ cd ../goldengate_11.2

    [oracle@pdemora062rhv goldengate_11.2]$ ls

    V32400-01.zip

    [oracle@pdemora062rhv goldengate_11.2]$ unzip *.zip

    Archive:  V32400-01.zip

      inflating: fbo_ggs_Linux_x64_ora10g_64bit.tar

       inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf

      inflating: Oracle GoldenGate 11.2.1.0.1 README.txt

      inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

     

    [oracle@pdemora062rhv goldengate_11.2]$ tar -xvf fbo_ggs_Linux_x64_ora10g_64bit.tar

     

    Now copy the contents of the unzipped 11.2 directory to the existing 11.1 GoldenGate software location.

    Note – we are not touching our other existing 11.1 GG sub-directories like dirprm and dirdat. They still have our 11.1 version files.

     

    [oracle@pdemora062rhv goldengate_11.2]$ cp -fR * /u01/app/goldengate

     

    Let us now test the GoldenGate software version. Note that the warning pertains to the ENABLEMONITORAGENT parameter, which we had enabled for GoldenGate Monitor and which is now deprecated in version 11.2. We can sort that out later.

    So we can see that we are now using the version 11.2.1.0 GoldenGate software binaries.

     

    [oracle@pdemora062rhv goldengate]$ ./ggsci

     2012-06-23 06:41:17  WARNING OGG-00254  ENABLEMONITORAGENT is a deprecated parameter.

     Oracle GoldenGate Command Interpreter for Oracle

    Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO

    Linux, x64, 64bit (optimized), Oracle 10g on Apr 23 2012 07:30:46
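
    When we do get around to it, the deprecated parameter can simply be removed from (or commented out in) the GLOBALS file in the GoldenGate home. This is only a sketch and assumes ENABLEMONITORAGENT had been added to the GLOBALS file for GoldenGate Monitor; in 11.2 monitoring is instead controlled by the ENABLEMONITORING parameter, so check the 11.2 reference guide before adding that:

    [oracle@pdemora062rhv goldengate]$ vi GLOBALS

    -- ENABLEMONITORAGENT is deprecated in 11.2, so remove or comment it out
    ENABLEMONITORING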

     

    If we are upgrading the target GoldenGate environment, which is the case here, we also have to upgrade the checkpoint table, as the 11.2 table structure is slightly different from the 11.1 structure.

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 1> dblogin userid ggs_owner, password ggs_owner

    Successfully logged into database.

     GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 2> upgrade checkpointtable ggs_owner.chkptab

     Successfully upgraded checkpoint table ggs_owner.chkptab.

     

    Note:

    This portion applies if we are upgrading the source GoldenGate environment and would like to upgrade the DDL support to the 11.2 version as well.

     

    SQL> conn sys as sysdba
    Enter password:
    Connected.
    
    SQL> @ddl_disable
    
    Trigger altered.
    SQL> @ddl_remove
    
    DDL replication removal script.
    WARNING: this script removes all DDL replication objects and data.
    
    You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
    NOTE: The schema must be created prior to running this script.
    
    Enter Oracle GoldenGate schema name:GGS_OWNER
    Working, please wait ...
    Spooling to file ddl_remove_spool.txt
    
    Script complete.
    
    SQL> @marker_remove
    
    Marker removal script.
    WARNING: this script removes all marker objects and data.
    
    You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
    NOTE: The schema must be created prior to running this script.
    
    Enter Oracle GoldenGate schema name:GGS_OWNER
    
    PL/SQL procedure successfully completed.
    Sequence dropped.
    Table dropped.
    Script complete.
    SQL> @marker_setup
    
    Marker setup script
    
    You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
    NOTE: The schema must be created prior to running this script.
    NOTE: Stop all DDL replication before starting this installation.
    
    Enter Oracle GoldenGate schema name:GGS_OWNER
    Marker setup table script complete, running verification script...
    Please enter the name of a schema for the GoldenGate database objects:
    Setting schema name to GGS_OWNER
    
    MARKER TABLE
    -------------------------------
    OK
    
    MARKER SEQUENCE
    -------------------------------
    OK
    
    Script complete.
    SQL>
    SQL> @ddl_setup
    
    Oracle GoldenGate DDL Replication setup script
    
    Verifying that current user has privileges to install DDL Replication...
    
    You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
    NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
    NOTE: The schema must be created prior to running this script.
    NOTE: Stop all DDL replication before starting this installation.
    
    Enter Oracle GoldenGate schema name:GGS_OWNER
    
    Working, please wait ...
    Spooling to file ddl_setup_spool.txt
    
    Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...
    
    Check complete.
    
    Using GGS_OWNER as a Oracle GoldenGate schema name.
    
    Working, please wait ...
    
    RECYCLEBIN must be empty.
    This installation will purge RECYCLEBIN for all users.
    To proceed, enter yes. To stop installation, enter no.
    
    Enter yes or no:yes
    DDL replication setup script complete, running verification script...
    Please enter the name of a schema for the GoldenGate database objects:
    Setting schema name to GGS_OWNER
    
    CLEAR_TRACE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    CREATE_TRACE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    TRACE_PUT_LINE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    INITIAL_SETUP STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDLVERSIONSPECIFIC PACKAGE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDLREPLICATION PACKAGE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDLREPLICATION PACKAGE BODY STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDL IGNORE TABLE
    -----------------------------------
    OK
    
    DDL IGNORE LOG TABLE
    -----------------------------------
    OK
    
    DDLAUX  PACKAGE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDLAUX PACKAGE BODY STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    SYS.DDLCTXINFO  PACKAGE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    SYS.DDLCTXINFO  PACKAGE BODY STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDL HISTORY TABLE
    -----------------------------------
    OK
    
    DDL HISTORY TABLE(1)
    -----------------------------------
    OK
    
    DDL DUMP TABLES
    -----------------------------------
    OK
    
    DDL DUMP COLUMNS
    -----------------------------------
    OK
    
    DDL DUMP LOG GROUPS
    -----------------------------------
    OK
    
    DDL DUMP PARTITIONS
    -----------------------------------
    OK
    
    DDL DUMP PRIMARY KEYS
    -----------------------------------
    OK
    
    DDL SEQUENCE
    -----------------------------------
    OK
    
    GGS_TEMP_COLS
    -----------------------------------
    OK
    
    GGS_TEMP_UK
    -----------------------------------
    OK
    
    DDL TRIGGER CODE STATUS:
    
    Line/pos                                 Error
    ---------------------------------------- -----------------------------------------------------------------
    No errors                                No errors
    
    DDL TRIGGER INSTALL STATUS
    -----------------------------------
    OK
    
    DDL TRIGGER RUNNING STATUS
    ------------------------------------------------------------------------------------------------------------------------
    ENABLED
    
    STAYMETADATA IN TRIGGER
    ------------------------------------------------------------------------------------------------------------------------
    OFF
    
    DDL TRIGGER SQL TRACING
    ------------------------------------------------------------------------------------------------------------------------
    0
    
    DDL TRIGGER TRACE LEVEL
    ------------------------------------------------------------------------------------------------------------------------
    0
    
    LOCATION OF DDL TRACE FILE
    ------------------------------------------------------------------------------------------------------------------------
    /u01/app/oracle/oracle/product/10.2.0/db_1/admin/db10g/udump/ggs_ddl_trace.log
    
    Analyzing installation status...
    STATUS OF DDL REPLICATION
    ------------------------------------------------------------------------------------------------------------------------
    SUCCESSFUL installation of DDL Replication software components
    
    Script complete.
    
    SQL> @role_setup
    
    GGS Role setup script
    
    This script will drop and recreate the role GGS_GGSUSER_ROLE
    To use a different role name, quit this script and then edit the params.sql script to change the gg_role parameter to the preferred name. (Do not run the script.)
    
    You will be prompted for the name of a schema for the GoldenGate database objects.
    NOTE: The schema must be created prior to running this script.
    NOTE: Stop all DDL replication before starting this installation.
    
    Enter GoldenGate schema name:GGS_OWNER
    Wrote file role_setup_set.txt
    
    PL/SQL procedure successfully completed.
    Role setup script complete
    
    Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:
    
    GRANT GGS_GGSUSER_ROLE TO <loggedUser>
    
    where <loggedUser> is the user assigned to the GoldenGate processes.
    SQL> GRANT GGS_GGSUSER_ROLE TO ggs_owner;
    
    Grant succeeded.
    
    SQL> @ddl_enable
    
    Trigger altered.

     

    We now start the Manager on both source as well as target.

    In my example, the earlier configuration was source 11.2 and target 11.1.

    So I had to use the FORMAT RELEASE 11.1 parameter in my extract parameter file to handle the version difference.

    But now both the source as well as the target are on GoldenGate version 11.2, so I can remove the FORMAT RELEASE option from my extract parameter file.
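
    In other words, the change to the extract parameter file is along these lines (the trail name gg is the one used in this environment; the first line assumes FORMAT RELEASE had been specified on the RMTTRAIL parameter as described in the earlier post):

    -- before the upgrade: write the trail in 11.1 format for the 11.1 Replicat
    RMTTRAIL /u01/app/goldengate/dirdat/gg, FORMAT RELEASE 11.1

    -- after the upgrade: both ends are on 11.2, so the default trail format is used
    RMTTRAIL /u01/app/goldengate/dirdat/gg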

    I also want the extract to start writing to a new trail file in the 11.2 format, since the Replicat is now also on version 11.2. I use ETROLLOVER to force the extract onto a new trail file sequence and then start the extract process.

     

    GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 21> alter extract testext etrollover

     

    We also have to instruct the Replicat process that it needs to start reading from a new 11.2 version trail file via the ALTER REPLICAT command. After that we can start the replicat process.

     

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 8> alter replicat testrep extseqno 3

    REPLICAT altered.

     

     GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 9>  start replicat testrep

     Sending START request to MANAGER …

    REPLICAT TESTREP starting

     

     

    GoldenGate IGNOREDELETES, IGNOREUPDATES and using the LOGDUMP utility


    Some time back I was asked how we use GoldenGate in a situation where, on the target database, we only want to capture records inserted in the source database and ignore any updates being made to existing rows in the source database.

    For this we can use the IGNOREUPDATES parameter, which is valid in both the Extract and Replicat parameter files, to tell GoldenGate to selectively ignore update operations. The parameter is table specific and applies to all tables named in the subsequent TABLE or MAP statements until the GETUPDATES parameter is used. Note that GETUPDATES is the default.

    In this example we will also see how delete operations on source database are ignored using the IGNOREDELETES parameter.
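
    To make the scoping of these parameters clearer, here is a minimal sketch of a Replicat parameter file (SH.OTHERTAB is just a made-up table for illustration) where updates and deletes are ignored for one table but processed normally for another:

    REPLICAT testrep
    ASSUMETARGETDEFS
    USERID ggs_owner, PASSWORD ggs_owner
    -- ignore updates and deletes only for the MAP statements that follow
    IGNOREUPDATES
    IGNOREDELETES
    MAP SH.MYTAB, TARGET SH.MYTAB;
    -- revert to the default behaviour for the remaining tables
    GETUPDATES
    GETDELETES
    MAP SH.OTHERTAB, TARGET SH.OTHERTAB;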

    Let us create a simple table on both source as well as target database with the following structure:

     

    SQL> create table mytab
      2  (id number, comments varchar2(20));

    Table created.

    SQL> alter table mytab add constraint pk_mytab  primary key (id);

    Table altered.

     

    We then create the extract process Testext on source and replicat process Testrep on target.

    This is our Extract parameter file:

    extract testext
    userid ggs_owner, password ggs_owner
    rmthost 10.32.20.62, mgrport 7809
    rmttrail /u01/app/goldengate/dirdat/gg
    table sh.mytab;

     

    This is our Replicat parameter file:

    REPLICAT testrep
    ASSUMETARGETDEFS
    USERID ggs_owner,PASSWORD ggs_owner
    IGNOREDELETES
    IGNOREUPDATES
    MAP SH.MYTAB, TARGET SH.MYTAB;

     

    Let us now test the same by inserting a row into the source table

     

    SQL> insert into mytab
      2   values
      3  (1,'INSERTED row');

    1 row created.

    SQL> commit;

     

    Then check the target table for the inserted row.

     

    SQL> select * from mytab;

            ID COMMENTS
    ---------- --------------------
             1 INSERTED row

     

    We now update the existing row on the source database.

     

    SQL> update mytab
      2  set comments='UPDATED row'
      3  where id=1;

    1 row updated.

    SQL> commit;

    SQL>  select * from mytab;

            ID COMMENTS
    ---------- --------------------
             1 UPDATED row

     

    On the target, we see that the update to the row has not been applied.

     

    SQL> select * from mytab;

            ID COMMENTS
    ---------- --------------------
             1 INSERTED row

     

    Let us now delete the existing record on the source database.

     

    SQL> delete mytab;

    1 row deleted.

    SQL> commit;

    Commit complete.

     

    Check the target. We see that the row has not been deleted from the target database.

     

    SQL> select * from mytab;

            ID COMMENTS
    ---------- --------------------
             1 INSERTED row

     

    On the source GoldenGate environment let us examine the statistics for the Extract process. We see that 3 operations have been captured: one insert, one update and one delete.

     

     GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 66> stats extract testext

    Sending STATS request to EXTRACT TESTEXT …

    Start of Statistics at 2012-07-21 04:51:36.

    Output to /u01/app/goldengate/dirdat/gg:

    Extracting from SH.MYTAB to SH.MYTAB:

    *** Total statistics since 2012-07-21 04:48:33 ***
            Total inserts                                      1.00
            Total updates                                      1.00
            Total deletes                                      1.00
            Total discards                                     0.00
            Total operations                                   3.00

     

    On the target however we see that only one single Insert operation has taken place.

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 27> stats replicat testrep

    Sending STATS request to REPLICAT TESTREP …

    Start of Statistics at 2012-07-21 04:52:35.

    Replicating from SH.MYTAB to SH.MYTAB:

    *** Total statistics since 2012-07-21 04:48:29 ***
            Total inserts                                      1.00
            Total updates                                      0.00
            Total deletes                                      0.00
            Total discards                                     0.00
            Total operations                                   1.00

     

    Ok. Now the source table has no rows while the target table has one row.

    What happens when we insert two rows into the source table?

     

    SQL>  insert into mytab
      2   values
      3  (1,'INSERTED row');

    1 row created.

    SQL>  insert into mytab
      2   values
      3   (2,'INSERTED row');

    1 row created.

    SQL> commit;

    Commit complete.

     

    Since the row with ID=1 already existed in the target database (because it was not deleted when the delete happened on the source), the subsequent insert fails and we see this error in the replicat log file.

     

    2012-07-21 04:54:17  WARNING OGG-00869  OCI Error ORA-00001: unique constraint (SH.PK_MYTAB) violated (status = 1). INSERT INTO "SH"."MYTAB" ("ID","COMMENTS") VALUES (:a0,:a1).

    2012-07-21 04:54:17  WARNING OGG-01004  Aborted grouped transaction on 'SH.MYTAB', Database error 1 (OCI Error ORA-00001: unique constraint (SH.PK_MYTAB) violated (status = 1).

     

    We need to tell the Replicat process to skip the insert for the row which already exists, and for this purpose we use the GoldenGate Logdump utility to examine the contents of the trail file.

    We then find the RBA (Relative Byte Address) of the second insert (ID=2) and pass that RBA to the ALTER REPLICAT command, so that the Replicat starts processing not from the beginning of the trail but from that point in the trail file.

    We navigate through the trail file using the 'n' (next) command until we find the record where ID=2.

    We can see the first INSERT, then the UPDATE and then the DELETE operation. We then see the second INSERT which we are interested in.

     

    [oracle@pdemora062rhv goldengate]$ logdump

    Oracle GoldenGate Log File Dump Utility for Oracle
    Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230

    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

     

    Logdump 41 >open /u01/app/goldengate/dirdat/gg000000
    Current LogTrail is /u01/app/goldengate/dirdat/gg000000
    Logdump 42 >ghdr on
    Logdump 43 >detail on
    Logdump 44 >n

    2012/07/21 04:45:13.522.696 FileHeader           Len  1087 RBA 0
    Name: *FileHeader*
     3000 01cd 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0…0…GG..TL..1…
     0003 3200 0004 2000 0000 3300 0008 02f1 eb7c 6dc9 | ..2… …3……|m.
     9208 3400 003f 003d 7572 693a 7064 656d 6f72 6130 | ..4..?.=uri:pdemora0
     3631 7268 763a 6173 6764 656d 6f3a 6173 6767 726f | 61rhv:asgdemo:asggro
     7570 3a63 6f6d 3a61 753a 3a75 3031 3a61 7070 3a67 | up:com:au::u01:app:g
     6f6c 6465 6e67 6174 6536 0000 2500 232f 7530 312f | oldengate6..%.#/u01/
     6170 702f 676f 6c64 656e 6761 7465 2f64 6972 6461 | app/goldengate/dirda

    Logdump 45 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :    29  (x001d)   IO Time    : 2012/07/21 04:48:15.066.791
    IOType     :     5  (x05)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        401       AuditPos   : 32627632
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2012/07/21 04:48:15.066.791 Insert               Len    29 RBA 1095
    Name: SH.MYTAB
    After  Image:                                             Partition 4   G  s
     0000 0005 0000 0001 3100 0100 1000 0000 0c49 4e53 | ……..1……..INS
     4552 5445 4420 726f 77                            | ERTED row
    Column     0 (x0000), Len     5 (x0005)
    Column     1 (x0001), Len    16 (x0010)

    Logdump 46 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :    28  (x001c)   IO Time    : 2012/07/21 04:50:37.094.598
    IOType     :    15  (x0f)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        401       AuditPos   : 33013776
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2012/07/21 04:50:37.094.598 FieldComp            Len    28 RBA 1235
    Name: SH.MYTAB
    After  Image:                                             Partition 4   G  s
     0000 0005 0000 0001 3100 0100 0f00 0000 0b55 5044 | ……..1……..UPD
     4154 4544 2072 6f77                               | ATED row
    Column     0 (x0000), Len     5 (x0005)
    Column     1 (x0001), Len    15 (x000f)

    Logdump 47 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)
    RecLength  :     9  (x0009)   IO Time    : 2012/07/21 04:51:00.119.007
    IOType     :     3  (x03)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        401       AuditPos   : 33041936
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2012/07/21 04:51:00.119.007 Delete               Len     9 RBA 1374
    Name: SH.MYTAB
    Before Image:                                             Partition 4   G  s
     0000 0005 0000 0001 31                            | ……..1
    Column     0 (x0000), Len     5 (x0005)

    Logdump 48 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :    29  (x001d)   IO Time    : 2012/07/21 04:54:12.124.542
    IOType     :     5  (x05)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        401       AuditPos   : 33273872
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2012/07/21 04:54:12.124.542 Insert               Len    29 RBA 1494
    Name: SH.MYTAB
    After  Image:                                             Partition 4   G  s
     0000 0005 0000 0001 3100 0100 1000 0000 0c49 4e53 | ……..1……..INS
     4552 5445 4420 726f 77                            | ERTED row
    Column     0 (x0000), Len     5 (x0005)
    Column     1 (x0001), Len    16 (x0010)

    Logdump 49 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :    29  (x001d)   IO Time    : 2012/07/21 05:02:04.056.719
    IOType     :     5  (x05)     OrigNode   :   255  (xff)
    TransInd   :     .  (x03)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        401       AuditPos   : 36183568
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2012/07/21 05:02:04.056.719 Insert               Len    29 RBA 1634
    Name: SH.MYTAB
    After  Image:                                             Partition 4   G  s
     0000 0005 0000 0001 3200 0100 1000 0000 0c49 4e53 | ……..2……..INS
     4552 5445 4420 726f 77                            | ERTED row
    Column     0 (x0000), Len     5 (x0005)
    Column     1 (x0001), Len    16 (x0010)

     

    The Replicat process, which has abended, is now altered to start at a specific RBA and then restarted.

    We use the ALTER REPLICAT testrep EXTRBA 1634 command to reposition the replicat process to start reading records from a specific position in the trail file.
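
    A minimal GGSCI sequence for this (the RBA value of course comes from the Logdump output above) would be along these lines:

    GGSCI> alter replicat testrep extrba 1634
    GGSCI> start replicat testrep
    GGSCI> info replicat testrep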

    We now see that the replicat has started running and has processed the second insert statement.
    SQL> select * from mytab;

            ID COMMENTS
    ---------- --------------------
             1 INSERTED row
             2 INSERTED row

     

    Statistics now show 2 insert operations – note that no deletes or updates were processed.

     

    GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 2> stats replicat testrep

    Sending STATS request to REPLICAT TESTREP …

    Start of Statistics at 2012-07-21 06:52:43.

    Replicating from SH.MYTAB to SH.MYTAB:

    *** Total statistics since 2012-07-21 05:19:12 ***
            Total inserts                                      2.00
            Total updates                                      0.00
            Total deletes                                      0.00
            Total discards                                     0.00
            Total operations                                   2.00


    GoldenGate Integrated Capture Mode


    One of the new features in GoldenGate 11g is the Integrated Capture mode.

    In the earlier classic capture mode, the Oracle GoldenGate Extract process captures data changes from the Oracle redo or archive log files on the source system.

    In integrated capture mode, the Oracle GoldenGate Extract process interacts directly with the database log mining server, which mines (reads) the database redo log files and captures the changes in the form of Logical Change Records (LCRs); these are then written to the GoldenGate trail files.

    The basic difference is that in the Integrated Capture mode, the extract process does not directly read the redo log files. That part of the job is done by the logmining server residing in the Oracle database.

    Integrated capture supports more data types as well as compressed data, and because it is fully integrated with the database there are no additional setup steps required when configuring GoldenGate with features such as RAC, ASM and TDE (Transparent Data Encryption).

    In the integrated capture mode there are two deployment options:

    a) Local deployment
    b) Downstream deployment

    Basically it depends on where the log mining server is deployed.

    In the local deployment, the log mining server resides in the source database itself.

    In downstream deployment, the source and log mining databases are different databases. The source database uses redo transport to ship the archived redo log files to the ‘downstream’ database where the log mining server is residing. The log mining server extracts changes in the form of logical change records and these are then processed by GoldenGate and written to the trail files.

    So in the downstream integrated capture mode, we offload any overhead associated with the capture or transformation from the source database to the downstream database which may be used only for GoldenGate processing and not for any production user connections.

    In this example we will look at the setup of integrated capture local deployment and in the next post we will look at a downstream integrated capture model.

    Database setup for Integrated Capture

    Keep in mind that for full integrated capture support of all Oracle data and storage types, the COMPATIBLE setting of the source database must be at least 11.2.0.3.
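
    A quick way to check the current setting is a query like the one below (any suitably privileged account will do):

    SQL> show parameter compatible
    SQL> select value from v$parameter where name = 'compatible';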

    Also, we need to apply the database patch 14551959 using opatch; read MOS note 1411356.1 for the full details.

    After applying patch 14551959 with opatch (the database and listener need to be down while the patch is applied), we also need to perform some post-install steps as described in the README.txt.

    We need to start the database and run the postinstall.sql located in the patch directory.

    This is to be followed by granting certain privileges to the GoldenGate database user account via the package
    DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE as shown below. In this case the database user is ‘ggate’.

    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE( -
    grantee => 'ggate', -
    privilege_type => 'capture', grant_select_privileges=> true, do_grants => TRUE);

    If the patch is not applied or the privileges not granted, we can expect to see an error like the one shown below:

    2013-01-24 17:30:24 ERROR OGG-02021 This database lacks the required libraries to support integrated capture.

    What’s different to the classic capture setup?

    When we add the extract we have to use the INTEGRATED CAPTURE clause in the ADD EXTRACT command as shown below

    ADD EXTRACT intext INTEGRATED TRANLOG, BEGIN NOW

    In the extract parameter file we have to use the TRANLOGOPTIONS INTEGRATEDPARAMS parameter as shown below

    TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 200, parallelism 1)

    The max_sga_size is specified in MB and this memory is taken from the streams_pool_size portion of the SGA. If streams_pool_size is greater than 1 GB, max_sga_size defaults to 1 GB; otherwise it defaults to 75% of streams_pool_size.

    To test this I set the max_sga_size to 200 MB while the streams_pool_size was also 200 MB.

    This error was noticed and the extract abended.

    2013-01-24 17:59:42 ERROR OGG-02050 Not enough database memory to honor requested MAX_SGA_SIZE of 200.
    2013-01-24 17:59:42 ERROR OGG-01668 PROCESS ABENDING.

    We had to set the max_sga_size in this case to 150 and then the extract started.
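
    So if you hit OGG-02050, the first thing to look at is the Streams pool sizing. A quick check and, if needed, adjustment would be something like the following (the 256M value is purely illustrative):

    SQL> show parameter streams_pool_size
    SQL> alter system set streams_pool_size=256M scope=both;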

    The parallelism parameter specifies the number of processes supporting the database log mining server. It defaults to 2.

    Register the extract

    We use the REGISTER EXTRACT command to register the primary extract group with the Oracle database. The extract process does not directly read the redo log files as in the classic capture mode, but integrates with the database log mining server to receive changes in the form of Logical Change Records (LCRs).

    We do this before adding the extract, and we must first connect to the database via the DBLOGIN command.

    GGSCI> DBLOGIN USER dbuser PASSWORD dbpasswd
    GGSCI> REGISTER EXTRACT ext1 DATABASE

    Example


    In this case we are creating the extract group intext and the extract datapump group intdp. We will be replicating the SH.customers table using the integrated capture mode.

    GGSCI (pdemvrhl061) 1> DBLOGIN USERID ggate, PASSWORD ggate
    Successfully logged into database.
    GGSCI (pdemvrhl061) 2> REGISTER EXTRACT intext DATABASE
    2013-01-24 17:58:28 WARNING OGG-02064 Oracle compatibility version 11.2.0.0.0 has limited datatype support for integrated capture. Version 11.2.0.3 required for full support.
    2013-01-24 17:58:46 INFO OGG-02003 Extract INTEXT successfully registered with database at SCN 1164411.
    GGSCI (pdemvrhl061) 1> ADD EXTRACT intext INTEGRATED TRANLOG, BEGIN NOW
    EXTRACT added.
    GGSCI (pdemvrhl061) 3> ADD EXTTRAIL /u01/app/ggate/dirdat/lt, EXTRACT intext
    EXTTRAIL added.
    GGSCI (pdemvrhl061) 4> ADD EXTRACT intdp EXTTRAILSOURCE /u01/app/ggate/dirdat/lt
    EXTRACT added.
    GGSCI (pdemvrhl061) 5> ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT intdp
    RMTTRAIL added.
    GGSCI (pdemvrhl061) 6> EDIT PARAMS intext
    EXTRACT intext
    USERID ggate, PASSWORD ggate
    TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 100)
    EXTTRAIL /u01/app/ggate/dirdat/lt
    TABLE sh.customers;
    GGSCI (pdemvrhl061) 7> EDIT PARAMS intdp
    EXTRACT intdp
    USERID ggate, PASSWORD ggate
    RMTHOST 10.xx.206.xx, MGRPORT 7809
    RMTTRAIL /u01/app/ggate/dirdat/rt
    TABLE sh.customers ;
    GGSCI (pdemvrhl061) 7> start extract intext
    Sending START request to MANAGER ...
    EXTRACT INTEXT starting
    GGSCI (pdemvrhl061) 8> info all

    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     RUNNING     INTDP       00:00:00      00:00:05
    EXTRACT     RUNNING     INTEXT      01:17:18      00:00:04

    On the target site, start the Replicat process.

    GGSCI (pdemvrhl062) 4> START REPLICAT rep1

    Sending START request to MANAGER ...
    REPLICAT REP1 starting

    GGSCI (pdemvrhl062) 5> info all

    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    REPLICAT    RUNNING     REP1        00:00:00      00:00:06

    In the background ….

    When we register the extract, a logmining capture process called OGG$CAP_INTEXT and a queue called OGG$Q_INTEXT are created in the GGATE schema.
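
    We can confirm this from the database as well; a simple query like the one below (run as a DBA or as the GoldenGate user) should show the capture process together with its queue and status:

    SQL> select capture_name, queue_name, status from dba_capture;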

    A good source of information is also the database alert log and we can see messages like the ones shown below:

    LOGMINER: session#=1 (OGG$CAP_INTEXT), reader MS00 pid=41 OS id=32201 sid=153 started
    Thu Jan 24 18:04:15 2013
    LOGMINER: session#=1 (OGG$CAP_INTEXT), builder MS01 pid=42 OS id=32203 sid=30 started
    Thu Jan 24 18:04:15 2013
    LOGMINER: session#=1 (OGG$CAP_INTEXT), preparer MS02 pid=43 OS id=32205 sid=155 started
    Thu Jan 24 18:04:16 2013

    LOGMINER: Begin mining logfile for session 1 thread 1 sequence 12, /u01/oradata/testdb1/redo03.log
    LOGMINER: End mining logfile for session 1 thread 1 sequence 12, /u01/oradata/testdb1/redo03.log

    Read further

    GoldenGate Integrated Capture Healthcheck Script [Article ID 1448324.1]

    Advisor Webcast : Extracting Data in Oracle GoldenGate Integrated Capture Mode (MOS note 740966.1)

    GoldenGate Integrated Capture using downstream mining database


    In my earlier post, we had discussed the GoldenGate 11g Integrated Capture feature using the local deployment model.

     Let us now look at the Downstream Capture deployment model of the Integrated Capture mode.

     It should be noted that the main difference between the Integrated Capture mode and the Classic Capture mode is that the Extract process no longer reads the online (or archived) redo log files of the Oracle database; this task is performed by the database logmining server, which reads the changes in the form of Logical Change Records (LCRs) that are then accessed by the Extract process, which writes them to the GoldenGate trail files.

     Where the logmining server resides is what distinguishes the local and downstream deployment models of Integrated Capture.

     In the local deployment, the source database and the mining database are the same.

     In downstream deployment, the source database and mining database are different databases and the logmining server resides in the downstream database. We configure redo transport (similar to what we do in Data Guard) and logs are shipped over the network from the source database to the downstream database. The logmining server in the downstream database extracts changes from the redo (or archived) log files in the form of logical change records, which are then passed on to the GoldenGate Extract process.

     Since the logmining activity imposes additional overhead on the database where it is running because it adds additional processes as well as consumes memory from the SGA, it is beneficial to offload this processing from the source database to the downstream database.

     We can configure the downstream database to be the same database as the target database, or we can have a separate downstream database in addition to the target database.

     However, do keep in mind that the Oracle database version and platform of the source and target database need to be the same in the downstream deployment model of Integrated Capture.

    Setup and Configuration

     Source Database

    •  Create the source database user account whose credentials Extract will use to fetch data and metadata from the source database. This user can be the same user we created when we setup and configured GoldenGate.
    •  Grant the appropriate privileges for Extract to operate in integrated capture mode via the dbms_goldengate_auth.grant_admin_privilege procedure (11.2.0.3 and above)
    •  Grant  select on v$database to that same user
    •  Configure Oracle Net so that the source database can communicate with the downstream database (like Data Guard)
    •  Create the password file and copy it to the $ORACLE_HOME/dbs location on the server hosting the downstream database (see the sketch after this list). Note that the password file must be the same at all source databases, and at the mining database.
    •  Configure one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream mining database.
    •  At the source database (as well as the downstream mining database), set the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database
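
    As a rough sketch of the password file step (host names, file names and the password here are purely illustrative for my testdb1/testdb2 setup), the commands would look something like this:

    # create (or recreate) the password file on the source database server
    $ orapwd file=$ORACLE_HOME/dbs/orapwtestdb1 password=oracle entries=10

    # copy it into the dbs directory on the downstream server, renamed for the downstream instance
    $ scp $ORACLE_HOME/dbs/orapwtestdb1 oracle@pdemvrhl062:<downstream ORACLE_HOME>/dbs/orapwtestdb2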

     

    Downstream Database

    •  Create the database user account on the downstream database. The Extract process will use these credentials to interact with the downstream logmining server. We can use the same user which we had created when we setup and configured GoldenGate on the target database (if the target database and downstream database are the same).
    •  Grant the appropriate privileges for the downstream mining user to operate in integrated capture mode by executing the dbms_goldengate_auth.grant_admin_privilege procedure.
    •  Grant SELECT on v$database to the same downstream mining user
    •  Downstream database must be running in ARCHIVELOG mode and we should configure archival of local redo log files if we want to run Extract in real-time integrated capture mode. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.
    •  Create Standby redo log files (same size as online redo log files and number of groups should be one greater than existing online redo log groups)
    •  Configure the database to archive standby redo log files locally that receive redo data from the online redo logs of the source database. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.

    Some new GoldenGate Parameters related to Downstream Integrated Capture

    MININGDBLOGIN – Before registering the extract we have to connect to the downstream logmining database with the appropriate database login credentials

    TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate – specify this in the downstream extract parameter file

    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y) – specify this in the downstream extract parameter file and required for real time capture

    Example

    This example illustrates real-time Integrated Capture so we have to configure standby log files as well.

     The source database is testdb1 and the downstream/target database is testdb2

     The database user account is GGATE in both the source as well  as downstream/target database

     We have setup and tested Oracle Net connectivity from source to downstream/target database. In this case we have setup TNS aliases testdb1 and testdb2 in the tnsnames.ora file on both servers

     Source Database (testdb1)

    Grant privileges

    SQL>  EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture',  grant_select_privileges=> true, -
          do_grants => TRUE);
    
    PL/SQL procedure successfully completed.
    
    SQL> GRANT SELECT ON V_$DATABASE TO GGATE;
    
    Grant succeeded.

    Configure Redo Transport

     SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=testdb2 ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdb2';
    
    System altered.
    
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
    System altered.
    
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';
    
    System altered.

    Downstream Database

     Grant Privileges

    SQL>  EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture',  grant_select_privileges=> true, -
          do_grants => TRUE);
    
    PL/SQL procedure successfully completed.
    
    SQL> GRANT SELECT ON V_$DATABASE TO GGATE;
    
    Grant succeeded.

    Prepare the mining database to archive its local redo

     SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/testdb2/arch_local VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';
    
    System altered.

    Create Standby log files

     SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/oradata/testdb2/standby_redo04.log' SIZE 50M;
    
    Database altered.
    
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/oradata/testdb2/standby_redo5.log' SIZE 50M;
    
    Database altered.
    
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/oradata/testdb2/standby_redo06.log'  SIZE 50M;
    
    Database altered.
    
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/oradata/testdb2/standby_redo07.log' SIZE 50M;
    
    Database altered.

    Prepare the mining database to archive redo received in standby redo logs from the source database

     SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u01/oradata/testdb2/arch_remote VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';
    
     System altered.
    
     SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
     System altered.

     Set DG_CONFIG at the downstream mining database

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';
    
     System altered.

    Setup Integrated Capture Extract Process (myext)

     [oracle@pdemvrhl062 ggate]$ ./ggsci
    
    Oracle GoldenGate Command Interpreter for Oracle
    Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
    Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21
    
    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.
    
    GGSCI (pdemvrhl062) 1> DBLOGIN USERID ggate@testdb1 PASSWORD ggate
    Successfully logged into database.
    
    GGSCI (pdemvrhl062) 2> MININGDBLOGIN USERID ggate, PASSWORD ggate
    Successfully logged into mining database.
    
    GGSCI (pdemvrhl062) 5> REGISTER EXTRACT myext DATABASE
    
    2013-01-31 18:02:02  WARNING OGG-02064  Oracle compatibility version 11.2.0.0.0 has limited datatype support for integrated capture. Version 11.2.0.3 required for full support.
    
    2013-01-31 18:03:12  INFO    OGG-02003  Extract MYEXT successfully registered with database at SCN 2129145.
    
    GGSCI (pdemvrhl062) 6> ADD EXTRACT myext INTEGRATED TRANLOG BEGIN NOW
    EXTRACT added.
    
    GGSCI (pdemvrhl062) 7> ADD EXTTRAIL /u01/app/ggate/dirdat/ic , EXTRACT myext
    EXTTRAIL added.
    
    GGSCI (pdemvrhl062) 8> EDIT PARAMS myext
    
    EXTRACT myext
    USERID ggate@testdb1, PASSWORD ggate
    TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    EXTTRAIL /u01/app/ggate/dirdat/ic
    TABLE sh.customers;

    Create the Replicat process (myrep)

    GGSCI (pdemvrhl062) 14> ADD REPLICAT myrep EXTTRAIL /u01/app/ggate/dirdat/ic
    REPLICAT added.
    
    GGSCI (pdemvrhl062) 17> EDIT PARAMS myrep
    
    REPLICAT myrep
    ASSUMETARGETDEFS
    USERID ggate, PASSWORD ggate
    MAP sh.customers, TARGET sh.customers;

    Start the Extract and Replicat processes

    GGSCI (pdemvrhl062) 19> info all
    
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    
    MANAGER     RUNNING
    EXTRACT     RUNNING     MYEXT       00:00:00      00:00:03
    REPLICAT    RUNNING     MYREP       00:00:00      00:00:03

    Test – On source database update rows of the CUSTOMERS table

     

    SQL> update customers set cust_city='SYDNEY';
    
    55500 rows updated.
    
    SQL> commit;
    
    Commit complete.

    On target database confirm the update statement has been replicated

    [oracle@pdemvrhl062 ggate]$ sqlplus sh/sh
    
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 31 18:39:41 2013
    
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    
    Connected to:
    
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> select distinct cust_city from customers;
    
    CUST_CITY
    
    ------------------------------
    
    SYDNEY

    Check the statistics of the downstream Extract myext

    GGSCI (pdemvrhl062) 23> stats extract myext
    
    Sending STATS request to EXTRACT MYEXT ...
    
    Start of Statistics at 2013-01-31 18:37:54.
    
    Output to /u01/app/ggate/dirdat/ic:
    
    Extracting from SH.CUSTOMERS to SH.CUSTOMERS:
    
    *** Total statistics since 2013-01-31 18:37:07 ***
            Total inserts                                      0.00
            Total updates                                  55500.00
            Total deletes                                      0.00
            Total discards                                     0.00
            Total operations                               55500.00

    Deploying the Oracle GoldenGate Plug-in on OEM 12c Cloud Control


    This note describes the procedure of implementing the GoldenGate plug-in for Oracle Cloud Control 12c.

    After deploying the GoldenGate Plug-in we can see a new Target Type “GoldenGate” appearing in the OEM 12c Targets menu and can now monitor the status and progress of the Extract, Manager and Replicat processes running in a particular GoldenGate environment.

    Download the note: Deploying the Oracle GoldenGate Plug-in on OEM 12c Cloud Control

     


    GoldenGate Active-Active Replication with Conflict Detection and Resolution (CDR) – Part 1


    Let us look at a simple example to illustrate GoldenGate’s Active-Active Replication with an introduction to Conflict Detection and Resolution.

    Let's call the two sites we are going to use for Active-Active replication Site A and Site B.

    On Site A we will have the following groups created

    • Extract – ext1
    • Data Pump – dpump1
    • Replicat – rep1

    On Site B we will have the following groups created

    • Extract – ext2
    • Data Pump – dpump2
    • Replicat – rep2

    On Site A we have the following trails set up

    • aa – local extract trail which will be written to by ext1
    • ab – remote trail which will be processed by data pump extract group dpump1. This will be shipped over the network to Site B

    On Site B we have the following trails set up

    • ac – local extract trail which will be written to by ext2
    • ad – remote trail which will be processed by data pump extract group dpump2. This will be shipped over the network to Site A

     

    Database setup

    Create the following objects on both databases (Site A and Site B)

    SQL> create table inventory
    (prod_id number,
    prod_category varchar2(20),
    qty_in_stock number,
    last_dml timestamp default systimestamp);

    Table created.

    SQL> alter table inventory add constraint pk_inventory primary key (prod_id) ;

    Table altered.

    SQL> grant all on inventory to ggate;

    Grant succeeded.

    CREATE OR REPLACE TRIGGER INVENTORY_CDR_TRG
    BEFORE UPDATE
    ON SH.INVENTORY
    REFERENCING NEW AS New OLD AS Old
    FOR EACH ROW
    BEGIN
    IF SYS_CONTEXT ('USERENV', 'SESSION_USER') != 'GGATE'
    THEN
    :NEW.LAST_DML := SYSTIMESTAMP;
    END IF;
    END;
    /

    Trigger created.

     

    Create the extract (EXT1) and data pump (DPUMP1) on Site A

    GGSCI (pdemvrhl061) 14> add extract ext1 tranlog begin now
    EXTRACT added.

    GGSCI (pdemvrhl061) 4> add exttrail /u01/app/ggate/dirdat/aa extract ext1
    EXTTRAIL added.

    GGSCI (pdemvrhl061) 16> add extract dpump1 exttrailsource /u01/app/ggate/dirdat/aa
    EXTRACT added.

    GGSCI (pdemvrhl061) 17> add rmttrail /u01/app/ggate/dirdat/ab extract dpump1
    RMTTRAIL added.

    GGSCI (pdemvrhl061) 14> edit params ext1

    EXTRACT ext1
    USERID ggate, PASSWORD ggate
    EXTTRAIL /u01/app/ggate/dirdat/aa
    TRANLOGOPTIONS EXCLUDEUSER ggate
    TABLE sh.inventory,
    GETBEFORECOLS (
    ON UPDATE KEYINCLUDING (prod_category,qty_in_stock, last_dml),
    ON DELETE KEYINCLUDING (prod_category,qty_in_stock, last_dml));

    GGSCI (pdemvrhl061) 15> edit params dpump1

    EXTRACT dpump1
    USERID ggate, PASSWORD ggate
    RMTHOST 10.32.206.62, MGRPORT 7809, TCPBUFSIZE 100000
    RMTTRAIL /u01/app/ggate/dirdat/ab
    PASSTHRU
    TABLE sh.inventory;

     

    On site B add replicat (REP2)

    GGSCI (pdemvrhl062) 37> add replicat rep2 exttrail /u01/app/ggate/dirdat/ab
    REPLICAT added.

    GGSCI (pdemvrhl062) 10> edit params rep2

    REPLICAT rep2
    ASSUMETARGETDEFS
    USERID ggate, PASSWORD ggate
    DISCARDFILE /u01/app/ggate/discard.txt, append
    MAP sh.inventory, TARGET sh.inventory;

     

    Create the extract (EXT2) and data pump (DPUMP2) on Site B

    GGSCI (pdemvrhl062) 3> add extract ext2 tranlog begin now
    EXTRACT added.

    GGSCI (pdemvrhl062) 4> add exttrail /u01/app/ggate/dirdat/ac extract ext2
    EXTTRAIL added.

    GGSCI (pdemvrhl062) 5> add extract dpump2 exttrailsource /u01/app/ggate/dirdat/ac
    EXTRACT added.

    GGSCI (pdemvrhl062) 6> add rmttrail /u01/app/ggate/dirdat/ad extract dpump2
    RMTTRAIL added.

    GGSCI (pdemvrhl062) 31> edit params ext2

    EXTRACT ext2
    USERID ggate, PASSWORD ggate
    EXTTRAIL /u01/app/ggate/dirdat/ac
    TRANLOGOPTIONS EXCLUDEUSER ggate
    TABLE sh.inventory,
    GETBEFORECOLS (
    ON UPDATE KEYINCLUDING (prod_category,qty_in_stock, last_dml),
    ON DELETE KEYINCLUDING (prod_category,qty_in_stock, last_dml));

    GGSCI (pdemvrhl062) 32> edit params dpump2

    EXTRACT dpump2
    USERID ggate, PASSWORD ggate
    RMTHOST 10.32.206.61, MGRPORT 7809, TCPBUFSIZE 100000
    RMTTRAIL /u01/app/ggate/dirdat/ad
    PASSTHRU
    TABLE sh.inventory;

     

    On site A add replicat (REP1)

    GGSCI (pdemvrhl061) 21> add replicat rep1 exttrail /u01/app/ggate/dirdat/ad
    REPLICAT added.

    GGSCI (pdemvrhl061) 10> edit params rep1

    REPLICAT rep1
    ASSUMETARGETDEFS
    USERID ggate, PASSWORD ggate
    DISCARDFILE /u01/app/ggate/discard.txt, append
    MAP sh.inventory, TARGET sh.inventory;

     

    On both Site A and Site B, add trandata

    GGSCI (pdemvrhl061) 17> dblogin userid ggate password ggate
    Successfully logged into database.

    GGSCI (pdemvrhl061) 12> add trandata sh.inventory cols (prod_category,qty_in_stock, last_dml)

    Logging of supplemental redo data enabled for table SH.INVENTORY.

    GGSCI (pdemvrhl061) 13> info trandata sh.inventory

    Logging of supplemental redo log data is enabled for table SH.INVENTORY.

    Columns supplementally logged for table SH.INVENTORY: PROD_ID, PROD_CATEGORY, QTY_IN_STOCK, LAST_DML.

    GGSCI (pdemvrhl062) 18> dblogin userid ggate password ggate
    Successfully logged into database.

    GGSCI (pdemvrhl062) 14> add trandata sh.inventory cols (prod_category,qty_in_stock, last_dml)

    Logging of supplemental redo data enabled for table SH.INVENTORY.

    GGSCI (pdemvrhl062) 15> info trandata sh.inventory

    Logging of supplemental redo log data is enabled for table SH.INVENTORY.

    Columns supplementally logged for table SH.INVENTORY: PROD_ID, PROD_CATEGORY, QTY_IN_STOCK, LAST_DML.

     

    Start the Extract and Data Pump process on Site A

    GGSCI (pdemvrhl061) 31> start extract ext1

    Sending START request to MANAGER …
    EXTRACT EXT1 starting

    GGSCI (pdemvrhl061) 23> start extract dpump1

    Sending START request to MANAGER …
    EXTRACT DPUMP1 starting

    GGSCI (pdemvrhl061) 32> info extract ext1

    EXTRACT EXT1 Last Started 2013-03-22 17:12 Status RUNNING
    Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
    Log Read Checkpoint Oracle Redo Logs
    2013-03-22 17:12:14 Seqno 250, RBA 30170624
    SCN 0.6827610 (6827610)

    GGSCI (pdemvrhl061) 34> info all

    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     RUNNING     DPUMP1      00:00:00      00:00:07
    EXTRACT     RUNNING     EXT1        00:00:00      00:00:03

     

    Start the Extract and Data Pump process on Site B

    GGSCI (pdemvrhl062) 22> start extract ext2

    Sending START request to MANAGER …
    EXTRACT EXT2 starting

    GGSCI (pdemvrhl062) 23> start extract dpump2

    Sending START request to MANAGER …
    EXTRACT DPUMP2 starting

    GGSCI (pdemvrhl062) 24> info all

    Program     Status      Group       Lag at Chkpt  Time Since Chkpt

    MANAGER     RUNNING
    EXTRACT     RUNNING     DPUMP2      00:00:00      00:26:01
    EXTRACT     RUNNING     EXT2        00:00:00      00:00:09

     

    On Site A start the Replicat process REP1

    GGSCI (pdemvrhl061) 38> start replicat rep1

    Sending START request to MANAGER …
    REPLICAT REP1 starting

    GGSCI (pdemvrhl061) 39> status replicat rep1
    REPLICAT REP1: RUNNING

     

    On Site B start the Replicat process REP2

    GGSCI (pdemvrhl062) 26> start replicat rep2

    Sending START request to MANAGER …
    REPLICAT REP2 starting

    GGSCI (pdemvrhl062) 27> status replicat rep2
    REPLICAT REP2: RUNNING

     

    INSERT a row from Site A

    SQL> select name from v$database;

    NAME
    ---------
    TESTDB1

    SQL> insert into inventory
    2 values
    3 (100,'TV',100,sysdate);

    1 row created.

    SQL> commit;

    Commit complete.

     

    Check if row is replicated on Site B

    SQL> select name from v$database;

    NAME
    ---------
    TESTDB2

    SQL> select * from inventory;

       PROD_ID PROD_CATEGORY        QTY_IN_STOCK LAST_DML
    ---------- -------------------- ------------ ---------
           100 TV                            100 22-MAR-13

     

    From Site B now INSERT another record

    SQL> insert into inventory
    2 values
    3 (101,'DVD',10,sysdate);

    1 row created.

    SQL> commit;

    Commit complete.

     

    From Site A check if the replication has taken place

    SQL> select * from inventory;

       PROD_ID PROD_CATEGORY        QTY_IN_STOCK LAST_DML
    ---------- -------------------- ------------ ---------
           100 TV                            100 22-MAR-13
           101 DVD                            10 22-MAR-13

    GoldenGate Active-Active Replication with Conflict Detection and Resolution (CDR) – Part 3


    In the earlier post we saw a case of GoldenGate conflict resolution using the Trusted Site (or Trusted Source) method, where one site is designated as the trusted or master site and, in a CDR scenario, will always prevail over the other sites participating in the Active-Active replication.

    We saw how an UPDATE statement conflict was detected and resolved: the trail file sent from the trusted site (Site A) overwrote the update made on Site B, while on Site A the trail file sent from Site B was ignored.

    Let us now look at a CDR situation which is resolved by using the USEMIN clause in the RESOLVECONFLICT parameter.

    The USEMIN keyword means that if the value of the conflict resolution column (in this case LAST_DML) recorded in the trail file is less than the value of that column in the database, the update from the trail file is applied; otherwise the update recorded in the trail file is ignored.

    This is how the replicat parameter file will look on both sites. The same extract and data pump parameter files as described in the earlier posts will be used for this example.

    Site A

    GGSCI (pdemvrhl061) 3> view params rep1

    REPLICAT rep1
    ASSUMETARGETDEFS
    USERID ggate, PASSWORD ggate
    DISCARDFILE /u01/app/ggate/discard.txt, append
    MAP sh.inventory, TARGET sh.inventory,
    COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
    RESOLVECONFLICT (UPDATEROWEXISTS,
    (DEFAULT, USEMIN (last_dml)));

    Site B

    GGSCI (pdemvrhl062) 2> view params rep2

    REPLICAT rep2
    ASSUMETARGETDEFS
    USERID ggate, PASSWORD ggate
    DISCARDFILE /u01/app/ggate/discard.txt, append
    MAP sh.inventory, TARGET sh.inventory,
    COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
    RESOLVECONFLICT (UPDATEROWEXISTS,
    (DEFAULT, USEMIN (last_dml)));

    On Site A
    update inventory
    set QTY_IN_STOCK=11 where prod_id=101;

    commit;

    On Site B
    update inventory
    set QTY_IN_STOCK=1 where prod_id=101;

    commit;

    End Result

    select * from inventory;
    
       PROD_ID PROD_CATEGORY        QTY_IN_STOCK LAST_DML
    ---------- -------------------- ------------ --------------------------------------
           102 Baseball                      105 10-APR-13 04.44.01.500114 PM
           101 Football                        1 11-APR-13 03.28.01.388828 PM
    

    The update performed on Site B has prevailed in this case.

    Let us examine the trail files on both sites using the Logdump utility and see why.

    Trail File on Site A (sent from source Site B)

    Name: SH.INVENTORY
    After  Image:                                             Partition 4   G  e
    0000 0007 0000 0003 3130 3100 0200 0500 0000 0131 | ........101........1 
     0003 001f 0000 3230 3133 2d30 342d 3131 3a31 353a | ......2013-04-11:15: 
     3238 3a30 312e 3338 3838 3238 3030 30             | 28:01.388828000
    
    

    Trail File on Site B (sent from source Site A)

    2013/04/11 15:28:01.959.870 FieldComp            Len    56 RBA 3461
    Name: SH.INVENTORY
    After  Image:                                             Partition 4   G  e
    0000 0007 0000 0003 3130 3100 0200 0600 0000 0231 | ........101........1
    3100 0300 1f00 0032 3031 332d 3034 2d31 313a 3135 | 1......2013-04-11:15
    3a32 383a 3032 2e30 3537 3238 3030 3030           | :28:02.057280000
    
    

    The LAST_DML time in the extract trail file on Site A (sent from Site B) is lower (15:28:01.388828000), so the value of QTY_IN_STOCK in that trail file, which is 1, was applied; the trail file on Site B (sent from Site A), which had the value 11 for QTY_IN_STOCK, was not applied.

    If we use the STATS command with the REPORTCDR option, we can see that CDR has taken place and that an UPDATEROWEXISTS type of conflict was resolved.

    GGSCI (pdemvrhl061) 1> stats replicat rep1 latest reportcdr
    
    Sending STATS request to REPLICAT REP1 ...
    
    Start of Statistics at 2013-04-11 16:07:30.
    
    Replicating from SH.INVENTORY to SH.INVENTORY:
    
    *** Latest statistics since 2013-04-11 15:08:45 ***
            Total inserts                                      0.00
            Total updates                                      2.00
            Total deletes                                      0.00
            Total discards                                     0.00
            Total operations                                   2.00
            Total CDR conflicts                                2.00
            CDR resolutions succeeded                          2.00
            CDR UPDATEROWEXISTS conflicts                      2.00
    