In an earlier post, we discussed the GoldenGate 11g Integrated Capture feature using the local deployment model.
Let us now look at the Downstream Capture deployment model of the Integrated Capture mode.
The main difference between Integrated Capture mode and Classic Capture mode is that the Extract process no longer reads the online (or archived) redo log files of the Oracle database itself. Instead, that work is done by the database logmining server, which captures the changes in the form of Logical Change Records (LCRs); these are then consumed by the Extract process, which writes them to the GoldenGate trail files.
Where the logmining server resides is what distinguishes the local deployment model of Integrated Capture from the downstream one.
In the local deployment, the source database and the mining database are the same.
In a downstream deployment, the source database and the mining database are different databases, and the logmining server resides in the downstream database. We configure redo transport (similar to what we do in Data Guard) so that logs are shipped over the network from the source database to the downstream database. The logmining server in the downstream database then extracts changes from the redo log (or archived log) files in the form of Logical Change Records, which are passed on to the GoldenGate Extract process.
Since logmining imposes additional overhead on the database where it runs (it adds extra processes and consumes memory from the SGA), it is beneficial to offload this processing from the source database to a downstream database.
We can configure the downstream database to be the same database as the target database, or we can use a separate downstream database in addition to the target database.
However, do keep in mind that in the downstream deployment model of Integrated Capture, the Oracle database version and platform of the source and downstream mining database need to be the same.
Setup and Configuration
Source Database
- Create the source database user account whose credentials Extract will use to fetch data and metadata from the source database. This can be the same user we created when we set up and configured GoldenGate.
- Grant the privileges required for Extract to operate in integrated capture mode via the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure (11.2.0.3 and above)
- Grant select on v$database to that same user
- Configure Oracle Net so that the source database can communicate with the downstream database (like Data Guard)
- Create the password file and copy it to the $ORACLE_HOME/dbs directory on the server hosting the downstream database. Note that the password file must be the same at all source databases and at the mining database.
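As a sketch of that step (the host names and SYS password here are placeholders, not taken from this setup), the password file could be created with orapwd on the source server and copied across, renaming it to match the downstream instance name:

[oracle@source-host dbs]$ orapwd file=$ORACLE_HOME/dbs/orapwtestdb1 password=<sys_password>
[oracle@source-host dbs]$ scp $ORACLE_HOME/dbs/orapwtestdb1 downstream-host:$ORACLE_HOME/dbs/orapwtestdb2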
- Configure one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream mining database.
- At the source database (as well as the downstream mining database), set the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database
Downstream Database
- Create the database user account on the downstream database. The Extract process will use these credentials to interact with the downstream logmining server. We can use the same user we created when we set up and configured GoldenGate on the target database (if the target database and downstream database are the same).
- Grant the privileges required for the downstream mining user to operate in integrated capture mode by executing the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE procedure.
- Grant SELECT on v$database to the same downstream mining user
- The downstream database must be running in ARCHIVELOG mode, and we should configure archiving of its local redo log files if we want to run Extract in real-time integrated capture mode. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.
- Create standby redo log files (the same size as the online redo log files, with one more group than the number of existing online redo log groups)
- Configure the database to locally archive the standby redo log files that receive redo data from the online redo logs of the source database. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.
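Before adding the standby redo log groups, it can be useful to check the size and number of the existing online redo log groups (and any existing standby groups) on the downstream database, for example:

SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 AS SIZE_MB FROM V$LOG;

SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 AS SIZE_MB FROM V$STANDBY_LOG;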
Some new GoldenGate Parameters related to Downstream Integrated Capture
MININGDBLOGIN – before registering the Extract, we have to connect to the downstream logmining database with the appropriate database login credentials
TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate – specify this in the downstream Extract parameter file
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y) – specify this in the downstream Extract parameter file; it is required for real-time capture
Example
This example illustrates real-time Integrated Capture, so we have to configure standby redo log files as well.
The source database is testdb1 and the downstream/target database is testdb2
The database user account is GGATE in both the source as well as downstream/target database
We have set up and tested Oracle Net connectivity from the source to the downstream/target database. In this case we have created TNS aliases testdb1 and testdb2 in the tnsnames.ora file on both servers.
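For illustration, the tnsnames.ora entry for the downstream database might look like the following (the host name and port are assumptions for this sketch), and tnsping can be used to verify connectivity:

testdb2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = downstream-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb2))
  )

$ tnsping testdb2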
Source Database (testdb1)
Grant privileges
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture', grant_select_privileges => true, -
do_grants => TRUE);

PL/SQL procedure successfully completed.

SQL> GRANT SELECT ON V_$DATABASE TO GGATE;

Grant succeeded.
Configure Redo Transport
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=testdb2 ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdb2';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';

System altered.
Downstream Database
Grant Privileges
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture', grant_select_privileges => true, -
do_grants => TRUE);

PL/SQL procedure successfully completed.

SQL> GRANT SELECT ON V_$DATABASE TO GGATE;

Grant succeeded.
Prepare the mining database to archive its local redo
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/testdb2/arch_local VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';

System altered.
Create Standby log files
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/oradata/testdb2/standby_redo04.log' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/oradata/testdb2/standby_redo5.log' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/oradata/testdb2/standby_redo06.log' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/oradata/testdb2/standby_redo07.log' SIZE 50M;

Database altered.
Prepare the mining database to archive redo received in standby redo logs from the source database
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u01/oradata/testdb2/arch_remote VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

System altered.
Set DG_CONFIG at the downstream mining database
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';

System altered.
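At this point it is worth sanity-checking redo transport before registering the Extract. One simple check (a sketch, not part of the original run) is to force a log switch on the source and then confirm on the downstream database that the shipped redo was received and archived:

On the source database (testdb1):

SQL> ALTER SYSTEM SWITCH LOGFILE;

On the downstream database (testdb2):

SQL> SELECT SEQUENCE#, NAME, FIRST_TIME FROM V$ARCHIVED_LOG ORDER BY FIRST_TIME;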
Setup Integrated Capture Extract Process (myext)
[oracle@pdemvrhl062 ggate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (pdemvrhl062) 1> DBLOGIN USERID ggate@testdb1 PASSWORD ggate
Successfully logged into database.

GGSCI (pdemvrhl062) 2> MININGDBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into mining database.

GGSCI (pdemvrhl062) 5> REGISTER EXTRACT myext DATABASE

2013-01-31 18:02:02  WARNING OGG-02064  Oracle compatibility version 11.2.0.0.0 has limited datatype support for integrated capture. Version 11.2.0.3 required for full support.

2013-01-31 18:03:12  INFO    OGG-02003  Extract MYEXT successfully registered with database at SCN 2129145.

GGSCI (pdemvrhl062) 6> ADD EXTRACT myext INTEGRATED TRANLOG BEGIN NOW
EXTRACT added.

GGSCI (pdemvrhl062) 7> ADD EXTTRAIL /u01/app/ggate/dirdat/ic, EXTRACT myext
EXTTRAIL added.

GGSCI (pdemvrhl062) 8> EDIT PARAMS myext

EXTRACT myext
USERID ggate@testdb1, PASSWORD ggate
TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
EXTTRAIL /u01/app/ggate/dirdat/ic
TABLE sh.customers;
Create the Replicat process (myrep)
GGSCI (pdemvrhl062) 14> ADD REPLICAT myrep EXTTRAIL /u01/app/ggate/dirdat/ic
REPLICAT added.

GGSCI (pdemvrhl062) 17> EDIT PARAMS myrep

REPLICAT myrep
ASSUMETARGETDEFS
USERID ggate, PASSWORD ggate
MAP sh.customers, TARGET sh.customers;
Start the Extract and Replicat processes
GGSCI (pdemvrhl062) 19> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     MYEXT       00:00:00      00:00:03
REPLICAT    RUNNING     MYREP       00:00:00      00:00:03
Test – On source database update rows of the CUSTOMERS table
SQL> update customers set cust_city='SYDNEY';

55500 rows updated.

SQL> commit;

Commit complete.
On target database confirm the update statement has been replicated
[oracle@pdemvrhl062 ggate]$ sqlplus sh/sh

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 31 18:39:41 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select distinct cust_city from customers;

CUST_CITY
------------------------------
SYDNEY
Check the statistics of the downstream Extract myext
GGSCI (pdemvrhl062) 23> stats extract myext

Sending STATS request to EXTRACT MYEXT ...

Start of Statistics at 2013-01-31 18:37:54.

Output to /u01/app/ggate/dirdat/ic:

Extracting from SH.CUSTOMERS to SH.CUSTOMERS:

*** Total statistics since 2013-01-31 18:37:07 ***

        Total inserts                          0.00
        Total updates                      55500.00
        Total deletes                          0.00
        Total discards                         0.00
        Total operations                   55500.00
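The logmining server itself can also be checked from SQL*Plus on the downstream database. As a sketch (the exact view contents vary by patch level), the capture process that REGISTER EXTRACT created should be visible in DBA_CAPTURE and V$GOLDENGATE_CAPTURE:

SQL> SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

SQL> SELECT CAPTURE_NAME, STATE FROM V$GOLDENGATE_CAPTURE;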