
12c GoldenGate New Feature – Coordinated Replicat


In one of the earlier posts we discussed the GoldenGate 12c Integrated Apply or Integrated Replicat feature, which enables high-volume transactions to be applied in parallel.

But it is supported only for Oracle databases, and requires database version 11.2.0.4 or higher.

The Coordinated Replicat feature is new in GoldenGate 12c: the Replicat is multi-threaded, so within a single Replicat instance multiple threads read the trail independently and apply transactions in parallel. One coordinator thread spawns and coordinates one or more threads that execute the replicated SQL operations in parallel.

The main difference between the Integrated Replicat and the Coordinated Replicat is that with Integrated Replicat, GoldenGate itself adds (or removes) apply server processes depending on the workload, whereas with Coordinated Replicat the partitioning of the workload is user defined, so that high-volume transactions can be applied concurrently and in parallel. This is done via the THREADS and MAXTHREADS parameters, which we will discuss in this post using an example.

In earlier versions, we scaled by fanning the work out to multiple Replicats when a single Replicat could not keep up – but this required multiple Extract, Data Pump and Replicat groups (and parameter files as well).

For example we had to create three separate replicat groups and use the RANGE parameter:

REP1.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (1, 3, ID));

REP2.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (2, 3, ID));

REP3.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (3, 3, ID));

Now, with GoldenGate 12c Coordinated Replicat (or coordinated delivery), there is a single Replicat parameter file; the additional Replicat groups are created automatically, and a single coordinator process spawns additional threads and assigns an individual workload to each thread. Partitioning of the workload is done via the THREADRANGE option of the MAP statement.

For example now we require just one single replicat parameter file:

REP.PRM
MAP sales.acct, TARGET sales.acct, THREADRANGE(1-3, ID);
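Besides THREADRANGE, individual MAP statements can also be pinned to specific threads with the THREAD option. A minimal sketch (the table names here are purely illustrative):

MAP sales.orders, TARGET sales.orders, THREAD(1);
MAP sales.customers, TARGET sales.customers, THREAD(2);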

So if the target database is an Oracle version that does not support Integrated Replicat, or if it is a non-Oracle database, we can use the Coordinated Replicat feature to achieve more or less the same benefit as Integrated Replicat: higher apply throughput on the target database by processing the workload in parallel.

Let us now look at an example of using a Coordinated Replicat.

As in the case of the previous example using Integrated Replicat, the source database is an Oracle 12c Pluggable Database called SALES and we are replicating to another Oracle 12c Pluggable Database called SALES_DR.

We have created the table MYOBJECTS in both the source and target databases and have already enabled supplemental logging at the schema level.

SQL> create table myobjects as select * from all_objects where 1=2;

Table created.

SQL> alter table myobjects add constraint pk_myobjects primary key (object_id);

Table altered.

SQL> grant all on myobjects to C##GGADMIN;

Grant succeeded.
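For reference, schema-level supplemental logging would have been enabled in GGSCI along these lines (a sketch – the container and schema names match this example, and the exact login used is an assumption):

GGSCI> DBLOGIN USERIDALIAS gg_root
GGSCI> ADD SCHEMATRANDATA sales.sh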

 

On the source we have created the Extract and Data Pump groups – we are using Integrated Extract in this case.


 
Register the Integrated Extract
 
GGSCI (orasql-001-dev.mydomain) 6> DBLOGIN USERIDALIAS gg_root

Successfully logged into database CDB$ROOT.

GGSCI (orasql-001-dev.mydomain) 7> REGISTER EXTRACT myext1 DATABASE  CONTAINER (sales)

Extract MYEXT1 successfully registered with database at SCN 3669081.

 
Add the Integrated Extract and Data Pump 
 
GGSCI (orasql-001-dev.mydomain) 8> ADD EXTRACT myext1 INTEGRATED TRANLOG, BEGIN NOW

EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 9> ADD EXTTRAIL ./dirdat/lt EXTRACT myext1

EXTTRAIL added.

GGSCI (orasql-001-dev.mydomain) 10> ADD EXTRACT mydp1 EXTTRAILSOURCE ./dirdat/lt BEGIN NOW

EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 11> ADD RMTTRAIL ./dirdat/rt EXTRACT mydp1

RMTTRAIL added.

 
Edit the Integrated Extract Parameter File
 
GGSCI (orasql-001-dev.mydomain) 11> edit params myext1

EXTRACT myext1

SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_root
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
EXTTRAIL ./dirdat/lt
SOURCECATALOG sales
TABLE sh.myobjects;

 
Edit the Data Pump Parameter File 
 
GGSCI (orasql-001-dev.mydomain) 12> edit params mydp1

EXTRACT mydp1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_owner
RMTHOST orasql-001-test, MGRPORT 7809
RMTTRAIL ./dirdat/rt
SOURCECATALOG sales
TABLE sh.myobjects;

 


On the target, add the Coordinated Replicat
 

GGSCI (orasql-001-test.mydomain) 1> DBLOGIN USERIDALIAS gg_sales
Successfully logged into database SALES_DR.

GGSCI (orasql-001-dev.mydomain) 4> add replicat rep1, coordinated, EXTTRAIL ./dirdat/rt, maxthreads 5
REPLICAT (Coordinated) added.

GGSCI (orasql-001-dev.mydomain) 1> view params rep1

REPLICAT rep1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_sales
ASSUMETARGETDEFS
MAP sales.sh.myobjects, TARGET sales_dr.sh.myobjects,
THREADRANGE(1-5, OBJECT_ID);

GGSCI (kens-orasql-001-dev.corporateict.domain) 5> start replicat rep1

Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (kens-orasql-001-dev.corporateict.domain) 6> info replicat rep1

REPLICAT   REP1      Last Started 2014-01-23 10:38   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           25811
Log Read Checkpoint  File ./dirdat/rt000000
                     First Record  RBA 0

GGSCI (kens-orasql-001-dev.corporateict.domain) 8>  info replicat rep1 detail

REPLICAT   REP1      Last Started 2014-01-23 10:38   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           25811
Log Read Checkpoint  File ./dirdat/rt000000
                     First Record  RBA 1642

 
We now populate the source table with some data and check if the extract has captured the change data
 

SQL> insert into myobjects
  2  select * from all_objects;

77694 rows created.

SQL> commit;

Commit complete.

GGSCI (kens-orasql-001-test.corporateict.domain) 1> stats extract myext1 latest

Sending STATS request to EXTRACT MYEXT1 ...

Start of Statistics at 2014-01-23 10:41:54.

Output to ./dirdat/lt:

Extracting from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Latest statistics since 2014-01-23 10:41:33 ***
        Total inserts                                  77694.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               77694.00

End of Statistics.

 

We can now see that the Replicat has spawned 5 threads (because we had specified MAXTHREADS 5) and that the additional replicat groups have been created (REP1001 to REP1005).
 

GGSCI (orasql-001-dev.mydomain) 9> info replicat rep1 detail

REPLICAT   REP1      Last Started 2014-01-23 10:48   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:03:11 (updated 00:00:00 ago)
Process ID           26831
Log Read Checkpoint  File ./dirdat/rt000000
                     2014-01-23 10:46:29.747584  RBA 28181513

Lowest Log BSN value: 

Active Threads:
ID  Group Name PID   Status   Lag at Chkpt  Time Since Chkpt
1   REP1001    26838 RUNNING  00:00:00      00:00:20
2   REP1002    26839 RUNNING  00:00:00      00:00:20
3   REP1003    26840 RUNNING  00:00:00      00:00:20
4   REP1004    26841 RUNNING  00:00:00      00:00:20
5   REP1005    26842 RUNNING  00:00:00      00:00:20

GGSCI (orasql-001-dev.mydomain) 2> info replicat rep1001

REPLICAT   REP1001   Last Started 2014-01-23 10:48   Status RUNNING
COORDINATED          Replicat Thread                  Thread 1
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Process ID           26838
Log Read Checkpoint  File ./dirdat/rt000000
                     2014-01-23 10:49:24.008242  RBA 56361384

 

About 77000 rows were inserted into the target table, and we can see that the workload has been distributed by the replicat coordinator process among the 5 threads – each thread has processed about 15000 rows.
 

GGSCI (orasql-001-dev.mydomain) 2> stats replicat rep1001

Sending STATS request to REPLICAT REP1001 ...

Start of Statistics at 2014-01-23 10:51:44.

Replicating from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Total statistics since 2014-01-23 10:49:31 ***
        Total inserts                                  15748.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               15748.00

GGSCI (orasql-001-dev.mydomain) 4>

GGSCI (orasql-001-dev.mydomain) 3> stats replicat rep1005

Sending STATS request to REPLICAT REP1005 ...

Start of Statistics at 2014-01-23 10:52:09.

Replicating from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Total statistics since 2014-01-23 10:49:31 ***
        Total inserts                                  15640.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               15640.00

GoldenGate real-time replication from Active Standby Database to SQL Server 2012 target


This note describes how to run an Initial Load along with Change Data Capture from a source Oracle 11g R2 Active Standby database (using Archived Log Only, or ALO, mode capture) to a Microsoft SQL Server 2012 target database.

The table is a 6.3 million row table – AC_ACCOUNT in the IDIT_PRD schema.

Steps

Create the Initial Load Extract

GGSCI (db02) 2> add extract testini1 sourceistable
EXTRACT added.


extract testini1
setenv (ORACLE_SID="DEVSB2")
setenv (ORACLE_HOME="/opt/oracle/product/server/11.2.0.3")
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.corp, MGRPORT 7809,   tcpbufsize 10485760, tcpflushbytes 10485760
rmtfile ./dirdat/rr, maxfiles 999999, megabytes 200, append
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST  /u03/oracle/DEVSB2/arch/
TABLE IDIT_PRD.AC_ACCOUNT;

Notes on using ALO mode:

1) The connection is to the open Active Standby database and not the primary database

2) The TRANLOGOPTIONS ARCHIVEDLOGONLY parameter has to be used to indicate that the Extract needs to read the archived log files on the standby database host rather than the online redo log files on the primary database host

3) If we are using the FRA as the location for the archived redo log files, then the LOG_ARCHIVE_DEST_1 parameter on the standby database has to be set to a directory other than the FRA, because in ALO mode OGG cannot read archive log files from the date-based sub-directories used by the FRA (where a new directory is created for each day) – see the sketch after these notes

Refer MOS note: ALO OGG Extract Unable to Find Archive Logs Under Date Coded sub Directories (Doc ID 1359776.1)

4) We have not specified the COMPLETEARCHIVEDLOGONLY parameter, as it is the default in ALO mode. It forces Extract to wait until an archived log has been completely written to disk before starting to process its redo data.

It is recommended NOT to use the NOCOMPLETEARCHIVEDLOGONLY parameter (the default for Classic Extract outside ALO mode) when running in ALO mode.
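As mentioned in note 3, on the standby the archive destination might be set along these lines (a sketch that reuses the ALTARCHIVELOGDEST path from the parameter file above):

SQL> alter system set log_archive_dest_1='LOCATION=/u03/oracle/DEVSB2/arch/' scope=both;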

Create the Initial Load Replicat

GGSCI (DCV-RORSQL-N001) 125> add replicat testrep1 exttrail ./dirdat/rr
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 126> edit params testrep1
REPLICAT testrep1
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source9.def
BATCHSQL
MAP IDIT_PRD.AC_ACCOUNT, TARGET IDIT_PRD.AC_ACCOUNT;
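The source9.def definitions file referenced by SOURCEDEFS would have been generated on the Oracle source with the DEFGEN utility and copied to the dirdef directory on the SQL Server host. A sketch (the defgen parameter file name here is an assumption):

GGSCI (db02) 1> edit params defgen9

DEFSFILE ./dirdef/source9.def
USERID ggate_owner, PASSWORD ggate
TABLE IDIT_PRD.AC_ACCOUNT;

oracle@db02:/u01/oracle/goldengate > ./defgen paramfile ./dirprm/defgen9.prm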


Create the CDC Extract

GGSCI (db02) 5> add extract cdcext tranlog begin now
EXTRACT added.


GGSCI (db02) 6> add rmttrail ./dirdat/rs extract cdcext
RMTTRAIL added.



GGSCI (db02) 2> edit params cdcext

Extract cdcext
setenv (ORACLE_SID="DEVSB2")
setenv (ORACLE_HOME="/opt/oracle/product/server/11.2.0.3")
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.corp, MGRPORT 7809
RMTTRAIL ./dirdat/rs
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST  /u03/oracle/DEVSB2/arch/
TABLE IDIT_PRD.AC_ACCOUNT;

Note:
Since the CDC Extract is reading archived log files from the Active Standby, we have to specify the archived log sequence number and position (RBA) from which to start reading:


GGSCI (db02) 1> alter extract cdcext extseqno 105 extrba 188119040
EXTRACT altered.

Create the CDC Replicat on MS SQL Server 2012 target

GGSCI (DCV-RORSQL-N001) 131> add replicat repcdc exttrail ./dirdat/rs
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 133> edit params repcdc

REPLICAT repcdc
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source9.def
MAP IDIT_PRD.AC_ACCOUNT, TARGET IDIT_PRD.AC_ACCOUNT;

Start the CDC Extract before the Initial Load Extract – Do not start the CDC Replicat!

GGSCI (db02) 2> start extract cdcext

Sending START request to MANAGER ...
EXTRACT CDCEXT starting


GGSCI (db02) 3> info extract cdcext

EXTRACT    CDCEXT    Last Started 2014-03-06 12:48   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:05:48 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     First Record
                     SCN 0.0 (0)

Start the Initial Load Extract

GGSCI (db02) 6> start extract testini1

Sending START request to MANAGER ...
EXTRACT TESTINI1 starting


GGSCI (db02) 7> info extract testini1

EXTRACT    TESTINI1  Initialized   2014-03-06 12:25   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 12> !
info extract testini1

EXTRACT    TESTINI1  Last Started 2014-03-06 12:50   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table IDIT_PRD.AC_ACCOUNT
                     2014-03-06 12:50:29  Record 304670
Task                 SOURCEISTABLE


While the Initial Load Extract is running we perform a transaction on the Primary Oracle database

SQL> update idit_prd.ac_account
 2  set FREEZE_DATE='01-JAN-2020'
  3  where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

While the Initial Load Extract is running we start the Initial Load Replicat

GGSCI (DCV-RORSQL-N001) 135> start replicat testrep1

Sending START request to MANAGER ('MANAGER') ...
REPLICAT TESTREP1 starting


GGSCI (DCV-RORSQL-N001) 136> info replicat testrep1

REPLICAT   TESTREP1  Last Started 2014-03-06 12:53   Status RUNNING
Checkpoint Lag       00:02:39 (updated 00:00:00 ago)
Process ID           6764
Log Read Checkpoint  File ./dirdat/rr000000
                     2014-03-06 12:50:39.244754  RBA 21561942

We will start the CDC Replicat only after the initial load has been completed on the target database.

At this point the initial load Replicat is still inserting rows on the target

GGSCI (DCV-RORSQL-N001) 137> stats replicat testrep1

Sending STATS request to REPLICAT TESTREP1 ...

Start of Statistics at 2014-03-06 12:53:54.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 12:53:11 ***
        Total inserts                                 276172.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              276172.00

Now the initial load has completed and we see that 6359427 rows have been extracted

GGSCI (db02) 17> info extract testini1

EXTRACT    TESTINI1  Last Started 2014-03-06 12:50   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table IDIT_PRD.AC_ACCOUNT
                     2014-03-06 12:53:27  Record 6359427
Task                 SOURCEISTABLE

The CDC extract is meanwhile running and we see that it has captured the UPDATE statement we executed

GGSCI (db02) 30> info extract cdcext

EXTRACT    CDCEXT    Last Started 2014-03-06 13:01   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2014-03-06 12:55:11  Seqno 107, RBA 3587072
                     SCN 2.2325484702 (10915419294)


GGSCI (db02) 31> stats extract cdcext

Sending STATS request to EXTRACT CDCEXT ...

Start of Statistics at 2014-03-06 13:04:54.

Output to ./dirdat/rs:

Extracting from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 13:01:43 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

We will start the CDC Replicat only after the initial load replicat has inserted all the rows into the MS SQL Server 2012 target

GGSCI (DCV-RORSQL-N001) 141> send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 690 seconds.


GGSCI (DCV-RORSQL-N001) 142> !
send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 715 seconds.


GGSCI (DCV-RORSQL-N001) 143> !
send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 759 seconds.

When we see the “At EOF, no more records to process.” message, it means the initial load is now complete

GGSCI (DCV-RORSQL-N001) 146> send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 1,072 seconds.
At EOF, no more records to process.



GGSCI (DCV-RORSQL-N001) 147> stats replicat testrep1 latest

Sending STATS request to REPLICAT TESTREP1 ...

Start of Statistics at 2014-03-06 13:10:26.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Latest statistics since 2014-03-06 12:53:11 ***
        Total inserts                                6359427.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                             6359427.00


We can now start the CDC Replicat on the target

GGSCI (DCV-RORSQL-N001) 182> start replicat repcdc

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPCDC starting


GGSCI (DCV-RORSQL-N001) 183> info replicat repcdc

REPLICAT   REPCDC    Last Started 2014-03-06 13:49   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           6472
Log Read Checkpoint  File ./dirdat/rs000001
                     2014-03-06 13:49:28.714735  RBA 4111468

We can see that it has applied the single UPDATE statement on the target SQL Server database

GGSCI (DCV-RORSQL-N001) 184> stats replicat repcdc

Sending STATS request to REPLICAT REPCDC ...

Start of Statistics at 2014-03-06 13:49:35.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 13:49:21 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

Verify the UPDATE statement in the SQL Server database – note the value of the FREEZE_DATE column.
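A query along these lines (a sketch) would confirm the change on the target:

SELECT ID, FREEZE_DATE FROM IDIT_PRD.AC_ACCOUNT WHERE ID = 1;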

GoldenGate Initial Load Methods Oracle source to SQL Server 2012 target


In this post we will look at three different methods of performing an initial data load from an Oracle 11g source database running on an HP-UX IA64 platform to a SQL Server 2012 target database hosted on Windows 2012 Datacenter.

The three methods we are using here are:

1) Oracle GoldenGate Direct Load over network without trail files
2) Oracle GoldenGate File to Replicat method
3) Oracle GoldenGate File with SQL Server BULK INSERT

These are some of the results obtained in our testing:

Initial load extract:

Between 2 and 3 million rows per minute

PRD.SH_BATCH_LOG table with 8001500 rows extracted in 4:30 minutes

PRD.AC_TRANSACTION_RACI table with 74104323 rows extracted in 21 minutes

Initial load replicat:

Between 1 and 1.5 million rows every 2 minutes

With a single replicat process, the table PRD.SH_BATCH_LOG with 7895001 rows took 15 minutes.

With 3 parallel replicat processes, the same 7.8 million row table was loaded in under 5 minutes, each replicat processing about 2.6 million rows.

The 3 parallel replicat processes pushed CPU utilization to around the 60-70% mark, but not higher.

Using 5 parallel replicat processes we were able to load a 177 million row table in a little over 3 hours

The best performance obtained was using SQL Server BULK INSERT, where we were able to load 8 million rows in around 2 minutes.

 

1) Oracle GoldenGate Direct Load over network without trail files

Note – in the Direct Load method no trail files are created, but this is not a very efficient method for a large table.

GGSCI (db02) 2> edit params defgen

DEFSFILE ./dirdat/source.def
USERID GGATE_OWNER@REDEVDB2, PASSWORD ggate
TABLE PRD.T_PRODUCT_LINE;


oracle@db02:/u01/oracle/goldengate > ./defgen paramfile /u01/oracle/goldengate/dirprm/defgen.prm

Copy source.def to the ./dirdef directory on the Windows 2012 server

Create the Initial Load Extract

GGSCI (db02) 5> add extract extinit1 sourceistable
EXTRACT added.


extract extinit1
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809
RMTTASK REPLICAT, GROUP rinit1
TABLE PRD.T_PRODUCT_LINE;

Next create the table in the SQL Server database

Create the initial load replicat on the target SQL Server GoldenGate environment

GGSCI (DCV-RORSQL-N001) 28> add replicat rinit1 specialrun
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 29> edit params rinit1

replicat rinit1
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source.def
MAP PRD.T_PRODUCT_LINE, TARGET PRD.T_PRODUCT_LINE;

Start the initial load extract

GGSCI (db02) 11> start extract extinit1

Sending START request to MANAGER ...
EXTRACT EXTINIT1 starting


GGSCI (db02) 12> info extract extinit1

EXTRACT    EXTINIT1  Initialized   2014-02-25 10:26   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 13> info extract extinit1

EXTRACT    EXTINIT1  Last Started 2014-02-25 10:59   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.T_PRODUCT_LINE
                     2014-02-25 10:59:03  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 14> info extract extinit1

EXTRACT    EXTINIT1  Last Started 2014-02-25 10:59   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.T_PRODUCT_LINE
                     2014-02-25 10:59:06  Record 801
Task                 SOURCEISTABLE

When the Extract shows the status STOPPED, we can check the target PRD.T_PRODUCT_LINE table via SQL Server 2012 Management Studio, and we find that 801 rows have been inserted into the table.
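A quick count from Management Studio would confirm this – a sketch (the catalog/schema qualification may differ in your environment):

SELECT COUNT(*) FROM PRD.T_PRODUCT_LINE;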

Note – in this method we do not need to start the replicat process on the target.

 

2) Oracle GoldenGate File to Replicat method

In this method we will create 3 replicat processes running in parallel, processing the trail files generated by the extract process.

The table has 74 million rows.

Create initial load extract

GGSCI (db02) 15> add extract extinit2 sourceistable
EXTRACT added.

GGSCI (db02) 3> edit params extinit2

extract extinit2
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809,  tcpbufsize 10485760, tcpflushbytes 10485760
rmtfile ./dirdat/te, maxfiles 999999, megabytes 400, purge
reportcount every 300 seconds, rate
TABLE PRD.AC_TRANSACTION_RACI;

As in the first method, we create the definitions file using DEFGEN and then copy the generated file to the dirdef directory on the target SQL Server GoldenGate software home.


oracle@db02:/u01/oracle/goldengate > ./defgen paramfile /u01/oracle/goldengate/dirprm/defgen.prm

Create three parallel replicat groups

GGSCI (DCV-RORSQL-N001) 39> add replicat repinit2  exttrail ./dirdat/te
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 40> add replicat repinit3  exttrail ./dirdat/te
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 41> add replicat repinit4  exttrail ./dirdat/te
REPLICAT added.



GGSCI (DCV-RORSQL-N001) 42> edit params repinit2


GGSCI (DCV-RORSQL-N001) 43> edit params repinit3


GGSCI (DCV-RORSQL-N001) 44> edit params repinit4


GGSCI (DCV-RORSQL-N001) 45> view params repinit2
replicat repinit2
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (1,3));


GGSCI (DCV-RORSQL-N001) 46> view params repinit3
replicat repinit3
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (2,3));


GGSCI (DCV-RORSQL-N001) 47> view params repinit4
replicat repinit4
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (3,3));

Start the initial load extract



GGSCI (db02) 4> start extract extinit2

Sending START request to MANAGER ...
EXTRACT EXTINIT2 starting

GGSCI (db02) 5> info extract extinit2

EXTRACT    EXTINIT2  Initialized   2014-02-25 11:27   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 6> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 11:56:51  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 9> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 11:57:31  Record 2312001
Task                 SOURCEISTABLE

While the initial load extract is running, start the three parallel replicat processes


GGSCI (DCV-RORSQL-N001) 48> start replicat repinit2

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT2 starting


GGSCI (DCV-RORSQL-N001) 49> start replicat repinit3

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT3 starting


GGSCI (DCV-RORSQL-N001) 50> start replicat repinit4

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT4 starting


GGSCI (DCV-RORSQL-N001) 51> info replicat repinit2

REPLICAT   REPINIT2  Last Started 2014-02-25 11:59   Status RUNNING
Checkpoint Lag       00:02:45 (updated 00:00:02 ago)
Process ID           5792
Log Read Checkpoint  File ./dirdat/te000000
                     2014-02-25 11:57:10.276246  RBA 4893789

While the 3 replicat processes are running, we can see that each is processing almost the same number of rows and that the initial load task has been distributed among the 3 parallel replicat processes


GGSCI (DCV-RORSQL-N001) 54> stats replicat repinit2 latest

Sending STATS request to REPLICAT REPINIT2 ...

Start of Statistics at 2014-02-25 12:02:46.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:45 ***
        Total inserts                                 100663.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              100663.00

End of Statistics.


GGSCI (DCV-RORSQL-N001) 55> stats replicat repinit3 latest

Sending STATS request to REPLICAT REPINIT3 ...

Start of Statistics at 2014-02-25 12:02:56.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:45 ***
        Total inserts                                 100071.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              100071.00

End of Statistics.


GGSCI (DCV-RORSQL-N001) 56> stats replicat repinit4 latest

Sending STATS request to REPLICAT REPINIT4 ...

Start of Statistics at 2014-02-25 12:03:02.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:47 ***
        Total inserts                                  98042.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               98042.00

End of Statistics.

We now see that the initial load extract has stopped, having extracted 74 million rows


GGSCI (db02) 14> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 12:17:53  Record 74104323
Task                 SOURCEISTABLE

 

3) Oracle GoldenGate File with SQL Server BULK INSERT

In this method we use the SQL Server 2012 BULK INSERT to process the text file which is generated by the GoldenGate extract process.

Create the initial load extract

Note the parameter used in the extract file – FORMATASCII, BCP

This parameter instructs Oracle GoldenGate to write the output to a text file which is compatible with the SQL Server BCP utility.


GGSCI (db02) 19> add extract extbcp sourceistable
EXTRACT added.

GGSCI (db02) 20> edit params extbcp

"/u01/oracle/goldengate/dirprm/extbcp.prm" 6 lines, 181 characters
extract extbcp
userid ggate_owner, password ggate
FORMATASCII, BCP
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809
rmtfile ./dirdat/myobjects.dat PURGE
TABLE GGATE_OWNER.MYOBJECTS;

Start the initial load extract

GGSCI (db02) 1> start extract extbcp

Sending START request to MANAGER ...
EXTRACT EXTBCP starting


GGSCI (db02) 2> info extract extbcp

EXTRACT    EXTBCP    Last Started 2014-02-25 12:37   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table GGATE_OWNER.MYOBJECTS
                     2014-02-25 12:37:05  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 3> !
info extract extbcp

EXTRACT    EXTBCP    Last Started 2014-02-25 12:37   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table GGATE_OWNER.MYOBJECTS
                     2014-02-25 12:37:06  Record 78165
Task                 SOURCEISTABLE

Create the initial load replicat

GGSCI (DCV-RORSQL-N001) 62> edit params repbcp


GGSCI (DCV-RORSQL-N001) 63> view params repbcp
targetdb sqlserver2012
GENLOADFILES  bcpfmt.tpl
SOURCEDEFS ./dirdef/source.def
extfile ./dirdat/myobjects.dat
assumetargetdefs
MAP GGATE_OWNER.MYOBJECTS, TARGET GGATE_OWNER.MYOBJECTS;

Start the replicat from the command line


D:\app\product\GoldenGate>replicat paramfile ./dirprm/repbcp.prm reportfile ./dirrpt/repbcp.rpt

***********************************************************************
               Oracle GoldenGate Delivery for SQL Server
Version 12.1.2.0.1 17597485 OGGCORE_12.1.2.0.T2_PLATFORMS_131206.0309
Windows x64 (optimized), Microsoft SQL Server on Dec  6 2013 12:44:54

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2014-02-25 12:41:14
***********************************************************************

Operating System Version:
Microsoft Windows , on x64
Version 6.2 (Build 9200: )

Process id: 1840

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2014-02-25 12:41:14  INFO    OGG-03059  Operating system character set identified as windows-1252.

2014-02-25 12:41:14  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.

2014-02-25 12:41:15  INFO    OGG-01552  Connection String: provider=SQLNCLI11;initial catalog=PRD;data source=DCV-RORSQL-N001;persist security info=false;integrated security=sspi.

2014-02-25 12:41:15  INFO    OGG-03036  Database character set identified as windows-1252. Locale: en_US.

2014-02-25 12:41:15  INFO    OGG-03037  Session character set identified as windows-1252.

2014-02-25 12:41:15  INFO    OGG-03528  The source database character set, as determined from the table definition file, is UTF-8.
Using following columns in default map by name:
  object_id, object_name, object_type

File created for BCP initiation: MYOBJECTS.bat
File created for BCP format:     MYOBJECTS.fmt

Load files generated successfully.

In SQL Server 2012 Management Studio, load the data into the SQL Server table via the BULK INSERT command:

  bulk insert [PRD].[GGATE_OWNER].[MYOBJECTS] from 'D:\app\product\GoldenGate\dirdat\myobjects.dat'
  with (
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '0x0a'
  );


(78165 row(s) affected)
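A quick sanity check of the load (a sketch):

SELECT COUNT(*) FROM [PRD].[GGATE_OWNER].[MYOBJECTS];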

GoldenGate change data capture and replication of BLOB and CLOB data


We will look at an example of GoldenGate replication of a table with a BLOB column, and at how INSERT and UPDATE statements involving BLOB data are handled by GoldenGate.

To illustrate this we create an APEX 4.2 application with a form and report based on the DOCUMENTS table, through which we upload and download documents. We will observe how changes to the BLOB column data are replicated in real time to the target database via GoldenGate change data capture.

Thanks to ACE Director Eddie Awad’s article, which helped me understand how APEX handles file uploads and downloads.
Read the article by Eddie.

On the source database we create the DOCUMENTS table and a sequence and trigger to populate the primary key column ID.

CREATE TABLE documents
(
   ID              NUMBER PRIMARY KEY
  ,DOC_CONTENT    BLOB
  ,MIME_TYPE       VARCHAR2 (255)
  ,FILENAME        VARCHAR2 (255)
  ,LAST_UPDATED    DATE
  ,CHARACTER_SET   VARCHAR2 (128)
);

CREATE SEQUENCE documents_seq;

CREATE OR REPLACE TRIGGER documents_trg_bi
   BEFORE INSERT
   ON documents
   FOR EACH ROW
BEGIN
   :new.id := documents_seq.NEXTVAL;
END;
/
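Before starting capture we would also ensure table-level supplemental logging is enabled for the table. A sketch, using the same credentials that appear in the parameter files below (note that the Extract also uses DDLOPTIONS ADDTRANDATA, which adds supplemental log data automatically for tables affected by replicated DDL):

GGSCI (vindi-a) 1> DBLOGIN USERID gg_owner@testdb PASSWORD gg_owner
GGSCI (vindi-a) 2> ADD TRANDATA GG_OWNER.DOCUMENTS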

Create the Extract and Replicat processes

GGSCI (vindi-a) 3> add extract ext9 tranlog begin now
EXTRACT added.

GGSCI (vindi-a) 4> add rmttrail /u01/app/oracle/product/st_goldengate/dirdat/xx extract ext9
RMTTRAIL added.

GGSCI (vindi-a) 5> edit params ext9
extract ext9
CACHEMGR CACHESIZE 8G
userid gg_owner@testdb password gg_owner
DDL include ALL
ddloptions  addtrandata, report
rmthost poc-strelis-vindi, mgrport 7810
rmttrail  /u01/app/oracle/product/st_goldengate/dirdat/xx
dynamicresolution
SEQUENCE GG_OWNER.*;
TABLE GG_OWNER.DOCUMENTS;
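Note that DDL INCLUDE ALL presupposes that the GoldenGate DDL support objects have already been installed in the source database. This is typically done once, as SYSDBA from the GoldenGate home, by running the standard scripts shipped with the installation (a sketch – the exact prompts and schema responses depend on the environment):

SQL> @marker_setup.sql
SQL> @ddl_setup.sql
SQL> @role_setup.sql
SQL> @ddl_enable.sql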

GGSCI (vindi-a) 1> start extract ext9

Sending START request to MANAGER ...
EXTRACT EXT9 starting

GGSCI (vindi-a) 2> info extract ext9

EXTRACT    EXT9      Last Started 2014-05-22 08:43   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:04:54 ago)
Process ID           17984
Log Read Checkpoint  Oracle Redo Logs
                     2014-05-22 08:38:44  Seqno 122, RBA 9039376
                     SCN 0.0 (0)

On Target GoldenGate 

GGSCI (vindi-a) 3> add replicat rep9 exttrail /u01/app/oracle/product/st_goldengate/dirdat/xx
REPLICAT added.

GGSCI (vindi-a) 4> edit params rep9
replicat rep9
assumetargetdefs
ddlerror default ignore
userid gg_owner@strelis password gg_owner
MAP GG_OWNER.DOCUMENTS ,TARGET GG_OWNER.DOCUMENTS;

GGSCI (vindi-a) 5> start replicat rep9

Sending START request to MANAGER ...
REPLICAT REP9 starting

GGSCI (vindi-a) 6> info replicat rep9

REPLICAT   REP9      Last Started 2014-05-22 08:43   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           17919
Log Read Checkpoint  File /u01/app/oracle/product/st_goldengate/dirdat/xx000000
                     First Record  RBA 0

Since we have configured DDL replication as well, we see that the DOCUMENTS table has also been created on the target database.

oracle@vind-a:/export/home/oracle $ sqlplus gg_owner/gg_owner@targetdb

SQL*Plus: Release 11.2.0.3.0 Production on Thu May 22 08:35:55 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc documents
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 DOC_CONTENT                                        BLOB
 MIME_TYPE                                          VARCHAR2(255)
 FILENAME                                           VARCHAR2(255)
 LAST_UPDATED                                       DATE
 CHARACTER_SET                                      VARCHAR2(128)

We now launch APEX to create our demo application. The steps are:

1) In Application Builder click on Create
2) Select Database
3) Click Add Page
4) Click Next
5) Accept the default value
6) Accept the default values
7) Click Create Application
8) Click the Page 1 link
9) From the Regions menu click Create
10) Select Form and click Next
11) Select Form on a Table or View
12) Select the DOCUMENTS table from the LOV and click Next
13) Enter the Page name and click Next
14) Select the primary key column of the DOCUMENTS table and click Next
15) The primary key of the table is populated via the sequence called by the trigger, so select Existing trigger and click Next
16) Select the columns to display on the form and click Next
17) Change the label of the Create button to Upload, hide the other buttons and click Next
18) Select the current page as the page to branch to and click Next
19) Click Create
20) Click Edit Page
21) Select the P1_DOC_CONTENT item and click Edit from the menu
22) In the Settings section of the page, add the table column names against the columns
23) Click Apply Changes
24) Click Run
25) Enter workspace or application login credentials
26) Click the Browse button and select the file to upload
27) Click Upload
 

 
In GoldenGate we check the Extract and Replicat stats and we can see the capture and apply of the change we just made
 

GGSCI (vind-a) 3> stats extract ext9 latest

Sending STATS request to EXTRACT EXT9 ...

Start of Statistics at 2014-05-22 09:01:44.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                         0.00
        Mapped operations                                  0.00
        Unmapped operations                                0.00
        Other operations                                   0.00
        Excluded operations                                0.00

Output to /u01/app/oracle/product/st_goldengate/dirdat/xx:

Extracting from GG_OWNER.DOCUMENTS_SEQ to GG_OWNER.DOCUMENTS_SEQ:

*** Latest statistics since 2014-05-22 08:59:16 ***
        Total updates                                      1.00
        Total discards                                     0.00
        Total operations                                   1.00

Extracting from GG_OWNER.DOCUMENTS to GG_OWNER.DOCUMENTS:

*** Latest statistics since 2014-05-22 08:59:16 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

End of Statistics.

 
Connect to the target database and see if the record has been inserted.
 
Note that columns like MIME_TYPE, LAST_UPDATED and FILENAME are automatically populated.
 

SQL> col filename format a60
SQL> col mime_type format a30
SQL> set linesize 120
SQL> select id,filename,mime_type from documents;

        ID FILENAME                                                     MIME_TYPE
---------- ------------------------------------------------------------ ------------------------------
         1 Consultant Profile - Gavin Soorma.doc                        application/msword

Note the size of the document

SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 532992

We will now add a new page to the application. The steps are:

1) Click Create Page
2) Select Form and click Next
3) Select Form on a Table with Report and click Next
4) Change the Region Title to Edit Documents and click Next
5) Select the table and click Next
6) Give a name for the tab of the new page we are creating and click Next
7) Select the columns to include in the report and click Next
8) Accept the default and click Next
9) Change the Region Title and click Next
10) Select the Primary key column and click Next
11) Select the columns to include in the form and click Next
12) Click Create
13) Click Run Page
14) Click the Edit icon
15) Click the Edit Page link at the bottom of the page
16) In the Settings section of the page, add the table column names
17) Click Apply Changes and then Run

We will now download the document from the table, edit it and upload it back into the database again:

1) Click on the Download link and save the document
2) Open the document and make some changes – we will delete the “Technical Skills” table from the document, save it and then upload it back again
3) Click on Browse and upload the document which we just downloaded and edited
4) Click Apply Changes

 

Oracle GoldenGate has applied this change, and we can see that the size of the document in the target database has decreased from 532992 bytes to 528896 bytes, since we deleted some content from the document.

Connect to the target database and issue the query

Previous:

SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 532992


Current:
SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 528896

We can see that the Replicat process, which had earlier applied the INSERT statement when the document was first uploaded, has now applied some UPDATE statements as well

GGSCI (vind-a) 1> stats replicat rep9 latest

Sending STATS request to REPLICAT REP9 ...

Start of Statistics at 2014-05-22 10:08:47.

Replicating from GG_OWNER.DOCUMENTS to GG_OWNER.DOCUMENTS:

*** Latest statistics since 2014-05-22 08:59:20 ***
        Total inserts                                      1.00
        Total updates                                      3.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   4.00

GoldenGate and Virtual Memory – CACHEMGR CACHESIZE and CACHEDIRECTORY


After a recent Oracle GoldenGate installation at a client site running on Solaris 11, we observed memory-related errors in the GoldenGate error log like the ones below, and the Extract processes were abending on startup.

ERROR OGG-01841 CACHESIZE TOO SMALL:
ERROR OGG-01843 default maximum buffer size (2097152) > absolute maximum buffer size (0)

The Oracle database alert log was also reporting quite a few “ORA-04030: out of process memory” errors.

The Solaris server had 64 GB of RAM but it seemed that GoldenGate was requiring 128 GB when the extract processes were started.

Let us see why, and also take a look at how GoldenGate manages memory.

The Oracle redo log files contain both committed and uncommitted changes, but GoldenGate replicates only committed transactions. So it needs a cache where it can store the operations of each transaction until it receives a commit or rollback for that transaction. This is particularly significant for large and long-running transactions.

This cache is a virtual memory pool, or global cache, shared by all the Extract and Replicat processes; sub-pools are allocated for each Extract log reader thread or Replicat trail reader thread, as well as dedicated sub-pools for holding large data like BLOBs.

Documentation states: “While the actual amount of physical memory that is used by any Oracle GoldenGate process is controlled by the operating system, the cache manager keeps an Oracle GoldenGate process working within the soft limit of its global cache size, only allocating virtual memory on demand.”

The CACHEMGR parameter controls the amount of virtual memory and temporary disk space that is available for caching uncommitted transaction data.

The CACHEMGR CACHESIZE parameter controls the virtual memory allocation; from GoldenGate 11.2 onwards, on a 64-bit system the default CACHESIZE is 64 GB.

While the CACHESIZE parameter controls the virtual memory, if that limit is exceeded GoldenGate will temporarily swap data to disk, by default in the dirtmp sub-directory of the Oracle GoldenGate installation directory.

The dirtmp location will contain the .cm files. The cache manager assumes that all of the free space on the file system is available, and will use it to create the .cm files until the file system becomes full. To regulate this we can use the CACHEMGR CACHEDIRECTORY parameter to assign both a directory location where these .cm files will be created and a size limit.

So the usage of these parameters are:

CACHEMGR CACHESIZE {size}
CACHEMGR CACHEDIRECTORY {path} {size}
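For example, an Extract parameter file could cap both the virtual memory and the disk spill area. A sketch (the directory path and sizes here are purely illustrative):

CACHEMGR CACHESIZE 8G CACHEDIRECTORY /u02/ogg_cache 100G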

The CACHESIZE, as mentioned earlier, defaults to 64 GB on 64-bit systems, and we see 128 GB being used because the documentation states:

“The CACHESIZE value will always be a power of two, rounded down from the value of PROCESS VM AVAIL FROM OS.”

In our case we had set the extract and replicat processes to be started automatically by the Manager on restart. These processes start simultaneously, so when the first extract process started it momentarily grabbed 128 GB of virtual memory and there was none left for the other extract processes to start.

So we used the CACHESIZE parameter to set an upper limit on the machine virtual memory that GoldenGate can use, by adding this parameter to each of the extract parameter files:

CACHEMGR CACHESIZE 8G

GoldenGate Bounded Recovery


The Oracle online redo log files contain both committed and uncommitted transactions, but Oracle GoldenGate writes only committed transactions to the trail files. So the question is: what happens to transactions which are not committed, and especially to uncommitted long-running transactions?

Long-running transactions in batch jobs can sometimes take several hours to complete. Until such a transaction is committed, how will GoldenGate handle the situation where the extract was reading from a particular online redo log file when the transaction started, and over time other DML activity in the database caused that online redo log file to be archived – and then perhaps that archived log file is no longer available on disk, because the nightly RMAN backup job deleted the archived log files after the backup completed?

GoldenGate has two kinds of recovery: Normal Recovery, where the extract process needs all the archived log files starting from its current recovery read checkpoint, and Bounded Recovery, which is what we will discuss here with an example.

In very simple terms, an extract has a Bounded Recovery (BR) interval, 4 hours by default, and at every BR interval the extract process makes a Bounded Recovery checkpoint. At each checkpoint GoldenGate checks for any long-running transactions older than the BR interval and writes the current state and data of the extract to disk – by default in the BR sub-directory of the GoldenGate software home. This continues at every BR interval until the long-running transaction is committed or rolled back.

In our extract parameter file we use the BR BRINTERVAL parameter:

BR BRINTERVAL 20M
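The BR parameter can also relocate the checkpoint files to a different directory with the BRDIR option. A sketch (the path is illustrative):

BR BRDIR /u02/ogg_br, BRINTERVAL 20M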

 

This is what the official documentation states:

The use of disk persistence to store and then recover long-running transactions enables Extract to manage a situation that rarely arises but would otherwise significantly (adversely) affect performance if it occurred. The beginning of a long-running transaction is often very far back in time from the place in the log where Extract was processing when it stopped. A long-running transaction can span numerous old logs, some of which might no longer reside on accessible storage or might even have been deleted. Not only would it take an unacceptable amount of time to read the logs again from the start of a long-running transaction but, since long-running transactions are rare, most of that work would be the redundant capture of other transactions that were already written to the trail or discarded. Being able to restore the state and data of persisted long-running transactions eliminates that work.

 

In this example we will see how BR works by setting the BR interval of the extract to a low value of 20 minutes and performing an INSERT statement which we do not commit. When we first issue the INSERT statement, the extract process is reading from a particular online redo log (sequence 14878).

We switch some redo log files to simulate activity in the database and then back up and delete archived log sequence 14878. We can see that at every 20-minute interval a Bounded Recovery checkpoint is performed and the information the extract needs about the long-running transaction is written to the BR directory on disk. Even though the archived log file is no longer present on disk, the extract process does not need it: it uses the Bounded Recovery data in the BR directory to write the data to the trail files when the long-running transaction is finally committed.

We issue this INSERT statement and do not commit the transaction – this is our test long-running transaction.

 

SQL> insert into myobjects
select object_id,object_name,object_type from dba_objects;

75372 rows created.

 

Check the online redo log sequence the extract is currently reading from – in this case it is 14878

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 2> info ext1

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:10:21 Seqno 14878, RBA 5936128
SCN 0.9137531 (9137531)

 

Using the SEND EXTRACT SHOWTRANS command, we can identify any open or in-progress transactions

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 4> send ext1 showtrans

Sending SHOWTRANS request to EXTRACT EXT1 …

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 14878, RBA 116752

————————————————————
XID: 10.16.1533
Items: 75372
Extract: EXT1
Redo Thread: 1
Start Time: 2014-06-21:18:10:14
SCN: 0.9137521 (9137521)
Redo Seq: 14878
Redo RBA: 116752
Status: Running

 

The INFO EXTRACT SHOWCH command gives us more detail about the extract checkpoints: basically, the position in the source (redo/transaction logs) from which it is reading and the position in the target (trail file) to which it is currently writing.

It shows us the redo log file (or archived log file) which the extract first read when it started up (the Startup Checkpoint) – sequence 14861.

It shows us the position of the oldest unprocessed transaction in the online/archived redo log files (the Recovery Checkpoint) – sequence 14878 at SCN 9137521.

Finally, it shows us the current position in the online redo log file where the extract last read a record (the Current Checkpoint) – still sequence 14878, but the SCN has advanced to 9137612 because of other activity in the database.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 5> info ext1 showch

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:11:41 Seqno 14878, RBA 5977088
SCN 0.9137612 (9137612)

Current Checkpoint Detail:

Read Checkpoint #1

Oracle Redo Log

Startup Checkpoint (starting position in the data source):
Thread #: 1
Sequence #: 14861
RBA: 5918224
Timestamp: 2014-06-21 16:49:33.000000
SCN: 0.9129707 (9129707)
Redo File: /u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14861_9tbo7pys_.arc

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 14878
RBA: 5977088
Timestamp: 2014-06-21 18:11:41.000000
SCN: 0.9137612 (9137612)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Write Checkpoint #1

GGS Log Trail

Current Checkpoint (current write position):
Sequence #: 3
RBA: 8130790
Timestamp: 2014-06-21 18:11:44.414364
Extract Trail: ./dirdat/zz
Trail Type: RMTTRAIL

 

After some time (more than 20 minutes) we issue the same SHOWCH command; let us look at the differences in the output compared with the previous SHOWCH.

We can see that because of database activity the extract is now reading from the online redo log sequence 14884 (earlier it was 14878).

But what has remained unchanged is the Recovery Checkpoint – the oldest redo log sequence that the extract would need to access when the long-running transaction currently in progress is finally committed.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 2> info ext1 showch

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:40:34 Seqno 14884, RBA 72704
SCN 0.9139491 (9139491)

Current Checkpoint Detail:

Read Checkpoint #1

Oracle Redo Log

Startup Checkpoint (starting position in the data source):
Thread #: 1
Sequence #: 14861
RBA: 5918224
Timestamp: 2014-06-21 16:49:33.000000
SCN: 0.9129707 (9129707)
Redo File: /u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14861_9tbo7pys_.arc

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 14884
RBA: 72704
Timestamp: 2014-06-21 18:40:34.000000
SCN: 0.9139491 (9139491)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

 

We also see important information related to the Bounded Recovery (BR) Checkpoint via the INFO EXTRACT SHOWCH command.

As mentioned earlier, we had changed the BR interval for this example from the default value of 4 hours to 20 minutes, so at every BR interval (in this case 18:07, 18:27, 18:47 and so on) information about the current state and data of the extract will be written to disk in the BR sub-directory.

So we see that at the 18:27 BR interval, the BR checkpoint had persisted information from redo log sequence 14881 to disk. So if there is a failure or if the extract is restarted, it will not need any redo log files or archive log files prior to sequence 14881.

 

BR Previous Recovery Checkpoint:
Thread #: 0
Sequence #: 0
RBA: 0
Timestamp: 2014-06-21 18:07:35.982719
SCN: Not available
Redo File:

BR Begin Recovery Checkpoint:
Thread #: 0
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File:

BR End Recovery Checkpoint:
Thread #: 1
Sequence #: 14881
RBA: 139776
Timestamp: 2014-06-21 18:27:38.000000
SCN: 0.9138688 (9138688)
Redo File:

 

We can see that some files have been created in the BR directory for extract EXT1

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 4> info ext1

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:41:35 Seqno 14884, RBA 131072
SCN 0.9139583 (9139583)

GGSCI (kens-orasql-001-dev.corporateict.domain)

GGSCI (kens-orasql-001-dev.corporateict.domain) 3> shell ls -l ./BR/EXT1

total 20
-rw-r—– 1 oracle oinstall 65536 Jun 21 18:27 CP.EXT1.000000015
drwxr-x— 2 oracle oinstall 4096 Jun 19 17:07 stale

 

So what happens if we delete the old archive log sequence 14878 from disk? Since the BR checkpoint has already persisted the information about the long-running transaction contained in sequence 14878 to disk, the extract should not need to access this older archive log file.

To test this we take a backup of archive log sequence 14878 and then delete it. Remember this was the redo log sequence that was current when the long-running transaction first started.

 

RMAN> backup archivelog sequence 14878 delete input;

Starting backup at 21-JUN-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=14878 RECID=30497 STAMP=850846396
channel ORA_DISK_1: starting piece 1 at 21-JUN-14
channel ORA_DISK_1: finished piece 1 at 21-JUN-14
piece handle=/u01/app/oracle/fast_recovery_area/GGATE1/backupset/2014_06_21/o1_mf_annnn_TAG20140621T234659_9tcb7msp_.bkp tag=TAG20140621T234659 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14878_9tbpowlm_.arc RECID=30497 STAMP=850846396
Finished backup at 21-JUN-14

 

Let us now finally commit the long-running transaction.

 

SQL> insert into myobjects
2 select object_id,object_name,object_type from dba_objects;

75372 rows created.

SQL> commit;

Commit complete.

 

In the Extract EXT1 report, we can see information about the long-running transaction as well as the Bounded Recovery Checkpoint, and we can see that every 20 minutes the redo log sequence at which the Bounded Recovery Checkpoint is taken moves forward.

 

2014-06-21 18:17:42 WARNING OGG-01027 Long Running Transaction: XID 10.16.1533, Items 75372, Extract EXT1, Redo Thread 1, SCN 0.9137521 (9137521), Redo Seq #14878, R
edo RBA 116752.

2014-06-21 18:27:41 INFO OGG-01971 The previous message, 'WARNING OGG-01027', repeated 1 times.

2014-06-21 18:27:41 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14878, RBA: 116752, SCN: 0.9137521 (9137521), Timest
amp: 2014-06-21 18:10:14.000000, end=SeqNo: 14881, RBA: 139776, SCN: 0.9138688 (9138688), Timestamp: 2014-06-21 18:27:38.000000, Thread: 1.

2014-06-21 18:47:50 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14885, RBA: 144912, SCN: 0.9139983 (9139983), Timest
amp: 2014-06-21 18:47:47.000000, Thread: 1, end=SeqNo: 14885, RBA: 145408, SCN: 0.9139983 (9139983), Timestamp: 2014-06-21 18:47:47.000000, Thread: 1.

2014-06-21 19:07:59 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14889, RBA: 176144, SCN: 0.9141399 (9141399), Timest
amp: 2014-06-21 19:07:56.000000, Thread: 1, end=SeqNo: 14889, RBA: 176640, SCN: 0.9141399 (9141399), Timestamp: 2014-06-21 19:07:56.000000, Thread: 1.

 

So a point to keep in mind:

If we are using the Bounded Recovery interval with the default value of 4 hours, then ensure that we keep at least the past 8 hours of archive log files on disk to cater for any long-running transactions.
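
As a rough sketch, an RMAN housekeeping routine honouring that window could look like this (the deletion policy and the 8-hour cutoff here are illustrative assumptions – adjust to your own backup strategy):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 8/24';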

GoldenGate Director Security


The GoldenGate Director (Server and Client) is part of the Oracle GoldenGate Management pack suite of products.

Let us see how security is managed in the Director.

We launch the Director Administration tool on Unix via the run-admin.sh shell script.

If we are using Oracle WebLogic Server 12c and above the default admin user is ‘diradmin’ and for other releases it is ‘admin’.

When we create a user via the Director Admin tool it creates a WebLogic domain user in the background, and we will see this in the example when we connect using the WebLogic Administration Console.
 
 


   

After creating a user we then have to create a Data Source and here is where we define the security layer.

A Data Source is essentially where we define the connection details to a particular instance of GoldenGate like the manager port and host where the manager is running, the GoldenGate version and operating system and also the database username and password used by the GoldenGate schema.

In the Access Control section of the interface screen, we have a few options.

If we leave the Owner field blank, then it means that the Data Source in the Director Client will be visible as well as manageable by all other admin users.

If we explicitly define an owner for the Data Source by selecting one of the users we had earlier created (or the default out of the box users like diradmin or admin), then the Data Source in the Director Client will only be visible to that particular user. If another user connects to Director Client, they will not see that Data Source.

The next option is to define an owner for the Data Source and tick the Host is Observable check box. That means that users other than the owner will be able to see the Data Source in Director Client and will be able to see the extract and replicat processes associated with that Data Source, but will not be able to perform any administrative activity like starting or stopping the extract/replicat, modifying parameter files or even using the GGSCI interface to connect to the GoldenGate instance associated with that particular Data Source.
 


 

What happens if we want more fine-grained access control in Director security – to control which Data Sources are visible as well as manageable by which Director Admin users? We do this at the WebLogic end of things. Remember, when we install the GoldenGate Director we need an existing WebLogic Server environment; a domain for GoldenGate Director is created and managed by that WebLogic Server.

We have two admin users, usera and userb, which we have created using the Director Admin utility. We do not want usera to be able to perform any administrative tasks in the GoldenGate environment via the Director Client – usera should just be able to view the environment – while userb has full access.

We launch the WebLogic Server Administration Console (note the out-of-the-box username and password is weblogic).

If we click on the Security Realms link, we see that the installation has created a realm called ggRealm.
 


  

Click on the ggRealm link and expand the Users and Groups tab. We will see a list of WebLogic users. We had earlier created admin users (usera and userb) in the Director Administration utility, and we see that corresponding WebLogic Server users have also been created.

Let us see the groups the user usera is currently a member of – in this case the only group selected for usera is the group User.
 

Now connect as usera using the Director Client.
 


   

We can see that while the Data Sources are visible, they have a lock symbol attached to them meaning that usera can only see the processes associated with the Data Source when he drags the data source to the Diagram panel. He cannot create, modify, start or stop any of the extract or replicat processes associated with that Data Source.

Even in the GGSCI tab, we see that he cannot connect to any of the associated GoldenGate instances as none are available.
 


   

Go back to the WebLogic Administration Console and make userb a member of the Admin group.
 


   

Now when we connect as userb in the Director Client, all the Data Sources are visible and none are locked, and in the GGSCI tab drop-down list we can connect to all the Data Sources via GGSCI.
 


 

Platform Migration and Database Upgrade from Oracle 9i to Oracle 11g using GoldenGate


Let us look at an example of using Oracle Golden Gate to achieve a near zero (not zero!) downtime for performing an upgrade from Oracle 9i (9.2.0.5) to Oracle 11g (11.2.0.4) as well as a platform migration from Solaris SPARC to Linux X86-64.

With no downtime for the application we have performed the following tasks:

  •  Installed Oracle GoldenGate on both source and target servers. (On source for the Oracle 9i environment we are using OGG 11.1.1.1.4 and on the target Oracle 11g environment we are using OGG 11.2.1.0.3)
  • Supplemental logging has been turned on at the database level for the source database
  • Supplemental logging has been enabled at the table level using the ADD TRANDATA or ADD SCHEMATRANDATA GoldenGate commands
  • Extract DDL capture has been enabled on the source
  • Configured the Manager process on both source and target
  • Created the Extract process on source
  • Created the Replicat process on target
  • Installed the 11.2.0.4 Oracle software and created the target 11g database with the same tablespaces and database parameters as the source database. Remember some parameters in Oracle 9i have been deprecated in 11g and certain new parameters have been added.

We need to be able to capture all changes in the database while the Oracle 9i database export is in progress. So we will start the capture Extract process or processes BEFORE we start the full database export.

We also then use the DBMS_FLASHBACK package to obtain the reference SCN on which the consistent database export will be based. Changes which occur in the database after this SCN will not be captured in the export dump file, but will be captured by the GoldenGate Extract process on the source and applied by the Replicat process on the target.

Let us look at an example.

We have created a user called MIG_TEST and created some objects in this schema.

SQL> create user mig_test
  2  identified by mig_test
  3  default tablespace users
  4  temporary tablespace temp;

User created.

SQL> grant dba to mig_test;

Grant succeeded.

SQL> conn  mig_test/mig_test
Connected.
SQL> create table mytables as select * from all_tables;

Table created.

SQL> create table myindexes as select * from all_indexes;

Table created.




SQL> alter table mytables
  2  add constraint pk_mytables primary key (owner,table_name);

Table altered.

SQL> alter table myindexes
  2  add constraint pk_myindexes primary key (owner,index_name);

Table altered.


SQL> create table myobjects as select * from all_objects;

Table created.


SQL>  alter table myobjects
  2   add constraint pk_myobjects primary key (owner,object_name,object_type);

Table altered.

Obtain the current SCN on the source database and perform the full database export

SQL> SELECT dbms_flashback.get_system_change_number as current_scn
     from dual;

CURRENT_SCN
-----------
      63844


$ exp file=/app/oracle/oradump/testdb/exp/exp_mig.dmp full=y flashback_scn=63844 log=exp_mig.log

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Username: system
Password:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms

....

............

While the export is in progress, we make some changes to the objects in the MIG_TEST schema

SQL> update myobjects set object_type ='INDEX' where owner='MIG_TEST';

6 rows updated.

SQL> commit;

Commit complete.


SQL> delete mytables;

465 rows deleted.

SQL> commit;

Commit complete.


We can see that the MIG_TEST tables have been exported. But note that the last changes we made will not be part of the export, as they occurred in the database after SCN 63844, the SCN the consistent export was based on.

So the MYTABLES table still has the 465 rows included in the export dump file even though we just deleted all the rows from the table.

 about to export MIG_TEST's tables via Conventional Path ...
. . exporting table                      MYINDEXES        474 rows exported
. . exporting table                      MYOBJECTS       5741 rows exported
. . exporting table                       MYTABLES        465 rows exported

On the 11.2.0.4 target database we perform the full database import.

Note the MYTABLES table still has 465 rows as we issued the DELETE statement after the export was started in the source database

. importing MIG_TEST's objects into MIG_TEST
. . importing table                    "MYINDEXES"        474 rows imported
. . importing table                    "MYOBJECTS"       5742 rows imported
. . importing table                     "MYTABLES"        465 rows imported

After the import has completed we now start the Replicat process on the target

Note we are using the AFTERCSN clause to tell the replicat to only apply those changes on the target which were generated on the source database after the SCN 63844

GGSCI (LINT0004) 4>  start replicat repmig aftercsn  63844

Sending START request to MANAGER ...
REPLICAT REPMIG starting


GGSCI (LINT0004) 5> info replicat repmig

REPLICAT   REPMIG    Last Started 2014-12-17 14:07   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint  File ./dirdat/cc000000
                     2014-12-17 13:54:40.150229  RBA 2586343

We can see that the replicat process has applied the required UPDATE and DELETE statements which were captured in the OGG trail file.

GGSCI (LINT0004) 6> stats replicat repmig latest

Sending STATS request to REPLICAT REPMIG ...

Start of Statistics at 2014-12-17 14:08:09.

Replicating from MIG_TEST.MYOBJECTS to MIG_TEST.MYOBJECTS:

*** Latest statistics since 2014-12-17 14:07:33 ***
        Total inserts                                      0.00
        Total updates                                      6.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   6.00

Replicating from MIG_TEST.MYTABLES to MIG_TEST.MYTABLES:

*** Latest statistics since 2014-12-17 14:07:33 ***
        Total inserts                                      0.00
        Total updates                                      0.00
        Total deletes                                    465.00
        Total discards                                     0.00
        Total operations                                 465.00

 

We will now verify that there is no lag in the Replicat process and the source and target databases are in sync.

At this stage the outage will commence for the application.

We stop the extract and replicat processes, disconnect the application users who were connected to the original 9i database, and point the application to connect to the new Oracle 11g database.

The duration of the application outage will depend on how fast we can perform the disconnection of the users and reconfiguration of the application to connect to the upgraded database.
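
A minimal GGSCI sketch of the pre-cutover sequence (the extract group name extmig is a hypothetical placeholder; repmig is the replicat created above):

LAG EXTRACT extmig
LAG REPLICAT repmig
INFO ALL

Once both LAG commands report 'At EOF, no more records to process', stop the processes:

STOP EXTRACT extmig
STOP REPLICAT repmig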


Oracle Goldengate 12c on DBFS for RAC and Exadata


Let us take a look at the process of configuring Goldengate 12c to work in an Oracle 12c Grid Infrastructure RAC or Exadata environment using DBFS on Linux x86-64.

Simply put, the Oracle Database File System (DBFS) is a standard file system interface on top of files and directories that are stored in database tables as LOBs.

In one of my earlier posts we had seen how we can configure Goldengate in an Oracle 11gR2 RAC environment using ACFS as the shared location.

Until recently Exadata did not support using ACFS but ACFS is now supported on version 12.1.0.2 of the RAC Grid Infrastructure.

In this post we will see how the Oracle DBFS (Database File System) will be setup and configured and used as the shared location for some of the key Goldengate files like the trail files and checkpoint files.

In summary the broad steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Create and mount the DBFS filesystem
4) Create symbolic links for the GoldenGate software directories dirchk, dirpcs, dirprm, dirdat and BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit it according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register GoldenGate with the bundled agents using the agctl utility

Install and Configure FUSE

Using the following command check if FUSE has been installed:

lsmod | grep fuse

FUSE can be installed in a couple of ways – either via the Yum repository or using the RPM’s available on the OEL software media.

Using Yum:

yum install kernel-devel
yum install fuse fuse-libs

Via RPMs:

If installing from the media, these are the RPMs required:

kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm

A group named fuse must be created and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.

For example, if the OS user is ‘oracle’, then we use the usermod command to modify the secondary group membership for the oracle user. It is important to ensure we do not drop any groups the user is already a member of, since usermod -G replaces the existing secondary group list.

# /usr/sbin/groupadd fuse

# usermod -G dba,fuse oracle

One of the mount options which we will use is called “allow_other” which will enable users other than the user who mounted the DBFS file system to access the file system.

The /etc/fuse.conf needs to have the “user_allow_other” option as shown below.

# cat /etc/fuse.conf
user_allow_other

chmod 644 /etc/fuse.conf

Important: Ensure that the LD_LIBRARY_PATH variable is set and includes the path to $ORACLE_HOME/lib; otherwise we will get an error when we try to mount DBFS using the dbfs_client executable.
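
For example, a minimal sketch assuming the RDBMS home used elsewhere in this post:

export ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH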

Create the DBFS tablespaces and mount the DBFS

If the source database used by the GoldenGate Extract is running on RAC or hosted on Exadata, then we will create ONE tablespace for DBFS.

If the target database where the Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, with each tablespace having different logging and caching settings – typically one tablespace will be used for the GoldenGate trail files and the other for the GoldenGate checkpoint files.

If using Exadata then typically an ASM disk group called DBFS_DG will already be available for us to use; otherwise, on a non-Exadata platform, we will create a separate ASM disk group for holding DBFS files.

Note that since we will be storing GoldenGate trail files on DBFS, a best practice is to allocate enough disk/tablespace space to retain at least 12 hours of trail files. We need to keep that in mind when we create the ASM disk group or the DBFS tablespace.

CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Create the DBFS user

CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd 
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, create table, create view, 
create procedure, dbfs_role TO dbfs_user; 


Create the DBFS Filesystem

To create the DBFS filesystem we connect as the DBFS_USER Oracle user account and either run the dbfs_create_filesystem.sql or dbfs_create_filesystem_advanced.sql script located under $ORACLE_HOME/rdbms/admin directory.

For example:

cd $ORACLE_HOME/rdbms/admin 

sqlplus dbfs_user/dbfs_pswd 


SQL> @dbfs_create_filesystem dbfs_ogg_big  gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big  gg_source
      nocompress nodeduplicate noencrypt non-partition 

Where …
o dbfs_ogg_big: tablespace for the DBFS database objects
o gg_source: filesystem name, this can be any string and will appear as a directory under the mount point

If we were configuring DBFS on the GoldenGate target or Replicat side of things, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files, because of the sequential reading and writing nature of the trail files.

For the checkpoint files on the other hand it is recommended to use CACHING and LOGGING attributes instead.

The example shown below illustrates how we can modify the LOB attributes (assuming we have created two DBFS tablespaces).

SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%'; 

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES



SQL> ALTER TABLE dbfs_user.T_DBFS_SM 
     MODIFY LOB (FILEDATA) (CACHE LOGGING); 


SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%';  

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES


As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).


# cd /mnt 
# mkdir DBFS 
# chown oracle:oinstall DBFS/

Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).

In our 2 node RAC cluster for example these are entries we will make for the ORCL RAC database.

Node A

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl1)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
      )
  (CONNECT_DATA=(SID=orcl1))
)

Node B

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl2)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
      )
  (CONNECT_DATA=(SID=orcl2))
)


 

We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_mount command. We can either store the password in a text file or we can use Oracle Wallet to encrypt and store the password.

In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.

For example:


echo dbfs_pswd > passwd.txt 

nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/DBFS < ~/passwd.txt &

After the DBFS filesystem is mounted successfully we can now see it via the ‘df’ command as shown below. Note in this case we had created a tablespace of 5 GB for DBFS, and the space allocated and used reflects that.


$  df -h |grep dbfs

dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs

The command used to unmount the DBFS filesystem would be:

fusermount -u /mnt/dbfs

Create links from Oracle Goldengate software directories to DBFS

Create the following directories on DBFS

$ mkdir -p /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs 
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR

Make the symbolic links from Goldengate software directories to DBFS

cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR

For example :

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat

Also copy the out-of-the-box jagent.prm file located in the dirprm directory:

[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/goldengate/dirprm

Note – in the Extract parameter file(s) we need to include the BR parameter pointing to the DBFS stored directory

BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
 

Create the Application VIP

Typically the GoldenGate source and target databases will be located outside the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, which is a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to another surviving node in the event of a RAC (or Exadata compute) node failure.

Run the appvipcfg command to create the Application VIP as shown in the example below.


$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root

We have to assign an unused IP address to the Application VIP. We run the following command to identify the value to use for the network parameter as well as the subnet for the VIP.

$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet

NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

As root give the Oracle Database software owner permissions to start the VIP.

$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x 

As the Oracle database software owner start the VIP

$GRID_HOME/bin/crsctl start resource gg_vip_source

Verify the status of the Application VIP


$GRID_HOME/bin/crsctl status resource gg_vip_source

 

Download the mount-dbfs.sh script from MOS

Download the mount-dbfs.sh script from MOS note 1054431.1.

Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:

# dos2unix /tmp/mount-dbfs.sh

Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.

Next, make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_pswd

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl

Create the DBFS Cluster Resource

Before creating the Cluster Resource for DBFS, test the mount-dbfs.sh script:

$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE

$ ./mount-dbfs.sh stop

As the Grid Infrastructure owner, create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.

This script will create a Cluster Managed Resource called dbfs_mount by calling the Action Script mount-dbfs.sh which we had created earlier.

Edit the following variables in the script as shown below:

ACTION_SCRIPT
RESNAME
DEPNAME ( this can be the Oracle database or a database service)
ORACLE_HOME

#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
STOP_DEPENDENCIES='hard($DEPNAME)',\
SCRIPT_TIMEOUT=300"

Execute the script – it should produce no output.

./add-dbfs-resource.sh

 

Download and Install the Oracle Grid Infrastructure Bundled Agent

Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced which include predefined clusterware resources for applications like Siebel and GoldenGate.

The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.

The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.

Download the latest version of the agent (6.1) from the URL below:

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

The downloaded file will be xagpack_6.zip.

There is already an xag/bin directory with the agctl executable under the $GRID_HOME directory. We need to install the new bundled agent in a separate directory and ensure the $PATH includes the new agent's bin directory ahead of $GRID_HOME/xag/bin.

Unzip the xagpack_6.zip in a temporary location on one of the RAC nodes.

To install the Oracle Grid Infrastructure Agents we run the xagsetup.sh script as shown below:

xagsetup.sh --install --directory <install_directory> [{--nodes <node_list> | --all_nodes}]

Register Goldengate with the bundled agents using agctl utility

Using agctl utility create the GoldenGate configuration.

Ensure that we are running agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory or ensure that the $PATH variable has been amended as described earlier.

/home/oracle/xagent/bin/agctl add goldengate gg_source --gg_home /u03/app/oracle/goldengate \
--instance_type source \
--nodes rac1,rac2 \
--vip_name gg_vip_source \
--filesystems dbfs_mount --databases ora.orcl.db \
--oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
--monitor_extracts ext1,extdp1
 

Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop the GoldenGate processes. The agctl command will start the Manager process, which in turn will start the other processes like Extract, Data Pump and Replicat if we have configured them for automatic restart.
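
For completeness, a minimal sketch of the Manager parameters that provide that automatic start/restart behaviour (the port and retry values here are illustrative):

-- mgr.prm
PORT 7809
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5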

Let us look at some examples of using agctl.

Check the Status – note the DBFS filesystem is also mounted currently on node rac2

$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2


$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

Stop the Goldengate environment

$ ./agctl stop goldengate gg_source 
$ ./agctl status goldengate gg_source
Goldengate  instance ' gg_source ' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18

Start the Goldengate environment – note the resource has relocated to node rac1 from rac2 and the Goldengate processes on rac2 have been stopped and started on node rac1.

$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22

We can also see that the agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.

[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0

Let's test the whole thing!

Now that the GoldenGate resources are running on node rac1, let us see what happens when we reboot that node to simulate a node failure while GoldenGate is up and running and the Extract and Data Pump processes are running on the source.

AGCTL and Clusterware will relocate all the GoldenGate resources, the VIP and DBFS to the other node seamlessly, and we see that the Extract and Data Pump processes have been automatically started on node rac2.

[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
[root@rac1 ~]#  (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!

Connect to the surviving node rac2 and check ……

[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08

Check the Cluster Resource ….

oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

GoldenGate 12c (12.2) New Features


At the recent Oracle Open World 2015 conference I was fortunate to attend a series of very informative presentations on Oracle GoldenGate from senior members of the Product Development team.

Among them was the presentation titled GoldenGate 12.2 New Features Deep Dive which is now available for download via the official OOW15 website.

While no official release date was announced for Goldengate 12.2, the message was being communicated that the release was going to happen ‘very soon’.

So while we eagerly wait for the official product release, here are some of the new 12.2 features which we can look forward to.

 

No more usage of the SOURCEDEFS and ASSUMETARGETDEFS parameters – Metadata included as part of the Trail File

In earlier versions if the structure of the table between the source and target database was different in terms of column names, data types and even column positions (among other things), we had to create a flat file which contained the table definitions and column mapping via the DEFGEN utility. Then we had to transfer this file to the target system.

If we used the parameter ASSUMETARGETDEFS, the assumption was that the internal structure of the target tables was the same as the source – which was not always the case – and we encountered issues.

Now in 12.2, GoldenGate trail files are self-describing. Metadata called a Table Definition Record (TDR) is included in the trail file before the first occurrence of DML on a particular table, and this TDR contains the table and column definitions – the column number, data type, column length and so on.

For new installations using the GoldenGate 12.2 software, metadata is automatically populated in the trail files by default. For existing installations we can specify FORMAT RELEASE 12.2 on the trail, and then any SOURCEDEFS or ASSUMETARGETDEFS parameters are no longer required (they are ignored).
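
For example, a one-line sketch of setting the trail format explicitly on a pump's remote trail (the trail name is reused from a later example in this post):

RMTTRAIL ./dirdat/bsstg/rt, FORMAT RELEASE 12.2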

 

Automatic Heartbeat Table

In earlier versions, one of the recommendations to monitor lag was to create a heartbeat table.

Now in 12.2, there is a built-in mechanism to monitor replication lag: a new GGSCI command called ADD HEARTBEATTABLE.

ADD HEARTBEATTABLE will automatically create the heartbeat tables and views, as well as database jobs which update the heartbeat tables every 60 seconds.

One of the views created is called GG_LAG and it contains columns like INCOMING_LAG which will show the period of time between a remote database generating heartbeat and a local database receiving heartbeat.

Similarly to support an Active-Active Bi-Directional GoldenGate configuration, there is also a column called OUTGOING_LAG which is the period of time between local database generating heartbeat and remote database receiving heartbeat.

The GG_HEARTBEAT table is one of the main tables on which other heartbeat views are built and it will contain lag information for each component – Extract, Pump as well as Replicat. So we can quite easily identify where the bottleneck is when faced with diagnosing a GoldenGate performance issue.

Historical heartbeat and lag information is also maintained in the GG_LAG_HISTORY and GG_HEARTBEAT_HISTORY tables.
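
A minimal sketch of setting this up and checking lag (the DBLOGIN alias oggadmin is a hypothetical placeholder; INCOMING_LAG and OUTGOING_LAG are the GG_LAG columns described above, queried in the GoldenGate administration schema):

GGSCI> DBLOGIN USERIDALIAS oggadmin
GGSCI> ADD HEARTBEATTABLE

SQL> select incoming_lag, outgoing_lag from gg_lag;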

 

Parameter Files – checkprm , INFO PARAM, GETPARAMINFO

A new utility is available in 12.2 called checkprm which can be used to validate parameter files before they are deployed.

The INFO PARAM command will give us a lot of information about a particular parameter – like its default value and the valid range of values. It is like accessing the online documentation from the GGSCI command line.

When a process like a replicat or extract is running, we can use the SEND [process] GETPARAMINFO command to identify the runtime parameters – not only the parameters included in the process parameter file, but also any other parameters the process has accessed which are not included in the parameter file. Sometimes we are not aware of the many default parameters a process uses, and this command shows this information in real time while the extract, replicat or manager is up and running.
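
A quick sketch of all three (the parameter file and group name are reused from the replicat example later in this post):

$ checkprm ./dirprm/rtest.prm

GGSCI> INFO PARAM LOGALLSUPCOLS
GGSCI> SEND REPLICAT rtest GETPARAMINFO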

 

Transparent Integration with Oracle Clusterware

In earlier releases, when we used the Grid Infrastructure Agent (XAG) to provide high availability capability for Oracle GoldenGate, we had to use the AGCTL to manage the GoldenGate instance like stop and start. If we used the GGSCI commands to start or stop the manager it could cause issues and the recommendation was to only use AGCTL and not GGSCI in that case.

Now in 12.2, once the GoldenGate instance has been registered with Oracle Clusterware using AGCTL, we can then continue to use GGSCI to start and stop GoldenGate without concern of any issues arising because AGCTL was not used. A new parameter for the GLOBALS file is now available called XAG_ENABLE.

 

Integration of GoldenGate with Datapump

In earlier releases when we added new tables to an existing GoldenGate configuration, we had to obtain the CURRENT_SCN from v$DATABASE view, pass that SCN value to the FLASHBACK_SCN parameter of expdp and then when we started the Replicat we had to use the AFTERCSN parameter with the same value.

Now in 12.2, the ADD TRANDATA or ADD SCHEMATRANDATA command will prepare the tables automatically. Oracle Datapump export (expdp) will automatically generate import actions to set the instantiation CSN when each table is imported. We just have to include the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING, which will then filter out any DML or DDL records based on the instantiation CSN of that table.
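
A sketch of the 12.2 flow, assuming a schema called MIG_TEST as in the migration example earlier in this post (the oggadmin alias and expdp options are illustrative; note there is no FLASHBACK_SCN and no AFTERCSN required):

GGSCI> DBLOGIN USERIDALIAS oggadmin
GGSCI> ADD SCHEMATRANDATA mig_test

$ expdp system schemas=MIG_TEST directory=DATA_PUMP_DIR dumpfile=mig.dmp

Then in the Replicat parameter file:

DBOPTIONS ENABLE_INSTANTIATION_FILTERING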

 

Improved Trail File Recovery

In earlier releases if a trail file was missing or corrupt, the Replicat used to abend.

Now in 12.2, if a trail file is corrupted, we can delete it and have it rebuilt by restarting the Extract Pump; a missing trail file can likewise be rebuilt automatically by bouncing the Extract Pump process. The Replicat will by default automatically filter out transactions in the regenerated trail files that it has already applied.

 

Support for INVISIBLE Columns

The new MAPINVISIBLECOLUMNS parameter in 12.2 enables replication support for tables (Oracle database only) which contain INVISIBLE columns.
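
A minimal sketch of a Replicat parameter file using it (the table name is reused from the invisible-columns example later in this post):

MAPINVISIBLECOLUMNS
MAP SYSTEM.TEST_OGG, TARGET SYSTEM.TEST_OGG;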

 

Extended Metrics and Fine-grained Performance Monitoring

Release 12.2 now provides real-time process- and thread-level metrics for Extract, Pump and Replicat which can be accessed through RESTful web services. Real-time database statistics for Extract and Replicat, queues, as well as network statistics for the Extract Pump can be accessed using a URL like:

http://<hostname>:<manager port>/mpointsx

The ENABLEMONITORING parameter needs to be included in the GLOBALS file.

The Java application is also available for free download (and can also be modified and customised) via the URL:

https://java.net/projects/oracledi/downloads/download/GoldenGate/OGGPTRK.jar

 

GoldenGate Studio

New in Release 12.2 is GoldenGate Studio – a GUI tool which will enable us to quickly design and deploy GoldenGate solutions. It separates the logical from the physical design and enables us to create a one-click and drag and drop logical design based on business needs without knowing all the details.

It has a concept of Projects and Solutions, where one Project can contain a number of Solutions and a Solution contains one logical design and possibly many physical deployments. Rapid design is enabled with a number of out-of-the-box Solution templates like Cascading, Bi-Directional, Unidirectional, Consolidation etc.

GoldenGate Studio enables us to design once and deploy to many environments like Dev, Test, QA and Production with one-click deployment.

 

GoldenGate Cloud Service

GoldenGate Cloud Service is the public cloud-based offering on a Subscription or Hourly basis.

The GoldenGate Cloud Service provides the delivery mechanisms to move Oracle as well as non-Oracle databases from On Premise to DBaaS – Oracle Database Cloud Service as well as Exadata Cloud Service delivery via GoldenGate. GoldenGate Cloud Service also provides Big Data Cloud Service delivery to Hadoop and NoSQL.

 

Nine Digit Trail File Sequence Length

In 12.2, the default is to create trail files with 9-digit sequence numbers instead of the earlier 6-digit sequence. This allows 1000 times more files per trail – basically 1 billion files per trail!

We can upgrade existing trail files from 6- to 9-digit sequence numbers using a utility called convchk, and there is also backward compatibility support for existing 6-digit sequences using a GLOBALS parameter called TRAIL_SEQLEN_6D.
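
A sketch of the upgrade command as described for the convchk utility (the trail name is illustrative):

$ convchk ./dirdat/rt seqlen_9d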

Goldengate 12.2 New Feature Self-describing Trail Files


One of the top new features introduced in Oracle GoldenGate 12.2 is the Self-describing trail files feature.

What this means is that we no longer have to worry about differences in table structures between the source and target databases, and we no longer have to use the defgen utility or the parameters ASSUMETARGETDEFS or SOURCEDEFS as we did in earlier releases.

So many of the manual steps have been eliminated.

Now GoldenGate 12.2 supports replication even if source and target have different structures or different databases for that matter.

Metadata information is now contained in the trail files!

We will have a look at this in more detail in our example below, but the trail files now contain two important pieces of information – the Database Definition Record (DDR) and the Table Definition Record (TDR).

Each trail file contains a Database Definition Record (DDR) before the first occurrence of a DML record or a SEQUENCE from a particular database. The DDR contains database-specific information like the character set, database name, type of database and so on.

Also, each trail file contains a Table Definition Record (TDR) before the first occurrence of a DML record for a particular table, and this TDR section holds the table and column definitions and metadata including column number, data types, column lengths and so on.

Example

Let us now create a test table on both the source as well as target database with different column names.

 

Source


SQL> create table system.test_ogg
  2  (emp_id number, first_name varchar2(20), last_name varchar2(20));

Table created.

SQL> alter table system.test_ogg
  2  add constraint pk_test_ogg primary key (emp_id);

Table altered.

 

Target

 

SQL> create table system.test_ogg
2 (emp_id number,f_name varchar(20),l_name varchar2(20));

Table created.

SQL> alter table system.test_ogg
2 add constraint pk_test_ogg primary key (emp_id);

Table altered.

 

Create the Extract and Pump processes on the source
 
Source

 

host1>./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
Operating system character set identified as UTF-8.



GGSCI (host1 as oggsuser@DB01) 5> add extract etest integrated tranlog begin now
EXTRACT (Integrated) added.


GGSCI (host1 as oggsuser@DB01) 6> add exttrail ./dirdat/auxdit/lt extract etest
EXTTRAIL added.


GGSCI (host1 as oggsuser@DB01) 9> add extract ptest exttrailsource ./dirdat/auxdit/lt
EXTRACT added.

GGSCI (host1 as oggsuser@DB01) 11> add rmttrail ./dirdat/bsstg/rt extract ptest
RMTTRAIL added.


GGSCI (host1 as oggsuser@DB01) 10> register extract etest database

2015-12-21 05:09:33  INFO    OGG-02003  Extract ETEST successfully registered with database at SCN 391450385.

 

Extract and Pump Parameter files


extract etest

USERIDALIAS oggsuser_bsstg

LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT


TRANLOGOPTIONS EXCLUDEUSER OGGSUSER
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 2048, parallelism 2)

EXTTRAIL ./dirdat/auxdit/lt

WARNLONGTRANS 2h, CHECKINTERVAL 30m
REPORTCOUNT EVERY 15 MINUTES, RATE
STATOPTIONS  RESETREPORTSTATS
REPORT AT 23:59
REPORTROLLOVER AT 00:01 ON MONDAY
GETUPDATEBEFORES

TABLE SYSTEM.TEST_OGG;



EXTRACT ptest

USERIDALIAS oggsuser_bsstg

RMTHOST host2, MGRPORT 7809, TCPBUFSIZE 200000000, TCPFLUSHBYTES 200000000, COMPRESS

RMTTRAIL ./dirdat/bsstg/rt

PASSTHRU

REPORTCOUNT EVERY 15 MINUTES, RATE

TABLE SYSTEM.TEST_OGG;

On the target create and start the replicat process

 
Target

 

GGSCI (host2) 2> add replicat rtest integrated exttrail ./dirdat/bsstg/rt
REPLICAT (Integrated) added.

 

Replicat parameter file – note there is NO ASSUMETARGETDEFS parameter


REPLICAT rtest

SETENV (ORACLE_HOME="/orasw/app/oracle/product/12.1.0/db_1")
SETENV (TNS_ADMIN="/orasw/app/oracle/product/12.1.0/db_1/network/admin")
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")

USERIDALIAS oggsuser_auxdit


MAP SYSTEM.TEST_OGG, TARGET SYSTEM.TEST_OGG;

Start the Extract, Pump and Replicat processes
 

Source

 

GGSCI (host1 as oggsuser@DB01) 15> start manager
Manager started.


GGSCI (host1 as oggsuser@DB01) 16> start etest
EXTRACT ETEST starting


GGSCI (host1 as oggsuser@DB01) 17> start ptest

Sending START request to MANAGER ...
EXTRACT PTEST starting

 

Target

 

GGSCI (host2) 3> start rtest

Sending START request to MANAGER ...
REPLICAT RTEST starting


GGSCI (host2) 4> info rtest

REPLICAT   RTEST     Last Started 2015-12-21 05:21   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:08:53 ago)
Process ID           29864
Log Read Checkpoint  File ./dirdat/bsstg/rt000000000
                     First Record  RBA 0


 

On the source database insert a row into the TEST_OGG table

 

Source

 

SQL> insert into system.test_ogg
  2   values
  3   (007, 'JAMES','BOND');

1 row created.

SQL> commit;

Commit complete.

 

On the target we can see that the change has been replicated

 

Target

 

GGSCI (host2) 5> stats rtest latest

Sending STATS request to REPLICAT RTEST ...

Start of Statistics at 2015-12-21 05:26:32.


Integrated Replicat Statistics:

        Total transactions                                 1.00
        Redirected                                         0.00
        DDL operations                                     0.00
        Stored procedures                                  0.00
        Datatype functionality                             0.00
        Event actions                                      0.00
        Direct transactions ratio                          0.00%

Replicating from SYSTEM.TEST_OGG to SYSTEM.TEST_OGG:

*** Latest statistics since 2015-12-21 05:25:33 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.



 

From the replicat report file we can see that definition for the TEST_OGG table was obtained via the GoldenGate trail file.

 

2015-12-21 05:25:22  INFO    OGG-06505  MAP resolved (entry SYSTEM.TEST_OGG): MAP "SYSTEM"."TEST_OGG", TARGET SYSTEM.TEST_OGG.

2015-12-21 05:25:33  INFO    OGG-02756  The definition for table SYSTEM.TEST_OGG is obtained from the trail file.

By using the logdump utility we can view the Database Definition Record (DDR) as well as the Table Definition Record (TDR) information contained in the trail file.
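
A sketch of a logdump session to view these records (the trail file name is from this example; GHDR ON and DETAIL DATA are standard logdump toggles):

$ ./logdump

Logdump 1> open ./dirdat/bsstg/rt000000000
Logdump 2> ghdr on
Logdump 3> detail data
Logdump 4> next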

DDR Version: 1
Database type: ORACLE
Character set ID: we8iso8859p1
National character set ID: UTF-16
Locale: neutral
Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
TimeZone: GMT-07:00
Global name: BSSTG

2015/12/21 05:25:18.534.893 Metadata             Len 277 RBA 1541
Name: SYSTEM.TEST_OGG
*
 1)Name          2)Data Type        3)External Length  4)Fetch Offset      5)Scale         6)Level
 7)Null          8)Bump if Odd      9)Internal Length 10)Binary Length    11)Table Length 12)Most Sig DT
13)Least Sig DT 14)High Precision  15)Low Precision   16)Elementary Item  17)Occurs       18)Key Column
19)Sub DataType 20)Native DataType 21)Character Set   22)Character Length 23)LOB Type     24)Partial Type
*
TDR version: 1
Definition for table SYSTEM.TEST_OGG
Record Length: 108
Columns: 3

EMP_ID       64     50        0  0  0 1 0     50     50     50 0 0 0 0 1    0 1   2    2       -1      0 0 0
FIRST_NAME   64     20       56  0  0 1 0     20     20      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
LAST_NAME    64     20       82  0  0 1 0     20     20      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
End of definition


GoldenGate 12.2 supports INVISIBLE columns


Oracle Goldengate 12.2 now provides support for replication of tables with INVISIBLE columns which was not possible in earlier releases.

Let us look at an example.

We create a table on both the source as well as target databases with both an INVISIBLE and VIRTUAL column COMMISSION.

SQL>  create table system.test_ogg
  2   (empid number, salary number, commission number INVISIBLE generated always as (salary * .05) VIRTUAL );

Table created.

SQL>  alter table system.test_ogg
  2   add constraint pk_test_ogg primary key (empid);

Table altered.


Note that the column is not visible in a DESCRIBE until we use the SET COLINVISIBLE ON command in SQL*Plus.

SQL> desc  system.test_ogg
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPID                                              NUMBER
 SALARY                                             NUMBER


SQL> SET COLINVISIBLE ON

SQL> desc  system.test_ogg
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPID                                              NUMBER
 SALARY                                             NUMBER
 COMMISSION (INVISIBLE)                             NUMBER

We now insert a row into the TEST_OGG table.

The value for the INVISIBLE and VIRTUAL column is derived based on the value of the SALARY column.

Note that the SELECT * command will not display the invisible column COMMISSION.

SQL> insert into system.test_ogg
  2  values
  3   (1001, 10000);

1 row created.

SQL> commit;

Commit complete.



SQL> select empid,salary,commission from system.test_ogg;

     EMPID     SALARY COMMISSION
---------- ---------- ----------
      1001      10000        500


SQL> select * from system.test_ogg;

     EMPID     SALARY
---------- ----------
      1001      10000

On the target GoldenGate environment we can see from the Replicat report file that the table structure information was derived from the trail file – in 12.2 the table metadata is contained in the self-describing trail files, so the SOURCEDEFS and ASSUMETARGETDEFS parameters are no longer required even when the source and target tables differ in structure.

2015-12-25 07:53:07  INFO    OGG-02756  The definition for table SYSTEM.TEST_OGG is obtained from the trail file.
Skipping invisible column COMMISSION in default map.
2015-12-25 07:53:07  INFO    OGG-06511  Using following columns in default map by name: EMPID, SALARY.

2015-12-25 07:53:07  INFO    OGG-06510  Using the following key columns for target table SYSTEM.TEST_OGG: EMPID.

On the target database we can see that the row has been replicated and the invisible column COMMISSION has been populated as well.

SQL> select empid,salary,commission from system.test_ogg;

     EMPID     SALARY COMMISSION
---------- ---------- ----------
      1001      10000        500

Tuning Integrated Replicat performance using EAGER_SIZE parameter


Is Oracle GoldenGate really designed for batch processing or “large” transactions? I am not sure what the official Oracle take on this is, but I would hazard a guess and say maybe not. Maybe that is something better suited to an ETL type of product like Oracle Data Integrator.

GoldenGate considers a transaction to be large if it changes more than 15,100 rows in a table (this changed in version 12.2; the threshold used to be 9,500 in earlier versions).

An important parameter controls how GoldenGate applies these “large” transactions. It is called EAGER_SIZE.

In essence, for Oracle GoldenGate it means: when I see a large number of LCRs in a transaction, do I start applying them straight away (that, I guess, is where the “eager” part of the parameter name comes from), or do I wait for the entire transaction to be committed and only then start applying changes?

This “waiting” seems to serialize the apply process and adds to the apply lag on the target in a big way.

We can see from test case (2) below that the apply lag more than doubled.

To illustrate this let us run a series of tests involving replication with source and target Oracle GoldenGate 12.2 environments located over 3000 KM from each other.

The test involves running a procedure which executes a series of INSERT and DELETE statements on a set of 10 tables. The load procedure generates 200 transactions (10 loop iterations x 20 committed statements) which are executed in a 30-second period on the source database. These 200 transactions change a total of over 2 million rows across the 10 tables.

Test 1) Maximum size of transaction is 10,000 rows

Test 2) Maximum size of transaction is 20,000 rows (EAGER_SIZE left at its default value)

Test 3) Maximum size of transaction is 20,000 rows (EAGER_SIZE increased to 25000)

 
Apply Lag on the target database:
 

Test 1) ~ 20 seconds
Test 2) ~ 50 seconds
Test 3) ~ 20 seconds

 

Test 1

Note the maximum number of rows in a single transaction in this case is 10,000.

This is the code we are using in the procedure to generate the load test.

create or replace procedure sysadm.load_gen
IS
BEGIN
  -- each DELETE and each INSERT is committed individually, so every
  -- iteration generates 20 transactions of up to 10,000 rows each
  FOR i IN 1 .. 10
  LOOP
    delete sysadm.myobjects1;
    commit;
    delete sysadm.myobjects2;
    commit;
    …
    delete sysadm.myobjects10;
    commit;

    insert into sysadm.myobjects1
    select * from all_objects where rownum < 10001;
    commit;
    insert into sysadm.myobjects2
    select * from all_objects where rownum < 10001;
    commit;
    …
    insert into sysadm.myobjects10
    select * from all_objects where rownum < 10001;
    commit;
  END LOOP;
END;
/

When we kick off the load procedure in each of the 3 test cases on the source, we will see that for about 30 seconds all Apply Servers are idle.

So what is happening in this time?

• On the source database the log mining server mines the redo log files and extracts changes in the form of Logical Change Records (LCRs), which are passed on to the Extract process, which in turn writes them to the GoldenGate trail files.

• The trail files are sent by the Extract Data Pump over the network to the target.

• Once the trail files are received on the target server, the Replicat process reads them and constructs Logical Change Records.

• These LCRs are sent to the target database, where the inbound server starts the various apply processes: the Receiver, which receives the LCRs; the Preparer and Coordinator, which sort transactions and organize them in terms of primary and foreign key dependencies; and finally the Apply Server processes, which apply the changes to the database.

Initially we see that the Apply Server has started 8 individual processes because we set the PARALLELISM parameter to 8.

SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         1 IDLE                                      0
         2 IDLE                                      0
         3 IDLE                                      0
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0
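
For reference, the integrated replicat used in these tests was configured along these lines (a sketch only; the alias and MAP entries are illustrative):

REPLICAT rbsprd1
USERIDALIAS gg_target
DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 8)
MAP sysadm.*, TARGET sysadm.*;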

Once the Apply Server detects additional load coming in, it spawns additional processes on the fly. This is a big advantage of Integrated Replicat over Classic or Coordinated Replicat: it is load aware, and we neither have to manually allocate the number of Apply Servers nor map an Apply Server to a table or set of target tables.

Note that after a few seconds the Apply Servers start applying the received changes, and a 9th apply process has now been added to the earlier 8.

SQL> /

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                  50005
         2 IDLE                                  20002
         3 IDLE                                  30003
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0

9 rows selected.

SQL> /

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                  50005
         2 IDLE                                  20002
         3 IDLE                                  30003
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0

9 rows selected.


From the view V$GG_APPLY_SERVER we can see the state ‘EXECUTE TRANSACTION’ which shows Apply Servers are applying transactions in parallel.

 
SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                 140014
         2 EXECUTE TRANSACTION                  302634
         3 IDLE                                 270027
         4 EXECUTE TRANSACTION                  182775
         5 IDLE                                  60006
         6 EXECUTE TRANSACTION                  130013
         7 IDLE                                      0
         8 IDLE                                      0



SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         0 IDLE                                      0
         1 EXECUTE TRANSACTION                  187834
         2 EXECUTE TRANSACTION                  487708
         3 IDLE                                 330033
         4 EXECUTE TRANSACTION                  537853
         5 EXECUTE TRANSACTION                  177838
         6 EXECUTE TRANSACTION                  267948
         7 IDLE                                      0
         8 IDLE                                      0


Finally we see all the servers are idle. TOTAL_MESSAGES_APPLIED totals about 2 million, which roughly equals the number of rows changed.

Also note that an additional (10th) apply server was started while changes were being applied to the target.

SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
        10 IDLE                                      0
         2 IDLE                                 360036
         3 IDLE                                 180018
         4 IDLE                                 280028
         9 INACTIVE                                  0
         5 IDLE                                 410041
         6 IDLE                                 200022
         1 IDLE                                 340034
         7 IDLE                                 220022
         8 IDLE                                  10001

 


Test 2

Now we run the same load test.

While the number of transactions and number of rows being changed remains the same, we have increased the number of rows in a single transaction to 20,000 (from earlier 10,000).

So we change the procedure code as shown below and reduce the number of iterations in the loop from 10 to 5 to keep the volume of rows changed the same as before.

insert into sysadm.myobjects1
select * from all_objects where rownum < 10001;
commit;

TO

insert into sysadm.myobjects1
select * from all_objects where rownum < 20001;
commit;

Now we can see that at any given time only one Apply Server is in the EXECUTE TRANSACTION state; the rest are idle or in the WAIT DEPENDENCY state (occasionally we will also see the WAIT FOR NEXT CHUNK state).

If we query the database performance views, the Top Activity page in OEM, or ASH Analytics, we will see the wait event REPL: Apply Dependency showing up.


 

We can see that the Apply Server process of the Integrated Replicat RBSPRD1 is what is responsible mainly for that particular Wait Event.
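
Where OEM is not available, a quick query against the Active Session History gives the same picture (a sketch; the event name is as observed in this test):

-- sessions waiting on the apply dependency event over the last 10 minutes
select session_id, event, count(*) samples
from   v$active_session_history
where  sample_time > sysdate - 10/1440
and    event = 'REPL: Apply Dependency'
group  by session_id, event
order  by samples desc;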



 

SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 WAIT DEPENDENCY                      450026
         2 EXECUTE TRANSACTION                  229333
         3 WAIT DEPENDENCY                      460025
         4 IDLE                                 340017
         5 WAIT DEPENDENCY                      220012
         6 IDLE                                      0
         7 IDLE                                      0

SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 EXECUTE TRANSACTION                  455418
         2 WAIT DEPENDENCY                      230014
         3 WAIT DEPENDENCY                      460025
         4 IDLE                                 340017
         5 IDLE                                 240012
         6 IDLE                                      0
         7 IDLE                                      0


SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 WAIT DEPENDENCY                      470027
         2 WAIT DEPENDENCY                      230014
         3 EXECUTE TRANSACTION                  476575
         4 IDLE                                 340017
         5 WAIT DEPENDENCY                      240013
         6 IDLE                                      0
         7 IDLE                                      0

 
Test 3

We now run the same load procedure, but we add a new parameter, EAGER_SIZE, to the replicat parameter file.

Since the size of the biggest transaction is now 20,000 rows, we need to set EAGER_SIZE to a value higher than that.

For example:

DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 8, EAGER_SIZE 25000)

Note that increasing EAGER_SIZE places additional memory demands on the Streams pool, so STREAMS_POOL_SIZE may need to be increased accordingly.
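
A quick way to check how much Streams pool is currently allocated before raising EAGER_SIZE (a sketch):

select pool, round(sum(bytes)/1024/1024) mb
from   v$sgastat
where  pool = 'streams pool'
group  by pool;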

Now we see that again we have Apply Servers executing transactions in parallel and there are no servers in the state of WAIT DEPENDENCY.

SQL> select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         3 EXECUTE TRANSACTION                  207829
         9 IDLE                                      0
         8 IDLE                                      0
         4 EXECUTE TRANSACTION                       0
         5 IDLE                                      0
         6 IDLE                                      0
         1 EXECUTE TRANSACTION                  227498
         7 IDLE                                      0
         2 EXECUTE TRANSACTION                  160008


SQL> select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         3 EXECUTE TRANSACTION                  227717
         9 IDLE                                      0
         8 IDLE                                      0
         4 EXECUTE TRANSACTION                   67601
         5 IDLE                                      0
         6 IDLE                                      0
         1 EXECUTE TRANSACTION                  268900
         7 IDLE                                      0
         2 EXECUTE TRANSACTION                  308590


Goldengate 12.2 New Feature – Check and validate parameter files using checkprm


In GoldenGate 12.2 we can now validate parameter files before deployment.

There is a new utility called checkprm which can be used for this purpose.

To run the checkprm utility we provide the name of the parameter file and can optionally indicate which process the parameter file belongs to using the COMPONENT keyword.

Let us look at an example.

 

ors-db-01@oracle:omprd1>./checkprm ./dirprm/eomprd1.prm --COMPONENT EXTRACT

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_HOME=/orasw/app/oracle/product/12.1.0/db_1.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=omprd2.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable TNS_ADMIN=/orasw/app/oracle/product/12.1.0/db_1/network/admin.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable NLS_LANG=AMERICAN_AMERICA.AL32UTF8.

(eomprd1.prm) line 13: Parsing error, [DYNAMICRESOLUTION] is deprecated.

(eomprd1.prm) line 22: Parameter [REPORTDETAIL] is not valid for this configuration.

2016-01-21 21:53:13  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: FAIL.


We can see that this parameter file has failed the validation check because we had used the following line in the parameter file, and REPORTDETAIL is no longer supported in 12.2.

STATOPTIONS REPORTDETAIL, RESETREPORTSTATS

We changed the parameter file to include

STATOPTIONS RESETREPORTSTATS

and now run the checkprm utility again. We can see that the validation of the parameter file now completes successfully.


ors-db-01@oracle:BSSTG1>./checkprm ./dirprm/eomprd1.prm

2015-11-18 19:29:45  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: PASS.

Runtime parameter validation is not reflected in the above check.


Oracle GoldenGate 12.2 New Feature – Integration with Oracle Datapump


In earlier versions, when we had to do an Oracle database table instantiation or initial load, we had to perform a number of steps, essentially to handle the DML changes occurring on the source table while the export was in progress.

So we first had to ensure that there were no open or long-running transactions in progress, obtain the current SCN of the database, and pass that SCN to the FLASHBACK_SCN parameter of the Data Pump export. Then, after the import was over, we had to run the Replicat with the HANDLECOLLISIONS parameter initially and also start it from the correct position in the trail using the AFTERCSN option.
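
Roughly, the old procedure looked like this (a sketch; the group name rep1 is illustrative):

SQL> select current_scn from v$database;    -- note this SCN

$ expdp ... flashback_scn=<SCN> ...         -- export consistent as of that SCN
$ impdp ...                                 -- import on the target

GGSCI> start replicat rep1, aftercsn <SCN>  -- with HANDLECOLLISIONS set
                                            -- in the parameter file initially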

Now with GoldenGate 12.2, there is tighter integration with Oracle Data Pump export and import.

The ADD SCHEMATRANDATA command with the PREPARECSN parameter ensures that the Data Pump export captures the instantiation CSNs for each table that is part of the export. On import these CSNs populate the relevant system tables and views, and the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING then filters out DML and DDL records below each table's instantiation CSN.
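
On the target, the replicat parameter file then only needs something along these lines (a sketch; the alias and MAP entries are illustrative):

REPLICAT rbsstg1
USERIDALIAS gg_target
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP sysadm.*, TARGET sysadm.*;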

Let us look at an example of this new 12.2 feature.

We have a table called TESTME in the SYSADM schema which initially has 266448 rows.

Before running the Datapump export, let us ‘prepare’ the tables via the PREPARECSN parameter of the ADD SCHEMATRANDATA command.

GGSCI (pcu008 as oggsuser@BSDIT1) 12> add schematrandata sysadm preparecsn
2015-12-10 06:38:58 INFO OGG-01788 SCHEMATRANDATA has been added on schema sysadm.
2015-12-10 06:38:58 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema sysadm.
2015-12-10 06:38:59 INFO OGG-10154 Schema level PREPARECSN set to mode NOWAIT on schema sysadm.

GGSCI (pcu008 as oggsuser@omqat41) 3> info schematrandata SYSADM
2015-12-13 07:21:55 INFO OGG-06480 Schema level supplemental logging, excluding non-validated keys, is enabled on schema SYSADM.
2015-12-13 07:21:55 INFO OGG-01980 Schema level supplemental logging is enabled on schema SYSADM for all scheduling columns.
2015-12-13 07:21:55 INFO OGG-10462 Schema SYSADM have 571 prepared tables for instantiation.
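
The prepared tables can also be seen from the database side via a Streams-heritage dictionary view (a sketch):

select table_owner, table_name, scn
from   dba_capture_prepared_tables
where  table_owner = 'SYSADM'
order  by table_name;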

We run the Data Pump export. Note the line:

“FLASHBACK automatically enabled to preserve database integrity.”

pcu008@oracle:BSSTG1>expdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Export: Release 12.1.0.2.0 - Production on Mon Jan 25 23:45:27 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_TABLE_01": sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 28 MB
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SYSADM"."TESTME"  26.86 MB  266448 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/backup/testme.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 25 23:46:45 2016 elapsed 0 00:00:49

While the export of the TESTME table is in progress, we will insert 29622 more rows into the table. The table will now have 296070 rows.

SQL> insert into sysadm.testme select * from dba_objects;
29622 rows created.

SQL> select count(*) from sysadm.testme;
  COUNT(*)
----------
    296070

SQL> commit;
Commit complete.

We perform the import on the target database next. Note the number of rows imported: 266448. The dump does not contain the 29622 rows which were inserted into the table while the export was in progress.

qat408@oracle:BSSTG1>impdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Import: Release 12.1.0.2.0 - Production on Mon Jan 25 23:51:42 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting "SYS"."SYS_IMPORT_FULL_01": sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SYSADM"."TESTME"  26.86 MB  266448 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Mon Jan 25 23:52:22 2016 elapsed 0 00:00:25

We start the Replicat process on the target. Note that we are not positioning the replicat like we used to earlier using the AFTERCSN option.


GGSCI (qat408 as oggsuser@BSSTG2) 7> start rbsstg1
Sending START request to MANAGER …
REPLICAT RBSSTG1 starting

After starting the replicat, if we look at its report file we can see that the Replicat process is aware of the CSN that was current while the export was in progress, and it knows that any DML or DDL changes after that CSN need to be applied on the target table.

2016-01-25 23:56:59 INFO OGG-10155 Instantiation CSN filtering is enabled on table SYSADM.TESTME at CSN 402,702,624.
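
The instantiation SCN recorded on the target database can also be checked from the dictionary (a sketch):

select source_object_owner, source_object_name, instantiation_scn
from   dba_apply_instantiated_objects
where  source_object_name = 'TESTME';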

If we query the replicat statistics a while after the replicat has started, we can see that it has applied the insert statement (29622 rows) which was run while the export of the table was in progress.

GGSCI (qat408 as oggsuser@BSSTG1) 12> stats rbsstg1 latest

Sending STATS request to REPLICAT RBSSTG1 …

Start of Statistics at 2016-01-26 00:14:55.

Integrated Replicat Statistics:

        Total transactions                            1.00
        Redirected                                    0.00
        DDL operations                                0.00
        Stored procedures                             0.00
        Datatype functionality                        0.00
        Event actions                                 0.00
        Direct transactions ratio                     0.00%

Replicating from SYSADM.TESTME to SYSADM.TESTME:

*** Latest statistics since 2016-01-26 00:05:19 ***
        Total inserts                             29622.00
        Total updates                                 0.00
        Total deletes                                 0.00
        Total discards                                0.00
        Total operations                          29622.00

End of Statistics.



How to configure high availability for Oracle GoldenGate on Exadata


This note describes the procedure used to configure high availability for Oracle GoldenGate 12.2 on Oracle Database Machine (Exadata X5-2) using Oracle Database File System (DBFS), Oracle Clusterware and Oracle Grid Infrastructure Agent.

 

The note also describes how we can create different DBFS file systems on the same Exadata compute node if we would like to host a number of different environments like development, test or staging on the same Exadata box, each with its own GoldenGate software installation.

Read the note …..
 

GoldenGate INSERTALLRECORDS and OGG-01154 SQL error 1400


The GoldenGate INSERTALLRECORDS parameter can be used where the requirement is to maintain on the target database transaction history or change data capture (CDC) tables which keep track of the changes a table undergoes at the row level.

So every INSERT, UPDATE or DELETE statement on the source tables is captured as an INSERT statement on the target database.

But in certain cases update statements issued on the source database can cause the replicat process to abend with an error:

“ORA-01400: cannot insert NULL”.

This can happen when the table has NOT NULL columns that were not touched by the update: when the update is converted to an insert, the trail file has no values for those columns, so the insert attempts to use NULLs and consequently fails with the ORA-01400 error.
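
For context, the replicat mapping used in this test has the following form (this matches the MAP entry visible in the report file later in this post):

INSERTALLRECORDS
MAP SYSTEM.MYTABLES, TARGET SYSTEM.MYTABLES_CDC,
COLMAP (USEDEFAULTS,
        CHANGE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP'),
        OPER_TYPE = @GETENV ('GGHEADER', 'OPTYPE'));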


Test Case

We create two tables – SYSTEM.MYTABLES in the source database and SYSTEM.MYTABLES_CDC in the target database.

The SYSTEM.MYTABLES_CDC table on the target has two additional columns for maintaining the CDC or transaction history: OPER_TYPE, which captures the type of DML operation on the table, and CHANGE_DATE, which captures the timestamp of when the change took place.

We create a primary key constraint on the source table. Note that the target table has no such constraint, since rows will be inserted into the CDC table all the time regardless of whether the DML statement on the source was an INSERT, UPDATE or DELETE.

SQL> create table system.mytables
  2  (owner VARCHAR2(30) NOT NULL,
  3   table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL);

Table created.

SQL> alter table system.mytables add constraint pk_mytables primary key (owner,table_name);

Table altered.


SQL SYS@euro> create table system.mytables_cdc
  2  (owner VARCHAR2(30) NOT NULL,
  3    table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL,
  6  oper_type VARCHAR2(20),
  7  change_date TIMESTAMP);

Table created.

We now issue the ADD TRANDATA GGSCI command.

Note that issuing the ADD TRANDATA command enables supplemental logging at the table level for PK columns, UK columns and FK columns – not ALL columns.



GGSCI (ogg2.localdomain as oggsuser@sourcedb) 64> dblogin useridalias oggsuser_sourcedb
Successfully logged into database.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 65> add trandata system.mytables

Logging of supplemental redo data enabled for table SYSTEM.MYTABLES.
TRANDATA for scheduling columns has been added on table 'SYSTEM.MYTABLES'.
GGSCI (ogg2.localdomain as oggsuser@sourcedb) 66> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: OWNER, TABLE_NAME.


We can query the DBA_LOG_GROUPS view to get information about the supplemental logging added for the table MYTABLES.

The ADD TRANDATA command has created a supplemental log group called GGS_72909, and we can see that supplemental logging is enabled for all columns that are part of a primary key, unique key or foreign key constraint.


SQL> SELECT
  2  LOG_GROUP_NAME,
  3   TABLE_NAME,
  4  DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
  5  LOG_GROUP_TYPE
  6  FROM DBA_LOG_GROUPS
  7   WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';

no rows selected

SQL> /

                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING



SQL> select LOG_GROUP_NAME,COLUMN_NAME from DBA_LOG_GROUP_COLUMNS
  2  where OWNER='SYSTEM' and TABLE_NAME='MYTABLES'
  3  order by 1,2;


Log Group            COLUMN_NAME
-------------------- ------------------------------
GGS_72909            OWNER
GGS_72909            TABLE_NAME


Let us now test the case.

We insert some rows into the source table MYTABLES – these rows are replicated fine to the target table MYTABLES_CDC.


SQL> insert into system.mytables
  2  select OWNER,TABLE_NAME,TABLESPACE_NAME,LOGGING
  3   from DBA_TABLES
  4   where OWNER='SYSTEM' and TABLESPACE_NAME is NOT NULL;

110 rows created.

SQL> commit;

Commit complete.



SQL SYS@euro> select count(*) from system.mytables_cdc;

  COUNT(*)
----------
       110


Let us now see what happens when we run an UPDATE statement on the source database. Note the columns involved in the UPDATE are not PK or UK columns.


SQL> update system.mytables set tablespace_name='USERS' where tablespace_name='SYSTEM';

89 rows updated.

SQL> commit;

Commit complete.


Immediately we will see that the Replicat process on the target has ABENDED, and if we examine the Replicat report log we can see the error messages shown below.

2016-06-25 14:40:26  INFO    OGG-06505  MAP resolved (entry SYSTEM.MYTABLES): MAP "SYSTEM"."MYTABLES", TARGET SYSTEM.MYTABLES_CDC, COLMAP (USEDEFAULTS, CHANGE_DATE=@GETENV ('GGHEADER', 'COM
MITTIMESTAMP'), OPER_TYPE=@GETENV ('GGHEADER', 'OPTYPE')).

2016-06-25 14:40:46  WARNING OGG-06439  No unique key is defined for table MYTABLES_CDC. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may
be used to define the key.
Using the following default columns with matching names:
  OWNER=OWNER, TABLE_NAME=TABLE_NAME, TABLESPACE_NAME=TABLESPACE_NAME, LOGGING=LOGGING

2016-06-25 14:40:46  INFO    OGG-06510  Using the following key columns for target table SYSTEM.MYTABLES_CDC: OWNER, TABLE_NAME, TABLESPACE_NAME, LOGGING, OPER_TYPE, CHANGE_DATE.


2016-06-25 14:45:18  WARNING OGG-02544  Unhandled error (ORA-26688: missing key in LCR) while processing the record at SEQNO 7, RBA 19037 in Integrated mode. REPLICAT will retry in Direct m
ode.

2016-06-25 14:45:18  WARNING OGG-01154  SQL error 1400 mapping SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC OCI Error ORA-01400: cannot insert NULL into ("SYSTEM"."MYTABLES_CDC"."LOGGING") (statu
s = 1400), SQL .


There is a column called LOGGING which is a NOT NULL column. The GoldenGate trail file has information about the other columns (OWNER, TABLE_NAME and TABLESPACE_NAME), but no data was captured in the trail file for the LOGGING column.

Using the LOGDUMP utility we can see this.

Logdump 103 >open ./dirdat/rt000007
Current LogTrail is /ogg/euro/dirdat/rt000007
Logdump 104 >ghdr on
Logdump 105 >detail on
Logdump 106 >detail data
Logdump 107 >pos 32008
Reading forward from RBA 32008
Logdump 108 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    52  (x0034)   IO Time    : 2016/06/25 14:45:02.999.764
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x02)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         67       AuditPos   : 8056764
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 14:45:02.999.764 FieldComp            Len    52 RBA 32008
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  e
 0000 000a 0000 0006 5359 5354 454d 0001 0015 0000 | ........SYSTEM......
 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 2400 | ..LOGMNR_PARAMETER$.
 0200 0900 0000 0555 5345 5253                     | .......USERS
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    21 (x0015)
 0000 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 | ....LOGMNR_PARAMETER
 24                                                | $
Column     2 (x0002), Len     9 (x0009)
 0000 0005 5553 4552 53                            | ....USERS


The table has NOT NULL columns that were not updated (the LOGGING column was not part of the UPDATE statement).

Since the column was not included in the update, when the update is converted to an insert the trail file has no value for it, so the insert uses NULL and consequently fails with ORA-01400. This is expected behaviour.

We can see that the update on the source database is converted into an insert statement on the target; this is because of the INSERTALLRECORDS parameter we are using in the Replicat parameter file.

So the solution is to enable supplemental logging for ALL columns of the source table.

We will now add supplemental log data for all columns:

SQL> alter table system.mytables add supplemental log data (ALL) columns;

Table altered.
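
The same can also be achieved from GGSCI using the ALLCOLS option of the ADD TRANDATA command (a sketch, assuming the same credential alias used earlier):

GGSCI> dblogin useridalias oggsuser_sourcedb
GGSCI> add trandata system.mytables allcols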

Note that both the DBA_LOG_GROUPS view and the INFO TRANDATA command now show that all columns have supplemental logging enabled.


SELECT
 LOG_GROUP_NAME,
  TABLE_NAME,
 DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
 LOG_GROUP_TYPE
  FROM DBA_LOG_GROUPS
  WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';
SQL>   2    3    4    5    6    7
                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING
SYS_C009817          MYTABLES        Unconditional  ALL COLUMN LOGGING


GGSCI (ogg2.localdomain as oggsuser@sourcedb) 12> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: ALL.


SQL> alter system switch logfile;

System altered.

Note: STOP and RESTART the Extract and the Data Pump.

Note the trail position the Extract Data Pump was writing to.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 28> info pext1 detail

EXTRACT    PEXT1     Last Started 2016-06-25 15:04   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           31081
Log Read Checkpoint  File ./dirdat/lt000012
                     2016-06-25 15:05:16.927851  RBA 1476

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/rt                                          9       1522        100 RMTTRAIL


Delete and recreate the Integrated Replicat

GGSCI (ogg1.localdomain as oggsuser@euro) 2> delete replicat rep2

2016-06-25 15:07:11  WARNING OGG-02541  Replicat could not process some SQL errors before being dropped or unregistered. This may cause the data to be out of sync.

2016-06-25 15:07:14  INFO    OGG-02529  Successfully unregistered REPLICAT REP2 inbound server OGG$REP2 from database.
Deleted REPLICAT REP2.


GGSCI (ogg1.localdomain as oggsuser@euro) 3> add replicat rep2 integrated exttrail ./dirdat/rt
REPLICAT (Integrated) added.

Restart the replicat from the point where it had abended

GGSCI (ogg1.localdomain as oggsuser@euro) 4> alter rep2 extseqno 9 extrba 1522

2016-06-25 15:07:55  INFO    OGG-06594  Replicat REP2 has been altered through GGSCI. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start REP2 with NOFILTERDUPTRANSACTION option.

REPLICAT (Integrated) altered.

Now we run an update statement similar to the one which earlier caused the replicat to abend.


SQL> update system.mytables set tablespace_name='SYSTEM'  where tablespace_name='USERS';

89 rows updated.

SQL> commit;

Commit complete.

We can see that this time the replicat has successfully applied the changes on the target table: the 89 rows which were updated on the source table have been transformed into 89 INSERT statements in the CDC table on the target database.

GGSCI (ogg1.localdomain as oggsuser@euro) 14> stats replicat rep2 table SYSTEM.MYTABLES_CDC latest

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2016-06-25 15:11:59.

.....
......


Replicating from SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC:

*** Latest statistics since 2016-06-25 15:11:09 ***
        Total inserts                                     89.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                  89.00

End of Statistics.

If we now examine the trail file on the target, we can see that this time all the table columns, including the LOGGING column (which was missing earlier), have been captured in the trail file.

Logdump 109 >open ./dirdat/rt000009
Current LogTrail is /ogg/euro/dirdat/rt000009
Logdump 110 >ghdr on
Logdump 111 >detail on
Logdump 112 >detail data
Logdump 113 >pos 1522
Reading forward from RBA 1522
Logdump 114 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    56  (x0038)   IO Time    : 2016/06/25 15:10:52.999.941
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x00)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         68       AuditPos   : 186384
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 15:10:52.999.941 FieldComp            Len    56 RBA 1522
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  b
 0000 000a 0000 0006 5359 5354 454d 0001 000d 0000 | ........SYSTEM......
 0009 4d59 4f42 4a45 4354 5300 0200 0a00 0000 0653 | ..MYOBJECTS........S
 5953 5445 4d00 0300 0700 0000 0359 4553           | YSTEM........YES
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    13 (x000d)
 0000 0009 4d59 4f42 4a45 4354 53                  | ....MYOBJECTS
Column     2 (x0002), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     3 (x0003), Len     7 (x0007)
 0000 0003 5945 53                                 | ....YES

Note the data in the CDC table on the target

SQL SYS@euro>  select tablespace_name,oper_type from system.mytables_cdc
  2   where TABLE_NAME ='MYTABLES';

TABLESPACE_NAME                OPER_TYPE
------------------------------ --------------------
SYSTEM                         INSERT
USERS                          SQL COMPUPDATE
SYSTEM                         SQL COMPUPDATE

Installing the Oracle GoldenGate monitoring plug-in (13.2.1.0.0) for Cloud Control 13c Release 2

Installing and Configuring Oracle GoldenGate Monitor 12c (12.1.3.0)


GoldenGate Monitor is a web-based monitoring console that provides a real-time graphical overview of all the Oracle GoldenGate instances in our enterprise.

We can view statistics and alerts as well as monitor the performance of all the related GoldenGate components in all environments in our enterprise from a single console.

GoldenGate Monitor can also send alert messages to e-mail and SNMP clients.

This note describes the steps involved in installing and configuring the Oracle GoldenGate 12c Monitor Server and Monitor Agent.

At a high level, these are the different steps:

  • Install JDK 1.7
  • Install Fusion Middleware Infrastructure 12.1.3.0, which will also install WebLogic Server 12.1.3
  • From the Fusion Middleware Infrastructure home run the Repository Creation Utility (RCU) to create an Oracle GoldenGate Monitor-specific repository in an Oracle database.
  • Install Oracle GoldenGate Monitor Server (and optionally Monitor Agent)
  • Create the WebLogic Domain for GoldenGate Monitor
  • Edit the monitor.properties file
  • Configure the boot.properties file for the WebLogic Admin and Managed Servers (see the sketch after this list)
  • Start the WebLogic Admin and Managed Servers
  • Create the GoldenGate Monitor Admin user via the WebLogic Console and grant the user the appropriate roles
  • Install the Oracle GoldenGate Monitor Agent on the target hosts with running GoldenGate environments which we want to monitor
  • Configure the Monitor Agent and edit Config.properties file
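
For reference, a boot.properties file is just a two-line credentials file placed under the server's security directory so that the servers can start without prompting (a sketch; the path and values are illustrative, and WebLogic encrypts the file on first start):

# $DOMAIN_HOME/servers/AdminServer/security/boot.properties
username=weblogic
password=<admin password>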

Download the note ….

GoldenGate Active-Active Replication with Conflict Detection and Resolution (CDR) – Part 3


In the earlier post we saw a case of GoldenGate conflict resolution using the Trusted Site or Trusted Source method, where one site is designated as the trusted or master site and in a CDR scenario will always prevail over the other sites participating in the active-active replication.

We saw how …


