
Oracle GoldenGate Tutorial 10 – performing a zero downtime cross platform migration and 11g database upgrade


This note briefly describes the steps required to perform a cross-platform database migration (AIX to Red Hat Linux) together with a database upgrade from 10g to 11g Release 2, achieved with zero downtime using a combination of RMAN, cross-platform transportable tablespaces (TTS) and GoldenGate.

This is the environment that we will be referring to in this note:

10.2.0.4 Database on AIX – DB10g
10.2.0.4 Duplicate database on AIX – Clonedb
11.2 database on Linux – DB11g

Steps

1) Create the GoldenGate Extract process on the source AIX database DB10g and start it. This Extract process captures changes as they occur on the 10g AIX database and writes them to remote trail files located on the Linux target system. Since the Replicat process is not yet running on the target, the source database changes will accumulate in these trail files.

GGSCI (devu026) 12> add extract myext, tranlog, begin now
EXTRACT added.

GGSCI (devu026) 13> add rmttrail /u01/oracle/ggs/dirdat/my, extract myext
RMTTRAIL added.

GGSCI (devu026) 14> edit params myext

"/u01/rapmd2/ggs/dirprm/myext.prm" 7 lines, 143 characters
EXTRACT myext
USERID ggs_owner, PASSWORD ggs_owner
SETENV (ORACLE_HOME = "/u01/oracle/product/10.2/rapmd2")
SETENV (ORACLE_SID = "db10g")
RMTHOST 10.1.210.35, MGRPORT 7809
RMTTRAIL /u01/oracle/ggs/dirdat/my
DISCARDFILE discard.txt, APPEND
TABLE sh.*;
TABLE hr.*;
TABLE pm.*;
TABLE oe.*;
TABLE ix.*;

Now start the Extract process:

GGSCI (devu026) 16> START EXTRACT MYEXT

Sending START request to MANAGER …
EXTRACT MYEXT starting

GGSCI (devu026) 17> INFO EXTRACT MYEXT

EXTRACT MYEXT Last Started 2010-03-04 08:42 Status RUNNING
Checkpoint Lag 00:31:07 (updated 00:00:01 ago)
Log Read Checkpoint Oracle Redo Logs
2010-03-04 08:11:26 Seqno 8, RBA 2763280

2) Using RMAN, create a duplicate database (Clonedb) in the source AIX environment – this database will be used as the source for the export of the database structure (a no-rows export) and the tablespace metadata.

Follow this white paper to get all the steps involved.
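For reference, this is a minimal sketch of the RMAN duplication, assuming a recent RMAN backup of DB10g is accessible to the auxiliary instance and that the Clonedb auxiliary instance has already been started NOMOUNT with a basic init.ora – connect strings, paths and sizes below are illustrative only:

rman target sys/*****@db10g auxiliary sys/*****@clonedb

RMAN> DUPLICATE TARGET DATABASE TO clonedb
2> DB_FILE_NAME_CONVERT ('/u01/oradata/db10g','/u02/oradata/clonedb')
3> LOGFILE GROUP 1 ('/u02/oradata/clonedb/redo01.log') SIZE 50M,
4> GROUP 2 ('/u02/oradata/clonedb/redo02.log') SIZE 50M;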

***********ON SOURCE – UPDATE 1**********

SQL> conn sh/sh
Connected.
SQL> update mycustomers set cust_city='Singapore';

55500 rows updated.

SQL> commit;

Commit complete.

3) Create a skeleton database on the Linux platform in the 11g Release 2 environment – DB11g

Note – we then set up the GoldenGate user GGS_OWNER in this database, grant it the required privileges and create the checkpoint table. Read one of the earlier tutorials in this series, which details the setup of the GGS_OWNER user in the database.
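For completeness, a minimal sketch of that setup follows – the grants shown here are a simplified set and the checkpoint table name is illustrative, so refer to the earlier tutorial for the exact privileges used:

SQL> create user ggs_owner identified by ggs_owner default tablespace users temporary tablespace temp;
SQL> grant connect, resource to ggs_owner;
SQL> grant select any dictionary, flashback any table to ggs_owner;
SQL> grant insert any table, update any table, delete any table to ggs_owner;

GGSCI (redhat346.localdomain) 1> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
GGSCI (redhat346.localdomain) 2> ADD CHECKPOINTTABLE ggs_owner.chkptab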

4) Take a full export of the database without any table data to capture just the structure of the database – this export is taken from the Clonedb duplicate database created in step 2.

db10g:/u01/oracle> expdp dumpfile=full_norows.dmp directory =dumpdir content=metadata_only exclude=tables,index full=y

Export: Release 10.2.0.4.0 – 64bit Production on Thursday, 04 March, 2010 9:02:44

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_FULL_01": sys/******** AS SYSDBA dumpfile=full_norows.dmp directory =dumpdir content=metadata_only exclude=tables,index full=y
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
…………………
…………………….

5) Import the dump file, which contains the database structure without the table data, into the 11g database DB11g – this will create all the users, roles, synonyms etc.

We had to create a role and also create the directory object before doing the full database import. Ignore the errors during the import as they pertain to objects which already exist in the skeleton database.

SQL> create role xdbwebservices;

Role created.

SQL> create directory dumpdir as '/u01/oracle';

Directory created.

[oracle@redhat346 ~]$ impdp dumpfile=full_norows.dmp directory=dumpdir full=y

Import: Release 11.2.0.1.0 – Production on Mon Mar 8 13:09:16 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

…………
……….

6) On the Clonedb database, we now make the required tablespaces read only (as shown below) and then export the tablespace metadata. Note that the original source 10g database remains in read-write mode and is still being accessed by the users – there has been no downtime so far.
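The read-only step on Clonedb is plain SQL run as a DBA user:

SQL> alter tablespace example read only;
SQL> alter tablespace tts read only;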

clonedb:/u01/rapmd2/ggs> expdp dumpfile=tts_meta.dmp directory =dumpdir transport_tablespaces=EXAMPLE,TTS

Export: Release 10.2.0.4.0 – 64bit Production on Monday, 08 March, 2010 13:01:38

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": sys/******** AS SYSDBA dumpfile=tts_meta.dmp directory =dumpdir transport_tablespaces=EXAMPLE,TTS
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oracle/tts_meta.dmp
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 13:02:17

7) Copy the datafiles of the read-only tablespaces (from Clonedb) to the target Linux system and, using RMAN, convert the datafiles from the AIX platform to the Linux platform.
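The copy itself is not shown in the output below; any binary-safe transfer will do – for example scp, where the source paths are illustrative and the target directory matches the one used in the CONVERT commands that follow:

scp /u02/oradata/clonedb/example01.dbf /u02/oradata/clonedb/tts01.dbf oracle@redhat346:/u01/oracle/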

RMAN> CONVERT DATAFILE '/u01/oracle/example01.dbf'
2> FROM PLATFORM='AIX-Based Systems (64-bit)'
3> FORMAT '/u02/oradata/db11g/example01.dbf';

Starting conversion at target at 08-MAR-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input file name=/u01/oracle/example01.dbf
converted datafile=/u02/oradata/db11g/example01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished conversion at target at 08-MAR-10

RMAN> CONVERT DATAFILE '/u01/oracle/tts01.dbf'
2> FROM PLATFORM='AIX-Based Systems (64-bit)'
3> FORMAT '/u02/oradata/db11g/tts01.dbf';

Starting conversion at target at 08-MAR-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input file name=/u01/oracle/tts01.dbf
converted datafile=/u02/oradata/db11g/tts01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished conversion at target at 08-MAR-10

8) Import the tablespace metadata into the 11g database to plug in the tablespaces, then make the tablespaces read write.

[oracle@redhat346 ~]$ impdp dumpfile=tts_meta.dmp directory=dumpdir transport_datafiles="/u02/oradata/db11g/example01.dbf","/u02/oradata/db11g/tts01.dbf"

Import: Release 11.2.0.1.0 – Production on Mon Mar 8 13:21:37 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Partitioning and Real Application Testing options
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01": sys/******** AS SYSDBA dumpfile=tts_meta.dmp directory=dumpdir transport_datafiles=/u02/oradata/db11g/example01.dbf,/u02/oradata/db11g/tts01.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
ORA-31684: Object type TYPE:"PM"."ADHEADER_TYP" already exists
ORA-31684: Object type TYPE:"PM"."TEXTDOC_TYP" already exists
ORA-31684: Object type TYPE:"IX"."ORDER_EVENT_TYP" already exists
ORA-31684: Object type TYPE:"OE"."PHONE_LIST_TYP" already exists
ORA-31684: Object type TYPE:"OE"."CUST_ADDRESS_TYP" already exists
ORA-31684: Object type TYPE:"PM"."TEXTDOC_TAB" already exists
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
…………………..
……………………………..

SQL> alter tablespace tts read write;

Tablespace altered.

SQL> alter tablespace example read write;

Tablespace altered.

***********ON SOURCE – UPDATE 2**********

SQL> conn sh/sh
Connected.
SQL> update mycustomers set cust_city='Hong Kong';

55500 rows updated.

SQL> commit;

Commit complete.

Note:

As we make changes in the source database, the trail files on the target start getting populated. These are located in the destination we specified when creating the RMTTRAIL.

[oracle@redhat346 dirdat]$ pwd
/u01/oracle/ggs/dirdat

[oracle@redhat346 dirdat]$ ls -lrt

-rw-rw-rw- 1 oracle oinstall 9999950 Mar 8 09:41 gs000000
-rw-rw-rw- 1 oracle oinstall 9999641 Mar 8 09:41 gs000001
-rw-rw-rw- 1 oracle oinstall 9999629 Mar 8 10:00 gs000003
-rw-rw-rw- 1 oracle oinstall 9999724 Mar 8 10:00 gs000002
-rw-rw-rw- 1 oracle oinstall 9999741 Mar 8 10:00 gs000004
-rw-rw-rw- 1 oracle oinstall 2113226 Mar 8 10:00 gs000005
-rw-rw-rw- 1 oracle oinstall 9999791 Mar 8 10:35 rm000000
-rw-rw-rw- 1 oracle oinstall 9999721 Mar 8 10:35 rm000001
-rw-rw-rw- 1 oracle oinstall 9999249 Mar 8 10:49 rm000003
-rw-rw-rw- 1 oracle oinstall 9999309 Mar 8 10:49 rm000002
-rw-rw-rw- 1 oracle oinstall 9999818 Mar 8 10:49 rm000004
-rw-rw-rw- 1 oracle oinstall 9999430 Mar 8 10:49 rm000005
-rw-rw-rw- 1 oracle oinstall 9999412 Mar 8 10:49 rm000006
-rw-rw-rw- 1 oracle oinstall 9999588 Mar 8 10:54 rm000007
-rw-rw-rw- 1 oracle oinstall 9999481 Mar 8 10:54 rm000009
-rw-rw-rw- 1 oracle oinstall 9999399 Mar 8 10:54 rm000008
-rw-rw-rw- 1 oracle oinstall 9999787 Mar 8 10:54 rm000010
-rw-rw-rw- 1 oracle oinstall 9999770 Mar 8 10:57 rm000011
-rw-rw-rw- 1 oracle oinstall 9999941 Mar 8 10:57 rm000012
-rw-rw-rw- 1 oracle oinstall 9999913 Mar 8 10:57 rm000013
-rw-rw-rw- 1 oracle oinstall 9999429 Mar 8 11:09 rm000014
-rw-rw-rw- 1 oracle oinstall 9999812 Mar 8 11:09 rm000015
-rw-rw-rw- 1 oracle oinstall 9999240 Mar 8 11:09 rm000016
-rw-rw-rw- 1 oracle oinstall 9999454 Mar 8 11:09 rm000017
-rw-rw-rw- 1 oracle oinstall 9999914 Mar 8 11:09 rm000018
-rw-rw-rw- 1 oracle oinstall 9999820 Mar 8 11:16 rm000019
-rw-rw-rw- 1 oracle oinstall 9999766 Mar 8 11:16 rm000020
-rw-rw-rw- 1 oracle oinstall 9999706 Mar 8 12:56 rm000021
-rw-rw-rw- 1 oracle oinstall 9999577 Mar 8 12:56 rm000022
-rw-rw-rw- 1 oracle oinstall 9999841 Mar 8 12:56 rm000023
-rw-rw-rw- 1 oracle oinstall 9999890 Mar 8 13:26 rm000024
-rw-rw-rw- 1 oracle oinstall 9999604 Mar 8 13:26 rm000025
-rw-rw-rw- 1 oracle oinstall 9999536 Mar 8 13:26 rm000026
-rw-rw-rw- 1 oracle oinstall 918990 Mar 8 13:26 rm000027

9) On the target Linux environment we now create and start the GoldenGate Replicat process (or processes). The Replicat reads the remote trail files written by the Extract created in step 1 and applies the changes to the 11g database.

GGSCI (redhat346.localdomain) 1> add replicat myrep, exttrail /u01/oracle/ggs/dirdat/rm
REPLICAT added.

GGSCI (redhat346.localdomain) 6> edit params myrep

REPLICAT myrep
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/dbhome_1")
SETENV (ORACLE_SID = "db11g")
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP sh.*, TARGET sh.*;
MAP pm.*, TARGET pm.*;
MAP oe.*, TARGET oe.*;
MAP hr.*, TARGET hr.*;
MAP ix.*, TARGET ix.*;
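The start command for the Replicat is not shown in the capture that follows; it would be along these lines (the GGSCI prompt number is illustrative):

GGSCI (redhat346.localdomain) 7> start replicat myrep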

10) Once all the changes in the trail files have been applied by the Replicat process and we have confirmed that source and target are in sync (another GoldenGate product, Veridata, can be used for this), we can point the users and applications to the 11g Linux database with no or minimal downtime, depending on the infrastructure.

We can see the Replicat process working through the trail files until it has finished processing them all:

GGSCI (redhat346.localdomain) 131> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:42 Status RUNNING
Checkpoint Lag 03:07:37 (updated 00:00:17 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000002
2010-03-08 10:35:27.001328 RBA 6056361
…….
………..

GGSCI (redhat346.localdomain) 156> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:42 Status RUNNING
Checkpoint Lag 02:53:49 (updated 00:00:00 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000007
2010-03-08 10:49:39.001103 RBA 2897635

………………
……………..

GGSCI (redhat346.localdomain) 133> info replicat myrep

REPLICAT MYREP Last Started 2010-03-08 13:48 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:07 ago)
Log Read Checkpoint File /u01/oracle/ggs/dirdat/rm000027
2010-03-08 13:26:43.000861 RBA 918990

GGSCI (redhat346.localdomain) 134> lag replicat myrep

Sending GETLAG request to REPLICAT MYREP …
Last record lag: 1363 seconds.
At EOF, no more records to process.
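As a quick supplementary sanity check (purely illustrative here – a thorough comparison would use Veridata as mentioned above), the row counts of a representative table can be compared on source and target once the Replicat reports EOF:

SQL> select count(*) from sh.mycustomers;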

TEST!

Now check and confirm from the database that the second update statement (UPDATE 2) run on the source database has been applied on the target:

SQL> select distinct cust_city from mycustomers;

CUST_CITY
——————————
Hong Kong

We can now point our clients to the upgraded 11g database!
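How the clients are repointed depends on the infrastructure; in the simplest case it is just a tnsnames.ora change on the client side, along the lines of the entry below (host, port and service name are illustrative):

DB11G =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = redhat346.localdomain)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = db11g))
  )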

Coming next in the series! – Installing and configuring GoldenGate Director …..

