Wednesday, March 31, 2010

Export from 11g and Import into 10g Database

A common belief is that we can only export from a lower version of Oracle Database and import into a higher version database, not the other way round.

However, the reverse is also possible, as my activity here documents, but only with Data Pump, not with the original export/import utilities.

This procedure is also valid at the SCHEMA level and for a FULL DATABASE.
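For SCHEMA-level or FULL exports, the same VERSION parameter applies. A hedged sketch (the user, directory, and dump file names here are illustrative, not from the original activity):

```shell
expdp system SCHEMAS=MDORA directory=DATA_PUMP_DIR dumpfile=mdora_schema.dmp logfile=mdora_schema.log version=10.2.0.3
expdp system FULL=y directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=full.log version=10.2.0.3
```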

PRODUCTION : 10.2.0.3.0 (Target)
TEST: 11.1.0.6(Source)

I had to export some tables from Test (11.1.0.6) and import them into Production (10.2.0.3.0).

We just need to append the "VERSION" parameter to the expdp command, e.g.:

expdp MDORA TABLES=TSTOCK_CATEGDET,TSTOCK_SUBCATEGDET directory=DATA_PUMP_DIR dumpfile=stock.dmp logfile=stock.log version=10.2.0.3


C:\Documents and Settings\dbadmin>expdp MDORA TABLES=TSTOCK_CATEGDET,TSTOCK
_SUBCATEGDET directory=DATA_PUMP_DIR dumpfile=stock.dmp logfile=stock.log
version=10.2.0.3

Export: Release 11.1.0.6.0 - Production on Wednesday, 31 March, 2010 16:07:12

Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Produc
tion
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "MDORA"."SYS_EXPORT_TABLE_01": MDORA/******** TABLES=TSTOCK_CATEGD
ET,TSTOCK_SUBCATEGDET directory=DATA_PUMP_DIR dumpfile=stock.dmp logfile=stock.log version=10.2.0.3
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MDORA"."TSTOCK_CATEGDET" 5.671 KB 6 rows
. . exported "MDORA"."TSTOCK_SUBCATEGDET" 6.421 KB 13 rows
Master table "MDORA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MDORA.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\PRODUCT\10.2.0\ADMIN\TESTDB\DPDUMP\stock.DMP
Job "MDORA"."SYS_EXPORT_TABLE_01" successfully completed at 16:08:02


C:\Documents and Settings\dbadmin>impdp MDORA TABLES=TSTOCK_CATEGDET,TSTOCK
_SUBCATEGDET directory=DATA_PUMP_DIR dumpfile=stock.dmp logfile=stock_imp.
log

Import: Release 10.2.0.3.0 - 64bit Production on Wednesday, 31 March, 2010 16:14
:28

Copyright (c) 2003, 2005, Oracle. All rights reserved.
Password:



Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Master table "MDORA"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MDORA"."SYS_IMPORT_TABLE_01": MDORA/******** TABLES=TSTOCK_CATEGD
ET,TSTOCK_SUBCATEGDET directory=DATA_PUMP_DIR dumpfile=stock.dmp logfile=stock_imp.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MDORA"."TSTOCK_CATEGDET" 5.671 KB 6 rows
. . imported "MDORA"."TSTOCK_SUBCATEGDET" 6.421 KB 13 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "MDORA"."SYS_IMPORT_TABLE_01" successfully completed at 16:15:03

Wednesday, March 17, 2010

Creating Physical Standby Database

Managing Standby Database

I have the following environment :
Primary : primary (node1 – rac01 , node2 – rac02) [ 2 node RAC with ASM ]
Standby (local ) : standby1 [ ASM ]
Standby (DR site) : standby2 [ASM]
Operating System : Windows 2003
Scenario :

I found that I have an archive log gap on my standby1 (local standby), but I was not as lucky as in the previous scenario: the archive logs had been deleted from all the servers.

The only option I am left with is to recreate the standby database.

Please note: recreating a standby database is NOT the same as creating one for the first time; some steps can be omitted.
The steps described below that are marked "OPTIONAL" can be skipped while recreating a standby database.
Step 1
Enable Force Logging

SQL> SELECT force_logging FROM v$database;

SQL> ALTER DATABASE FORCE LOGGING;  -- on the primary database

This statement may take a considerable amount of time to complete, because it waits for all unlogged direct write I/O operations to finish.

Step 2
Ensure that your primary database is in ARCHIVELOG mode

SQL> ARCHIVE LOG LIST;
Confirm that the Database log mode shows "Archive Mode".

If it is not in archive log mode then enable archive log mode in the RAC environment as per following.

1) Set cluster_database=false for the instance:
SQL> alter system set cluster_database=false scope=spfile sid='primary1';
2) Shut down all the instances accessing the database:
C:\ srvctl stop database -d primary
3) Mount the database using the local instance:
SQL> startup mount;
4) Enable archiving:
SQL> alter database archivelog;
5) Set cluster_database=true again for the instance primary1:
SQL> alter system set cluster_database=true scope=spfile sid='primary1';
6) Shut down the local instance:
SQL> shutdown;
7) Bring up all the instances:
C:\ srvctl start database -d primary

Once the database is in ARCHIVELOG mode, each instance archives its redo logs automatically.
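A quick sanity check after the restart; these are standard dictionary queries, not part of the original transcript:

```sql
-- Expect ARCHIVELOG once the steps above are complete.
SELECT log_mode FROM v$database;

-- Shows the archiving status and current log sequence for the instance.
ARCHIVE LOG LIST
```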

Step 3
Gathering Files and Perform Backup

1. On the primary node, create a staging directory. For example:
G:\ mkdir RAC_BACKUP
2. Create the same exact path on the standby host:
G:\ mkdir RAC_BACKUP
3. On the primary node, connect to the primary database and create a PFILE from the SPFILE in the staging directory. For example:
SQL> CREATE PFILE='g:\RAC_BACKUP\initprimary.ora' FROM SPFILE;
4. On the primary node (RAC01), perform an RMAN backup of the primary database that places the backup pieces into the staging directory. For example:
C:\ rman target /
RMAN> backup device type disk format '\\RAC02\RAC_BACKUP\bkp_ctl_stdby_%U' tag 'standby_ctl' current controlfile for standby;
RMAN> run {
backup incremental level 0 tag 'standby_standby1' format '\\RAC02\RAC_BACKUP\RMAN_primary_db_f_%T_%s.bak'
(database include current controlfile);
backup archivelog all tag 'standby_arc' format '\\RAC02\RAC_BACKUP\RMAN_primary_arc_f_%T_%s.bak' delete input;
}

5. Place a copy of the listener.ora, tnsnames.ora, and sqlnet.ora files into the staging directory. For example:
copy %ORACLE_HOME%\network\admin\*.ora G:\RAC_BACKUP
6. Copy the contents of the staging directory on the RAC node to the staging directory on the standby host. For example:
G:\ copy G:\RAC_BACKUP\* \\localstandby\RAC_BACKUP\

7. Catalog the backup pieces with RMAN:
RMAN> CATALOG START WITH '\\localstandby\RAC_BACKUP\';

searching for all files that match the pattern \\localstandby\RAC_BACKUP\

List of Files Unknown to the Database
=====================================
File Name: \\LOCALSTANDBY\RAC_BACKUP\BKP_CTL_STDBY_1PL8EVUR_1_1.BKP
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_ARC_F_20100313_1087.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_ARC_F_20100313_1088.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_DB_F_20100313_1085.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_DB_F_20100313_1086.BAK

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: \\LOCALSTANDBY\RAC_BACKUP\BKP_CTL_STDBY_1PL8EVUR_1_1.BKP
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_ARC_F_20100313_1087.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_ARC_F_20100313_1088.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_DB_F_20100313_1085.BAK
File Name: \\LOCALSTANDBY\RAC_BACKUP\RMAN_primary_DB_F_20100313_1086.BAK

Step 4 (Optional)
Configure Oracle Net Services on the Standby


1. Copy the listener.ora, tnsnames.ora, and sqlnet.ora files from the staging directory on the standby host to the $ORACLE_HOME/network/admin directory on the standby host.
2. Modify the listener.ora file on the standby host to contain the hostname of the standby host.
3. Modify the tnsnames.ora file on each node, including the primary RAC nodes and standby host, to contain all primary and standby net service names. In this example, each tnsnames.ora file should contain all three of the net service names in the following table:

Primary net service names: primary1, primary2
Standby net service name: standby1
primary1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = primary.san.com)
(INSTANCE_NAME = primary1)
)
)

standby1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 150.1.1.243)(PORT = 1521))
)
(CONNECT_DATA =
(SID = standby1)
(SERVER = Dedicated)
(SERVICE_NAME = primary)
)
)


primary2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac02-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = primary.san.com)
(INSTANCE_NAME = primary2)
)
)


Step 5 (Optional)
Create the Physical Standby Instance and Database

If the standby system is running on Windows, use the ORADIM utility to create a Windows service. For example:
C:\ oradim -NEW -SID standby1 -STARTMODE manual

The password file is created under:
D:\oracle\product\10.2.0\db_1\database\PWDstandby1.ORA

C:\ orapwd file=PWDstandby1.ORA password=<> entries=10
--- The .ORA extension is essential; otherwise it will give an error
saying insufficient privilege.
Copy and rename the primary database PFILE from the staging area on the standby host to the $ORACLE_HOME/dbs directory on the standby host. For example:
$copy initprimary.ora $ORACLE_HOME/dbs/initstandby1.ora
Modify the standby initialization parameter file copied from the primary node to remove RAC parameters and to include Data Guard parameters as illustrated in the following table:
Initialization Parameter Modifications

RAC parameters (Before):
*.cluster_database=true
*.db_unique_name=primary
RAC01.instance_name=primary1
RAC02.instance_name=primary2
RAC01.instance_number=1
RAC02.instance_number=2
RAC01.thread=1
RAC02.thread=2
RAC01.undo_tablespace=UNDOTBS1
RAC02.undo_tablespace=UNDOTBS2
*.remote_listener=LISTENERS_primary
RAC01.LOCAL_LISTENER=LISTENER_primary_HOST1
RAC02.LOCAL_LISTENER=LISTENER_primary_HOST2

RAC parameters (After):
*.cluster_database=false
*.db_unique_name=standby1
*.instance_name=standby1
*.thread=1
*.undo_tablespace=UNDOTBS1

Data Guard parameters (added):
*.log_archive_config='dg_config=(primary,standby1)'
*.log_archive_dest_3='service=primary1 valid_for=(online_logfiles,primary_role) db_unique_name=standby1'
*.standby_file_management=AUTO
*.fal_server='primary1','primary2'
*.fal_client='standby1'
*.service_names='standby1'


Connect to the ASM instance on the standby host, and create a directory within the DATA disk group that has the same name as the DB_UNIQUE_NAME of the physical standby database. For example:
SQL> alter diskgroup data add directory '+DATA/standby1';
SQL> alter diskgroup data add directory '+DATA/primary';
Connect to the physical standby database, with the standby in the IDLE state, and create an SPFILE in the standby DATA disk group:
SQL> CREATE SPFILE='+DATA/standby1/spfilestandby1.ora' FROM PFILE='?/dbs/initstandby1.ora';
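On Windows with the spfile stored inside ASM, the instance is usually started from a one-line PFILE that points at it. A sketch under that assumption (the pointer file name and location are illustrative):

```sql
-- %ORACLE_HOME%\database\initstandby1.ora (assumed pointer file):
SPFILE='+DATA/standby1/spfilestandby1.ora'
```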

Create the dump directories on the standby host as referenced in the standby initialization parameter file. For example:
C:\ mkdir D:\oracle\product\10.2.0\admin\primary\bdump
C:\ mkdir D:\oracle\product\10.2.0\admin\primary\cdump
C:\ mkdir D:\oracle\product\10.2.0\admin\primary\udump
C:\ mkdir D:\oracle\product\10.2.0\admin\primary\adump


Step 6
Duplicate the Primary Database as the Standby

After setting up the appropriate environment variables on the standby host, such as ORACLE_SID, ORACLE_HOME, and PATH, start the physical standby database instance without mounting the control file.
SQL> STARTUP NOMOUNT
From the standby host, duplicate the primary database as a standby into the ASM disk group. For example:
C:\ rman target sys/oracle@primary1 auxiliary /
RMAN> duplicate target database for standby nofilenamecheck;

Starting Duplicate Db at 15-MAR-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=684 devtype=DISK

contents of Memory Script:
{
restore clone standby controlfile;
sql clone 'alter database mount standby database';
}
executing Memory Script

Starting restore at 15-MAR-10
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece \\RAC02\RAC_BACKUP\BKP_CTL_STD
BY_2PL8KFTC_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=\\RAC02\RAC_BACKUP\BKP_CTL_STDBY_2PL8KFTC_1_1 tag=STANDBY_CTL
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17
output filename=+DATA/standby1/controlfile/current.311.713707301
output filename=+IDX/standby1/controlfile/current.283.713707303
output filename=+UNDO/standby1/controlfile/current.265.713707303
Finished restore at 15-MAR-10


channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DATA
restoring datafile 00002 to +UNDO
restoring datafile 00003 to +DATA
restoring datafile 00004 to +UNDO
restoring datafile 00005 to +DATA
restoring datafile 00006 to +DATA/standby1/datafile/sanauditprimary.dbf
restoring datafile 00007 to +DATA/standby1/datafile/sanprojprimary.dbf
restoring datafile 00008 to +IDX/standby1/datafile/sanindx10aprimary.dbf
restoring datafile 00009 to +DATA/standby1/datafile/sanindx10primary.dbf
restoring datafile 00010 to +IDX/standby1/datafile/sanindx11aprimary.dbf
restoring datafile 00011 to +DATA/standby1/datafile/sanindx11primary.dbf
restoring datafile 00012 to +IDX/standby1/datafile/sanindx12primary.dbf
restoring datafile 00013 to +IDX/standby1/datafile/sanindx13primary.dbf
restoring datafile 00014 to +IDX/standby1/datafile/sanindx14primary.dbf
restoring datafile 00015 to +IDX/standby1/datafile/sanindx15primary.dbf
restoring datafile 00016 to +IDX/standby1/datafile/sanindx16primary.dbf
restoring datafile 00017 to +IDX/standby1/datafile/sanindx2primary.dbf
restoring datafile 00018 to +IDX/standby1/datafile/sanindx3primary.dbf
restoring datafile 00019 to +IDX/standby1/datafile/sanindx6primary.dbf
restoring datafile 00020 to +IDX/standby1/datafile/sanindx71primary.dbf
restoring datafile 00021 to +DATA/standby1/datafile/sanindx7aprimary.dbf
restoring datafile 00022 to +DATA/standby1/datafile/sanindx7primary.dbf
restoring datafile 00023 to +IDX/standby1/datafile/sanindx8primary.dbf
restoring datafile 00024 to +IDX/standby1/datafile/sanindx9primary.dbf
restoring datafile 00025 to +DATA/standby1/datafile/sanora1primary.dbf
restoring datafile 00026 to +DATA/standby1/datafile/sanora1aprimary.dbf
restoring datafile 00027 to +DATA/standby1/datafile/sanora10primary.dbf
restoring datafile 00028 to +DATA/standby1/datafile/sanora11primary.dbf
restoring datafile 00029 to +DATA/standby1/datafile/sanora11aprimary.dbf
restoring datafile 00030 to +DATA/standby1/datafile/sanora12primary.dbf
restoring datafile 00031 to +DATA/standby1/datafile/sanora13primary.dbf
restoring datafile 00032 to +DATA/standby1/datafile/sanora14primary.dbf
restoring datafile 00033 to +DATA/standby1/datafile/sanora14aprimary.dbf
restoring datafile 00034 to +DATA/standby1/datafile/sanora15primary.dbf
restoring datafile 00035 to +DATA/standby1/datafile/sanora16primary.dbf
restoring datafile 00036 to +DATA/standby1/datafile/sanora2primary.dbf
restoring datafile 00037 to +DATA/standby1/datafile/sanora3primary.dbf
restoring datafile 00038 to +DATA/standby1/datafile/sanora4primary.dbf
restoring datafile 00039 to +DATA/standby1/datafile/sanora5primary.dbf
restoring datafile 00040 to +DATA/standby1/datafile/sanora6primary.dbf
restoring datafile 00041 to +DATA/standby1/datafile/sanora7primary.dbf
restoring datafile 00042 to +DATA/standby1/datafile/sanora8primary.dbf
restoring datafile 00043 to +DATA/standby1/datafile/sanora9primary.dbf
restoring datafile 00044 to +UNDO/standby1/datafile/sanora0primary.dbf
restoring datafile 00045 to +DATA/standby1/datafile/saneul01.dbf
restoring datafile 00046 to +DATA/standby1/datafile/sanworkflow.dbf
channel ORA_AUX_DISK_1: reading from backup piece \\RAC02\RAC_BACKUP\RMAN_primary_D
B_F_20100315_1115.BAK
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=\\RAC02\RAC_BACKUP\RMAN_primary_DB_F_20100315_1115.BAK tag=STANDBY_standby1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:58:15
Finished restore at 15-MAR-10

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy recid=98 stamp=713710840 filename=+DATA/standby1/datafile/system.
304.713709767
datafile 2 switched to datafile copy
input datafile copy recid=99 stamp=713710840 filename=+UNDO/standby1/datafile/undotbs
1.258.713707345
datafile 3 switched to datafile copy
input datafile copy recid=100 stamp=713710840 filename=+DATA/standby1/datafile/sysaux
.309.713709327
datafile 4 switched to datafile copy
input datafile copy recid=101 stamp=713710840 filename=+UNDO/standby1/datafile/undotb
s2.264.713707343
datafile 5 switched to datafile copy
input datafile copy recid=102 stamp=713710840 filename=+DATA/standby1/datafile/users.
290.713710563
datafile 6 switched to datafile copy
input datafile copy recid=103 stamp=713710840 filename=+DATA/standby1/datafile/sanaud
itprimary.dbf
datafile 7 switched to datafile copy
input datafile copy recid=104 stamp=713710840 filename=+DATA/standby1/datafile/sancoa
tprimary.dbf
datafile 8 switched to datafile copy
input datafile copy recid=105 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
10aprimary.dbf
datafile 9 switched to datafile copy
input datafile copy recid=106 stamp=713710840 filename=+DATA/standby1/datafile/sanind
x10primary.dbf
datafile 10 switched to datafile copy
input datafile copy recid=107 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
11aprimary.dbf
datafile 11 switched to datafile copy
input datafile copy recid=108 stamp=713710840 filename=+DATA/standby1/datafile/sanind
x11primary.dbf
datafile 12 switched to datafile copy
input datafile copy recid=109 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
12primary.dbf
datafile 13 switched to datafile copy
input datafile copy recid=110 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
13primary.dbf
datafile 14 switched to datafile copy
input datafile copy recid=111 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
14primary.dbf
datafile 15 switched to datafile copy
input datafile copy recid=112 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
15primary.dbf
datafile 16 switched to datafile copy
input datafile copy recid=113 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
16primary.dbf
datafile 17 switched to datafile copy
input datafile copy recid=114 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
2primary.dbf
datafile 18 switched to datafile copy
input datafile copy recid=115 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
3primary.dbf
datafile 19 switched to datafile copy
input datafile copy recid=116 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
6primary.dbf
datafile 20 switched to datafile copy
input datafile copy recid=117 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
71primary.dbf
datafile 21 switched to datafile copy
input datafile copy recid=118 stamp=713710840 filename=+DATA/standby1/datafile/sanind
x7aprimary.dbf
datafile 22 switched to datafile copy
input datafile copy recid=119 stamp=713710840 filename=+DATA/standby1/datafile/sanind
x7primary.dbf
datafile 23 switched to datafile copy
input datafile copy recid=120 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
8primary.dbf
datafile 24 switched to datafile copy
input datafile copy recid=121 stamp=713710840 filename=+IDX/standby1/datafile/sanindx
9primary.dbf
datafile 25 switched to datafile copy
input datafile copy recid=122 stamp=713710840 filename=+DATA/standby1/datafile/sanora
1primary.dbf
datafile 26 switched to datafile copy
input datafile copy recid=123 stamp=713710841 filename=+DATA/standby1/datafile/sanora
1aprimary.dbf
datafile 27 switched to datafile copy
input datafile copy recid=124 stamp=713710841 filename=+DATA/standby1/datafile/sanora
10primary.dbf
datafile 28 switched to datafile copy
input datafile copy recid=125 stamp=713710841 filename=+DATA/standby1/datafile/sanora
11primary.dbf
datafile 29 switched to datafile copy
input datafile copy recid=126 stamp=713710841 filename=+DATA/standby1/datafile/sanora
11aprimary.dbf
datafile 30 switched to datafile copy
input datafile copy recid=127 stamp=713710841 filename=+DATA/standby1/datafile/sanora
12primary.dbf
datafile 31 switched to datafile copy
input datafile copy recid=128 stamp=713710841 filename=+DATA/standby1/datafile/sanora
13primary.dbf
datafile 32 switched to datafile copy
input datafile copy recid=129 stamp=713710841 filename=+DATA/standby1/datafile/sanora
14primary.dbf
datafile 33 switched to datafile copy
input datafile copy recid=130 stamp=713710841 filename=+DATA/standby1/datafile/sanora
14aprimary.dbf
datafile 34 switched to datafile copy
input datafile copy recid=131 stamp=713710841 filename=+DATA/standby1/datafile/sanora
15primary.dbf
datafile 35 switched to datafile copy
input datafile copy recid=132 stamp=713710841 filename=+DATA/standby1/datafile/sanora
16primary.dbf
datafile 36 switched to datafile copy
input datafile copy recid=133 stamp=713710841 filename=+DATA/standby1/datafile/sanora
2primary.dbf
datafile 37 switched to datafile copy
input datafile copy recid=134 stamp=713710841 filename=+DATA/standby1/datafile/sanora
3primary.dbf
datafile 38 switched to datafile copy
input datafile copy recid=135 stamp=713710841 filename=+DATA/standby1/datafile/sanora
4primary.dbf
datafile 39 switched to datafile copy
input datafile copy recid=136 stamp=713710841 filename=+DATA/standby1/datafile/sanora
5primary.dbf
datafile 40 switched to datafile copy
input datafile copy recid=137 stamp=713710841 filename=+DATA/standby1/datafile/sanora
6primary.dbf
datafile 41 switched to datafile copy
input datafile copy recid=138 stamp=713710841 filename=+DATA/standby1/datafile/sanora
7primary.dbf
datafile 42 switched to datafile copy
input datafile copy recid=139 stamp=713710841 filename=+DATA/standby1/datafile/sanora
8primary.dbf
datafile 43 switched to datafile copy
input datafile copy recid=140 stamp=713710841 filename=+DATA/standby1/datafile/sanora
9primary.dbf
datafile 44 switched to datafile copy
input datafile copy recid=141 stamp=713710841 filename=+UNDO/standby1/datafile/sanora
0primary.dbf
datafile 45 switched to datafile copy
input datafile copy recid=142 stamp=713710841 filename=+DATA/standby1/datafile/saneul
01.dbf
datafile 46 switched to datafile copy
input datafile copy recid=143 stamp=713710841 filename=+DATA/standby1/datafile/sanwor
kflow.dbf
Finished Duplicate Db at 15-MAR-10

RMAN>



========== WATCH OUT !!! ========
Error:
WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to diskgroup only.
Workaround :
In the standby parameter file, DO NOT specify full ASM file names for the control files:
*.control_files='+DATA/standby1/control01.ctl','+IDX/standby1/control02.ctl','+UNDO/standby1/controlfile03.ctl'

Instead, specify only the disk groups:
*.control_files='+DATA','+IDX','+UNDO'

========== WATCH OUT !!! ========



Connect to the physical standby database, and create the standby redo logs to support the standby role. The standby redo logs must be the same size as the primary database online redo logs. The recommended number of standby redo logs is:
(maximum # of logfiles per thread + 1) * maximum # of threads

We are using 10 online log files for each thread. Thus, the number of standby redo logs should be (10 + 1) * 2 = 22; that is, one more standby redo log than online logs for each thread. For example:
ALTER DATABASE ADD STANDBY LOGFILE
GROUP 11 ('+DATA/primary/redo11.log') SIZE 51200K;
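The sizing arithmetic above can be expressed as a tiny helper; a sketch (the function name is mine, not Oracle's):

```python
def standby_redo_count(online_logs_per_thread: int, threads: int) -> int:
    """Recommended number of standby redo logs:
    (max logfiles per thread + 1) * max threads."""
    return (online_logs_per_thread + 1) * threads

# 10 online logs per thread, 2 RAC threads:
print(standby_redo_count(10, 2))  # 22
```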
You can check the number and group numbers of the redo logs by querying the V$LOG view:
SQL> SELECT * FROM V$LOG;

You can check the results of the previous statements by querying the V$STANDBY_LOG view:
SQL> SELECT * FROM V$STANDBY_LOG;
You can also see the members created by querying the V$LOGFILE view:
SQL> SELECT * FROM V$LOGFILE;
Start managed recovery and real-time apply on the standby database:
SQL> ALTER DATABASE recover managed standby database using current logfile disconnect;
========== WATCH OUT !!! ========


----- There is a catch in Step 6 -----
The RMAN DUPLICATE command will never create the standby redo log files.
We can easily spot the resulting errors by crosschecking, like this:

DGMGRL> show database 'primary' 'StatusReport';
STATUS REPORT
INSTANCE_NAME SEVERITY ERROR_TEXT
primary1 WARNING ORA-16715: redo transport-related property LogXp
tMode of standby database "standby1" is inconsistent
primary1 WARNING ORA-16715: redo transport-related property LogXp
tMode of standby database "standby2" is inconsistent
primary2 WARNING ORA-16715: redo transport-related property LogXp
tMode of standby database "standby1" is inconsistent
primary2 WARNING ORA-16715: redo transport-related property LogXp
tMode of standby database "standby2" is inconsistent
(OR)
SQL> select message from v$dataguard_status where severity like 'Warning';

MESSAGE
--------------------------------------------------------------------------------

RFS[14]: No standby redo logfiles created
RFS[9]: No standby redo logfiles created
RFS[13]: No standby redo logfiles created
RFS[14]: No standby redo logfiles created
RFS[14]: No standby redo logfiles created
RFS[13]: No standby redo logfiles created
RFS[14]: No standby redo logfiles created
RFS[14]: No standby redo logfiles created
RFS[13]: No standby redo logfiles created
RFS[13]: No standby redo logfiles created
RFS[14]: No standby redo logfiles created

(OR)


---- Alert.log message ----

Sun Mar 07 15:59:00 2010
The input backup piece \\backupserver\RAC_COLDBACKUP\RMAN_BKP_20100303_1009.BAK is in compressed format.
The input backup piece \\backupserver\RAC_COLDBACKUP\RMAN_BKP_20100303_1000.BAK is in compressed format.
Sun Mar 07 15:59:12 2010

Errors in file d:\oracle\product\10.2.0\admin\primary\udump\standby1_rfs_1948.trc:
ORA-00313: open failed for members of log group 11 of thread 1
ORA-00312: online log 11 thread 1: '+DATA/primary/redo11.log'
ORA-17503: ksfdopn:2 Failed to open file +DATA/primary/redo11.log
ORA-15173: entry 'redo11.log' does not exist in directory 'primary'

Sun Mar 07 15:59:52 2010
Errors in file d:\oracle\product\10.2.0\admin\primary\udump\standby1_rfs_1948.trc:
ORA-00313: open failed for members of log group 11 of thread 1
ORA-00312: online log 11 thread 1: '+DATA/primary/redo11.log'
ORA-17503: ksfdopn:2 Failed to open file +DATA/primary/redo11.log
ORA-15173: entry 'redo11.log' does not exist in directory 'primary'

--- Alert.log message -----

========== WATCH OUT !!! ========




PROCEDURE TO RECREATE:

Since the standby redo log files are not created by the RMAN DUPLICATE command, we have to create them manually.


But before that we need to check a few things .



The calculation for the number of standby redo logs for the primary is:
(maximum # of logfiles per thread + 1) * maximum # of threads


Query v$log and v$logfile on PRIMARY

SQL> select type,member from v$logfile;

TYPE MEMBER
---------------------------------------- ---------------------------------------
-
ONLINE +DATA/primary/redo01.log
ONLINE +DATA/primary/redo02.log
ONLINE +IDX/primary/redo03.log
ONLINE +UNDO/primary/redo04.log
ONLINE +DATA/primary/redo05.log
ONLINE +UNDO/primary/redo06.log
ONLINE +DATA/primary/redo07.log
ONLINE +DATA/primary/redo08.log
ONLINE +UNDO/primary/redo09.log
ONLINE +DATA/primary/redo010.log
STANDBY +DATA/primary/redo11.log
STANDBY +DATA/primary/redo12.log
STANDBY +DATA/primary/redo13.log
STANDBY +DATA/primary/redo14.log
STANDBY +DATA/primary/redo15.log
STANDBY +DATA/primary/redo16.log
STANDBY +DATA/primary/redo17.log
STANDBY +DATA/primary/redo18.log
STANDBY +DATA/primary/redo19.log
STANDBY +DATA/primary/redo20.log
STANDBY +DATA/primary/redo21.log
STANDBY +DATA/primary/redo22.log

22 rows selected.









Now check the STANDBY :


SQL> select type,member from v$logfile;

TYPE MEMBER
---------------------------------------- ---------------------------------------
-
ONLINE +DATA/primary/redo01.log
ONLINE +DATA/primary/redo02.log
ONLINE +IDX/primary/redo03.log
ONLINE +UNDO/primary/redo04.log
ONLINE +DATA/primary/redo05.log
ONLINE +UNDO/primary/redo06.log
ONLINE +DATA/primary/redo07.log
ONLINE +DATA/primary/redo08.log
ONLINE +UNDO/primary/redo09.log
ONLINE +DATA/primary/redo010.log
10 rows selected.

STEP 1:

SQL> alter system set standby_file_management=manual;

System altered.

STEP 2: PLEASE NOTE: the online redo log file size on the PRIMARY must be equal to the standby redo log file size on the STANDBY.


Now we will create standby redo logs.

alter database add standby logfile '+DATA/primary/redo11.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo12.log' size 52428800;
alter database add standby logfile '+IDX/primary/redo13.log' size 52428800;
alter database add standby logfile '+UNDO/primary/redo14.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo15.log' size 52428800;
alter database add standby logfile '+UNDO/primary/redo16.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo17.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo18.log' size 52428800;
alter database add standby logfile '+UNDO/primary/redo19.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo20.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo21.log' size 52428800;
alter database add standby logfile '+DATA/primary/redo22.log' size 52428800;
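The twelve statements above follow a simple pattern; a hypothetical generator (the disk group mapping is taken from the list above, and the helper itself is not part of the original procedure):

```python
# Groups 13, 14, 16 and 19 live outside +DATA in the list above; the rest default to +DATA.
DISKGROUP = {13: "+IDX", 14: "+UNDO", 16: "+UNDO", 19: "+UNDO"}

def standby_logfile_ddl(groups, size_bytes=52428800):
    """Emit one ALTER DATABASE ADD STANDBY LOGFILE statement per group number."""
    return [
        f"alter database add standby logfile "
        f"'{DISKGROUP.get(g, '+DATA')}/primary/redo{g}.log' size {size_bytes};"
        for g in groups
    ]

for stmt in standby_logfile_ddl(range(11, 23)):
    print(stmt)
```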


SQL> alter system set standby_file_management=auto;



Queries to verify the standby database is created properly:

SQL> SELECT low_sequence#, high_sequence# FROM v$archive_gap;

------- Should return "no rows selected", else follow Scenario 1 ----

SQL> SELECT MAX(sequence#) FROM v$archived_log;

SQL> SELECT sequence#, applied, status FROM v$archived_log;

SQL> SELECT process, status, client_process FROM v$managed_standby;

------- You should be able to see the MRP0 process -----
To verify further, check the alert.log:

==Alert.log message when Managed Recovery Process (MRP0) started ===

MRP0 started with pid=32, OS id=5400
Managed Standby Recovery starting Real Time Apply
parallel recovery started with 3 processes
Media Recovery Waiting for thread 1 sequence 8983 (in transit)
Mon Mar 15 15:40:02 2010
Recovery of Online Redo Log: Thread 1 Group 24 Seq 8983 Reading mem 0
Mem# 0: +DATA/primary/redo12.log

==Alert.log message when Managed Recovery Process (MRP0) started ===


SQL> SELECT dest_name, error FROM v$archive_dest;

SQL> SELECT sequence#, first_time, next_time FROM v$archived_log ORDER BY sequence# DESC;

Resolving Archive Gaps in Standby Database

Primary : primary (node1 – rac01 , node2 – rac02) [ 2 node RAC with ASM ]
Standby (local ) : standby1 [ ASM ]
Standby (DR site) : standby2 [ASM]
Operating System : Windows 2003

Scenario :


I found that I have an archive log gap on my standby1 (local standby), but luckily the archive logs missing on standby1 were still available on the primary.


1> Run this query on the standby database:

SQL> SELECT low_sequence#, high_sequence# FROM v$archive_gap;

no rows selected

This means there is no archive log gap in the standby database.

If instead the query returns LOW_SEQUENCE#/HIGH_SEQUENCE# rows, follow this procedure:

Query v$archived_log on the PRIMARY to find the location of the missing archive log files.

SQL> SELECT name
2 FROM v$archived_log
3 where sequence# BETWEEN 10270 and 10271;

NAME
--------------------------------------------------------------------------------

E:\ARCHIVE\ARC10270_0647314995.002
+ARCH/primary/arc10270_0647314995.002
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=DBDRC)(PORT=1521)))(CONN

ECT_DATA=(SERVICE_NAME=standby2_XPT.san.com)(INSTANCE_NAME=standby2)(SERVER=dedicated)))


E:\ARCHIVE\ARC10271_0647314995.002
+ARCH/primary/arc10271_0647314995.002
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=DBDRC)(PORT=1521)))(CONN

ECT_DATA=(SERVICE_NAME=standby2_XPT.san.com)(INSTANCE_NAME=standby2)(SERVER=dedicated)))


2> Copy the archive logs from the PRIMARY database to the STANDBY database.


On the STANDBY :

SQL> alter database register logfile 'E:\archive\ARC10270_0647314995.002';

Database altered.

SQL> alter database register logfile 'E:\archive\ARC10271_0647314995.002';

Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> Select Low_Sequence#,High_Sequence# From V$Archive_Gap;

no rows selected
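As an aside, the sequence numbers to copy can be expanded mechanically from the LOW_SEQUENCE#/HIGH_SEQUENCE# pairs returned by V$ARCHIVE_GAP; a hypothetical helper:

```python
def gap_sequences(gaps):
    """Expand (low, high) rows from V$ARCHIVE_GAP into the full list of
    log sequence numbers that must be copied and registered."""
    return [seq for low, high in gaps for seq in range(low, high + 1)]

# The gap resolved above spanned sequences 10270 to 10271:
print(gap_sequences([(10270, 10271)]))  # [10270, 10271]
```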





========== WATCH OUT !!! ========


POSSIBLE ERROR MESSAGES
========================

SQL> select message from v$dataguard_status where severity like 'Warning';

MESSAGE
--------------------------------------------------------------------------------

RFS[1]: No standby redo logfiles created
RFS[2]: No standby redo logfiles created
RFS[2]: No standby redo logfiles created
RFS[2]: Successfully opened standby log 11: '+DATA/primary/redo11.log'
RFS[2]: Possible network disconnect with primary database
RFS[1]: Possible network disconnect with primary database
RFS[4]: Successfully opened standby log 11: '+DATA/primary/redo11.log'
RFS[5]: Successfully opened standby log 11: '+DATA/primary/redo11.log'
RFS[6]: Successfully opened standby log 12: '+DATA/primary/redo12.log'
RFS[5]: Successfully opened standby log 11: '+DATA/primary/redo11.log'

--- Alert.log contents----


Wed Nov 18 08:58:06 2009
ARCH: Possible network disconnect with primary database
FAL[server]: Fail to queue the whole FAL gap
GAP - thread 1 sequence 6501-6501
DBID 1279187827 branch 647314995
ARCH: Possible network disconnect with primary database
Wed Nov 18 09:19:46 2009
Primary database is in MAXIMUM PERFORMANCE mode
RFS[4]: Successfully opened standby log 14: '+DATA/primary/redo14'
Wed Nov 18 09:19:49 2009
Fetching gap sequence in thread 1, gap sequence 5726-5822
Wed Nov 18 09:20:53 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 5726-5822
DBID 1279187827 branch 647314995
FAL[client]: All defined FAL servers have been attempted.


--- Alert.log contents------

========== WATCH OUT !!! ========