Data Guard Physical Standby Database

Introduction

Oracle Data Guard is one of the software solutions provided by Oracle Corporation to maximize the high availability of Oracle databases. It maintains one or more standby databases as alternatives to the primary production database.

Data Guard Architecture

Oracle Data Guard supports both physical standby and logical standby sites.

Physical Standby: As transactions on the primary database generate redo entries, a Redo Apply process keeps the standby databases as exact, block-for-block copies of the primary database.

Logical Standby: SQL Apply processes read the redo, convert it to SQL transactions, and apply them to the standby database.

Data Guard Modes

Oracle Data Guard can operate in three different modes:

  • Maximum Protection: Transactions are not allowed to commit until all redo data has been written to the online redo logs and propagated to at least one synchronized standby database. If for any reason the primary database cannot propagate its redo stream to at least one standby database, the primary shuts down to guarantee maximum protection.
  • Maximum Performance: Transactions are allowed to commit as soon as the redo data has been written to the online redo logs. The redo stream is propagated asynchronously to the standby databases, ensuring maximum performance.
  • Maximum Availability: Transactions are not allowed to commit until all redo data has been written to the online redo logs and propagated to at least one synchronized standby database. If for any reason the primary database cannot propagate its redo stream to a standby database, the primary does NOT shut down; it operates as if it were in maximum performance mode until the issue is fixed.
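The protection mode is changed on the primary database with a single statement. For example, to move to maximum availability (a minimal sketch; this demonstration keeps the default maximum performance mode):

SQL> alter database set standby database to maximize availability;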

Glossary

  • Role transition: Changing the role of a database from primary to standby or from standby to primary.
  • Switchover: A planned role transition, typically for testing or maintenance. Requires manual intervention.
  • Failover: A role transition after an unplanned failure. Intervention can be manual or automatic; automatic role transition is the recommended approach.
  • Primary database: The database the users connect to.
  • Standby database: Lives in the disaster recovery (DR) site. Users connect to it after a planned role transition (switchover) or an unplanned failure (failover).
  • Data Guard Observer: A process that monitors both the primary and standby databases and performs an automatic failover when necessary.
  • The Broker: The management framework for Oracle Data Guard. It is integrated into Oracle Database Enterprise Edition.
  • Fast-Start Failover (FSFO): Automatic failover to the standby database in case of failure. FSFO requires the broker.

The Environment

  • I have three VMs running Oracle Enterprise Linux 5.
  • I will use Oracle Database 11g Enterprise Edition Release 2 (11.2.0.1.0) for both the primary and the standby database.
  • Oracle software is installed and a database is already created on the primary site.
  • Oracle software is installed on the standby site; the standby database will be created during this demonstration.
  • On the third host, I have installed the Oracle Database 11g Client (11.2.0.1.0) with the Administrator option. This is the Data Guard Observer host.
  • The Oracle Home is at an identical path on both database nodes: primary and standby.
  • The primary database server name is dg1, with a database name of dg1.
  • The standby database server name is dg2, with a database name of dg2.
  • The Data Guard Observer server name is dg3.

For the rest of the document, I am going to implement a physical Data Guard environment in maximum performance mode. The document is split into two parts:

  1. Prepare the Data Guard environment.
  2. Create the standby database and verify the physical standby database. I will then configure the Data Guard Broker, enable Fast-Start Failover (FSFO), and run a Data Guard Observer.

Implementation notes

These are the steps to follow:

  1. Enable forced logging
  2. Create a password file
  3. Configure a standby redo log
  4. Enable archiving
  5. Set up the primary database initialization parameters
  6. Configure the listener and tnsnames to support the database on both nodes

Note: It is recommended to apply all patches on the primary and the standby before setting up the Data Guard environment.

Prepare the primary and the physical Standby Environments

Primary Server: Step 1

Oracle Data Guard requires the primary database to run in FORCE LOGGING mode. This means that statements using the NOLOGGING option still generate the redo information needed to maintain the Data Guard standby databases.

[oracle@dg1 ~]$ sqlplus / as sysdba

SQL> select force_logging from v$database;

FOR
---
NO

SQL> alter database force logging;

Database altered.

SQL>

Standby redo log files are required to store the redo received from the primary database. Each standby redo log file must be at least as large as the largest online redo log file of the primary database, and the standby redo log must have at least one more group than the online redo log on the primary database. It is recommended to create the standby redo log on the primary database as well, so that it is ready for any switchover or failover operation.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          7   52428800        512          2 YES INACTIVE                828952 15-JAN-12        849105 16-JAN-12
         2          1          8   52428800        512          2 NO  CURRENT                 849105 16-JAN-12    2.8147E+14
         3          1          6   52428800        512          2 YES INACTIVE                822304 15-JAN-12        828952 15-JAN-12

SQL> alter database add standby logfile size 50M;

Database altered.

SQL> 
SQL> alter database add standby logfile size 50M;

Database altered.

SQL> alter database add standby logfile size 50M;

Database altered.

SQL> alter database add standby logfile size 50M;

Database altered.

SQL> 

SQL> select * from v$logfile;
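To confirm the sizing rule above (one more standby group than online redo log groups, each at least as large), the standby redo log can be checked with a quick query against the V$STANDBY_LOG view; a minimal sketch:

SQL> select group#, thread#, bytes/1024/1024 as mb from v$standby_log;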

Next, set the LOG_ARCHIVE_CONFIG parameter. This parameter enables or disables the sending of redo streams to the standby sites. The DB_UNIQUE_NAME of the primary database is dg1 and the DB_UNIQUE_NAME of the standby database is dg2. The primary database is configured to ship its redo stream to the standby database; in this example, the standby database service is dg2.

Next, STANDBY_FILE_MANAGEMENT is set to AUTO so that when datafiles are added to or dropped from the primary database, the same changes are made automatically on the standby databases. STANDBY_FILE_MANAGEMENT applies only to physical standby databases.

Setting the STANDBY_FILE_MANAGEMENT parameter to AUTO is especially recommended when using Oracle Managed Files (OMF) on the primary database. Finally, the primary database must be running in ARCHIVELOG mode.

SQL> show parameter db_name

NAME                                 TYPE       VALUE
------------------------------------ ---------- ------------------------------
db_name                              string     dg1
SQL> show parameter db_unique_name

NAME                                 TYPE       VALUE
------------------------------------ ---------- ------------------------------
db_unique_name                       string     dg1
SQL> alter system set log_archive_config='dg_config=(dg1,dg2)';

System altered.

SQL> alter system set log_archive_dest_2=
  2  'service=dg2 async valid_for=(online_logfile,primary_role) db_unique_name=dg2';

System altered.

SQL> alter system set standby_file_management=AUTO;  

System altered.

SQL> 

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     7
Next log sequence to archive   9
Current log sequence           9
SQL> exit;
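The archive log list output confirms the primary is already running in ARCHIVELOG mode. Had it shown No Archive Mode, archiving could have been enabled first (a minimal sketch, not needed in this demonstration):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;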

Ensure the Oracle Net Services aliases for both the primary database and standby database are added to the tnsnames.ora file.

[oracle@dg1 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
dg1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dg1.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg1)
    )
  )
dg2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dg2.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dg2)
    )
  )

[oracle@dg1 ~]$

Copy the updated tnsnames.ora file to the standby site (host).
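A simple way to do this is to scp the file over; a minimal sketch, assuming the identical $ORACLE_HOME path on both hosts as noted in the environment section:

[oracle@dg1 ~]$ scp $ORACLE_HOME/network/admin/tnsnames.ora dg2:$ORACLE_HOME/network/admin/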

Standby Server: Step 2

The Oracle database binaries have already been installed at this location ($ORACLE_HOME). The new standby database will have dg2 as the SID.

The listener on the standby site must be configured with a static service entry for the standby database. Restart the listener after modification.

[oracle@dg2 admin]$ cat listener.ora 
LISTENER =
  (ADDRESS_LIST=
       (ADDRESS=(PROTOCOL=tcp)(HOST=dg2.localdomain)(PORT=1521))
       (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))   
SID_LIST_LISTENER=
   (SID_LIST=
       (SID_DESC=
          (GLOBAL_DBNAME=dg2)
          (SID_NAME=dg2)                      
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
         )
        )

[oracle@dg2 admin]$ lsnrctl start

Create the audit directory under $ORACLE_BASE/admin/$ORACLE_SID/adump. In my example, I will also create the fast recovery area and the oradata directories.

[oracle@dg2 ~]$ mkdir -p $ORACLE_BASE/admin/dg2/adump
[root@dg2 ~]# mkdir -p /opt/oradata
[root@dg2 ~]# mkdir -p /opt/fast_recovery_area

Next, create a simple parameter file on the standby host containing only the DB_NAME value.

[oracle@dg2 dbs]$ echo DB_NAME=dg2 > initdg2.ora
[oracle@dg2 dbs]$ cat initdg2.ora 
DB_NAME=dg2
[oracle@dg2 dbs]$

The primary database password file must be copied to the standby system for redo authentication.

[oracle@dg2 dbs]$ scp dg1:$ORACLE_HOME/dbs/orapwdg1 orapwdg2

In this second part, we will set up the standby database and enable Fast-Start Failover.

Standby Database Creation

Start the standby database instance in NOMOUNT start up mode:

SQL> startup nomount pfile=initdg2.ora;

Now that the configuration of the standby server is complete, let’s perform the duplication from the primary site.

Primary Server

Use the Recovery Manager (RMAN) to duplicate the primary database to the standby database.
Invoke RMAN; connect to the primary database as the sys user. Make an auxiliary connection to the standby instance:

RMAN> connect target sys

target database Password: 
connected to target database: DG1 (DBID=1753913301)

RMAN> connect auxiliary sys@dg2

auxiliary database Password: 
connected to auxiliary database: DG2 (not mounted)

RMAN> run{
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'dg1','dg2'
set db_unique_name='dg2'
set db_file_name_convert='/dg1/','/dg2/'
set log_file_name_convert='/dg1/','/dg2/'
set control_files='/opt/oradata/dg2/dg2.ctl'
set log_archive_max_processes='5'
set fal_client='dg2'
set fal_server='dg1'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(dg1,dg2)'
set log_archive_dest_2='service=dg1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=dg1'
;
}

The RMAN duplication process finishes without errors. After the database duplication, we force a log switch on the primary database to archive the current redo log group.

SQL> alter system switch logfile;

Standby Server

On the standby database, run the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE command to start Redo Apply. The USING CURRENT LOGFILE clause means redo is applied as soon as it is received on the standby. The DISCONNECT option means Redo Apply runs in a background session.

SQL> alter database recover managed standby database 
    using current logfile disconnect;

Note that the current log sequence number on the standby is 10.

SQL> select sequence#, first_time, applied
  from v$archived_log
  order by sequence#;

 SEQUENCE# FIRST_TIM APPLIED
---------- --------- ---------
         9 16-JAN-12 YES
        10 16-JAN-12 IN-MEMORY

Primary Server

Let's perform three additional log switches on the primary database (running the following command three times). Then we will query the standby database to verify that the log files are applied on the standby.

SQL> alter system switch logfile;

Standby Server

We query the standby database. The logs were successfully transported and applied.

SQL> select sequence#, first_time, applied
    from v$archived_log
   order by sequence#;

 SEQUENCE# FIRST_TIM APPLIED
---------- --------- ---------
         9 16-JAN-12 YES
        10 16-JAN-12 YES
        11 16-JAN-12 YES
        12 16-JAN-12 YES
        13 16-JAN-12 IN-MEMORY

Broker Configuration

Configuring the broker is recommended because it simplifies Data Guard operations.

The DG_BROKER_START parameter must be set to TRUE.

Standby Server

SQL> alter system set dg_broker_start=TRUE;

The Oracle Net listener must also be configured with an additional static service identifier. The value of the GLOBAL_DBNAME attribute must be set to the DB_UNIQUE_NAME concatenated with _DGMGRL (here, dg2_DGMGRL).

[oracle@dg2 dbs]$ cat $ORACLE_HOME/network/admin/listener.ora
LISTENER =
  (ADDRESS_LIST=
       (ADDRESS=(PROTOCOL=tcp)(HOST=dg2.localdomain)(PORT=1521))
       (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))   
SID_LIST_LISTENER=
   (SID_LIST=
       (SID_DESC=
          (GLOBAL_DBNAME=dg2)
          (SID_NAME=dg2)                      
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
    )
   (SID_DESC=
          (GLOBAL_DBNAME=dg2_DGMGRL)
          (SID_NAME=dg2)
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
    )
 )

[oracle@dg2 dbs]$ lsnrctl status

(Entry truncated)
 (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dg2.localdomain)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
Services Summary...
Service "dg2" has 1 instance(s).
  Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
Service "dg2_DGMGRL" has 1 instance(s).
  Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Primary Server

SQL> alter system set dg_broker_start=TRUE;

The Oracle Net listener on the primary database should have static service definitions:

[oracle@dg1 ~]$ cat $ORACLE_HOME/network/admin/listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dg1.localdomain)(PORT = 1521))
    )
  )

SID_LIST_LISTENER=
   (SID_LIST=
       (SID_DESC=
          (GLOBAL_DBNAME=dg1)
          (SID_NAME=dg1)
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
    )
   (SID_DESC=
          (GLOBAL_DBNAME=dg1_DGMGRL)
          (SID_NAME=dg1)
          (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1)
    )
 )

ADR_BASE_LISTENER = /u01/app/oracle

[oracle@dg1 ~]$ lsnrctl start

(Entry truncated)
Services Summary...
Service "dg1" has 1 instance(s).
  Instance "dg1", status UNKNOWN, has 1 handler(s) for this service...
Service "dg1_DGMGRL" has 1 instance(s).
  Instance "dg1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Let’s create a broker configuration and identify the primary database.

[oracle@dg1 ~]$ dgmgrl
DGMGRL> connect sys
Password:
Connected.
DGMGRL> create configuration 'DGConfig1' as primary database is 'dg1'
> connect identifier is dg1;
Configuration "DGConfig1" created with primary database "dg1"
DGMGRL> add database 'dg2' as connect identifier is dg2;
Database "dg2" added
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Databases:
    dg1 - Primary database
    dg2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Perform a switchover test:

DGMGRL> switchover to dg2;
Performing switchover NOW, please wait...
New primary database "dg2" is opening...
Operation requires shutdown of instance "dg1" on database "dg1"
Shutting down instance "dg1"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg1" on database "dg1"
Starting instance "dg1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg2"
DGMGRL> show configuration;

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Databases:
    dg2 - Primary database
    dg1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Standby Server

Confirm the role switch:

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

Primary Server

The former primary database is now the new physical standby database:

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

Switch over to the former primary database:

DGMGRL> switchover to dg1;
Performing switchover NOW, please wait...
New primary database "dg1" is opening...
Operation requires shutdown of instance "dg2" on database "dg2"
Shutting down instance "dg2"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg2" on database "dg2"
Starting instance "dg2"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg1"
DGMGRL> show configuration

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Databases:
    dg1 - Primary database
    dg2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database dg2

Database - dg2

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   5 minutes 2 seconds
  Apply Lag:       5 minutes 51 seconds
  Real Time Query: OFF
  Instance(s):
    dg2

Database Status:
SUCCESS

Enable Fast Start Fail Over

The current configuration is running in maximum performance mode and Fast-Start Failover is currently disabled.

Primary Server

To configure FSFO, you must first enable flashback database on both the primary and standby databases. (Further reading: Introduction to Oracle Flashback Technology)

SQL> alter database flashback on;

Redo apply must be stopped to enable flashback database on the standby database:

DGMGRL> connect sys   
Password:
Connected.
DGMGRL> edit database 'dg2' set state='apply-off';
Succeeded.

Standby Server

SQL> alter database flashback on;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

Primary Server

Restart the Redo apply:

DGMGRL>  edit database 'dg2' set state='apply-on';
Succeeded.

The Observer

The server dg3 will act as the observer in the Fast-Start Fail Over configuration. The Oracle client binaries have been installed with administrator option. Confirm the connectivity with both the primary and the standby databases:

[oracle@dg3 ~]$ tnsping dg1

(Entry truncated)
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.101)(PORT=1521)))
OK (140 msec)
[oracle@dg3 ~]$ tnsping dg2

(Entry truncated)
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.102)(PORT=1521)))
OK (50 msec)
[oracle@dg3 ~]$

The prerequisites for FSFO have been met, so FSFO can now be configured, enabled, and started. The FSFO observer process will be started from a DGMGRL session, with output logged to a file named observer.log:

[oracle@dg3 admin]$ dgmgrl -logfile ./observer.log
DGMGRL for Linux: Version 11.2.0.1.0 - Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@dg1
Password:
Connected. 
DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED

  Threshold:        30 seconds
  Target:           (none)
  Observer:         (none)
  Lag Limit:        30 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    (none)

DGMGRL> edit configuration set property FastStartFailoverLagLimit=60;
Property "faststartfailoverlaglimit" updated
DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> start observer;

Primary Server

DGMGRL> show configuration verbose

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Databases:
    dg1 - Primary database
    dg2 - (*) Physical standby database

  (*) Fast-Start Failover target

Fast-Start Failover: ENABLED

  Threshold:        30 seconds
  Target:           dg2
  Observer:         dg3.localdomain
  Lag Limit:        60 seconds
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configuration Status:
SUCCESS

Alternatively, create a shell script, observer.sh, and run it in the background (MOS ID: 1084681.1). The script contents:

#!/bin/ksh
dgmgrl -echo -logfile /home/oracle/dgmgrl.log << EOF
connect sys/wissem@dg3
start observer
EOF

Then make it executable and run it:

chmod +x observer.sh
./observer.sh &

Where do you place the Observer?

It all depends on your requirements. The observer can be placed in a third site, separate from the primary and standby sites, with an HA setup for the observer itself: two hosts, with one observer process running on the first host and a standby observer (not running) on the second host. Only one observer can run at any one time.

References

http://docs.oracle.com/cd/E11882_01/server.112/e25608/toc.htm

http://allthingsoracle.com/data-guard-physical-standby-database-best-practices-part-ii/

Fast Incremental Backups on the Primary Database

In this post, I will demonstrate synchronization of a standby database from an incremental RMAN backup. It might be needed in the following scenarios:

1- The standby database is lagging considerably behind the primary. Copying and applying the archive logs from the primary is time consuming, because redo apply replays both the COMMITTED and the UNCOMMITTED transactions and then rolls back the uncommitted ones. An incremental backup recovers the standby database much faster than applying the archives, because it applies only the final changed blocks to the standby database.
2- Some archive logs on the primary which have not been applied to the standby have been lost. In such cases, you can create an incremental backup of the primary database containing all changes since the standby database was last refreshed. This incremental backup can be applied to the standby database to synchronize it with the primary database.

Overview:

 
  1.  Stop redo transport on primary
  2.  Switch logs on primary – results in some archived logs on primary which have not been sent to standby
  3.  Rename newly generated archived logs on primary to simulate their loss
  4.  Restart redo transport – gives error as gap cannot be resolved due to missing logs
  5.  Create a control file for standby database on primary
  6.  Take incremental backup on primary starting from the SCN# of standby database
  7.  Copy the incremental backup to the standby host and catalog it with RMAN
  8.  Mount the standby database with newly created standby control file
  9.  Cancel managed recovery of standby database and apply incremental backup to the standby database
  10. Start managed recovery of standby database

Implementation:

– Check the current log sequence on the primary:

PRI> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     40
Next log sequence to archive   42
Current log sequence           42

– Check that all the archived logs prior to the current log have been sent to the standby:

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            41

– Stop redo transport from the primary (dg01):

DGMGRL> show database dg01

Database - dg01

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    dg01

Database Status:
SUCCESS

DGMGRL> edit database dg01 set state='Transport-off';
Succeeded.
DGMGRL> show database dg01

Database - dg01

  Role:            PRIMARY
  Intended State:  TRANSPORT-OFF
  Instance(s):
    dg01

Database Status:
SUCCESS

– Switch logs on the primary:

PRI> alter system switch logfile;
PRI> alter system switch logfile;
PRI> alter system switch logfile;
PRI> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     44
Next log sequence to archive   45
Current log sequence           46

– Check that logs after sequence# 42 are not being transported to the standby:

SBY> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            42

– Find the names of the archived logs generated on the primary which have not been transported to the standby:

PRI> set line 500
PRI> col name for a40
PRI> select sequence#, name from v$archived_log where sequence# > 42;

 SEQUENCE# NAME
---------- ----------------------------------------
        43 /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_43_9780s1ch_.arc
        44 /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_44_9780tlt0_.arc
        45 /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_45_9780tlvo_.arc

– To simulate loss of the archived logs on the primary, rename them:

PRI> ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_43_9780s1ch_.arc /home/oracle/arch_43.arc
PRI> ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_44_9780tlt0_.arc /home/oracle/arch_44.arc
PRI> ho mv /u01/app/oracle/flash_recovery_area/DG01/archivelog/2013_11_02/o1_mf_1_45_9780tlvo_.arc /home/oracle/arch_45.arc

– Restart redo transport:

DGMGRL> edit database dg01 set state='Transport-on';

– Check that the gap in redo logs on the standby cannot be resolved, as some logs are missing on the primary:

DGMGRL> show database dg01

Database - dg01

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    dg01

Database Error(s):
  ORA-16783: cannot resolve gap for database dg02

Database Status:
ERROR

DGMGRL> show configuration;

Configuration - dgconfig1

  Protection Mode: MaxPerformance
  Databases:
    dg01 - Primary database
      Error: ORA-16724: cannot resolve gap for one or more standby databases
    dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

– Check the current SCN# of the standby:

SBY> select current_scn from v$database;

CURRENT_SCN
-----------
     998647

– Create a standby control file:

PRI> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/standby_control.ctl';

– Copy the standby control file to the standby host:

PRI> !scp /home/oracle/standby_control.ctl node2:/home/oracle/standby_control.ctl

– Take an incremental backup on the primary starting from the SCN# of the standby database.
RMAN uses the selected SCN as the basis for this incremental backup: for all files being backed up, RMAN includes all data blocks that were changed at SCNs greater than or equal to the FROM SCN.
Note:
• RMAN does not consider the incremental backup as part of a backup strategy at the source database. The backup is not suitable for use in a normal RECOVER DATABASE operation at the source database.

RMAN> Backup incremental from SCN 987005 database tag='FOR_STANDBY' format '/home/oracle/%d_%t_%s_%p';

Starting backup at 02-NOV-13
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: starting piece 1 at 02-NOV-13
channel ORA_DISK_1: finished piece 1 at 02-NOV-13
piece handle=/home/oracle/DG01_830395300_22_1 tag=FOR_STANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

– Move the backup pieces to the standby host:

PRI> !scp /home/oracle/DG01_830395300_22_1 node2:/home/oracle/DG01_830395300_22_1

– Catalog the incremental backup file at the standby database:

RMAN> catalog backuppiece '/home/oracle/DG01_830395300_22_1';

RMAN> list backup tag FOR_STANDBY;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
9       Incr    9.38M      DISK        00:00:00     02-NOV-13
        BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: FOR_STANDBY
        Piece Name: /home/oracle/DG01_830395300_22_1
        Keep: NOLOGS             Until: 09-NOV-13

  List of Datafiles in backup set 9

– Find the names of the current control files:

SQL> col value for a50
SQL> col name for a15
SQL> select name, value from v$parameter where upper(name) = 'CONTROL_FILES';

NAME            VALUE
--------------- --------------------------------------------------
control_files   /u01/app/oracle/oradata/dg02/control01.ctl, /u01/app/oracle/flash_recovery_area/dg02/control02.ctl

– Shut down the standby database and rename its original control files:

SBY> shutdown immediate;

SBY> !mv /u01/app/oracle/oradata/dg02/control01.ctl /u01/app/oracle/oradata/dg02/control01.bak
SBY> !mv /u01/app/oracle/flash_recovery_area/dg02/control02.ctl /u01/app/oracle/flash_recovery_area/dg02/control02.bak

– Restore the standby control file we just copied from the primary:

SBY> !cp /home/oracle/standby_control.ctl /u01/app/oracle/oradata/dg02/control01.ctl
SBY> !cp /home/oracle/standby_control.ctl /u01/app/oracle/flash_recovery_area/dg02/control02.ctl

– Start up the standby database with the new controlfile:

SBY> startup mount;

– Apply the incremental backup to the standby database. Use the RMAN RECOVER command with the NOREDO option; all changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database. First cancel managed recovery, then run the RMAN recovery:

SBY> recover managed standby database cancel;

RMAN> RECOVER DATABASE from tag FOR_STANDBY NOREDO;

Starting recover at 02-NOV-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/dg02/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/dg02/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/dg02/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/dg02/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/dg02/example01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/DG01_830395300_22_1
channel ORA_DISK_1: piece handle=/home/oracle/DG01_830395300_22_1 tag=FOR_STANDBY
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

Finished recover at 02-NOV-13

The standby database is now refreshed from the incremental backup. Restart the managed recovery process on the standby database:

SBY> recover managed standby database disconnect;

Media recovery complete.

Managed recovery is now resumed at the standby. Any redo logs required at the standby with changes beyond those contained in the incremental backup are automatically requested from the primary and applied.
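As a quick sanity check, you can re-run the earlier v$archived_log query on the standby; the highest applied sequence should now be advancing (a minimal sketch):

SBY> select max(sequence#) from v$archived_log where applied = 'YES';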

– Check that the SCN# of the standby database has advanced:

SBY> select current_scn from v$database;

CURRENT_SCN
-----------
    1005729

DGMGRL> edit database dg01 set state='transport-on';
DGMGRL> show configuration;

Configuration - dgconfig1

  Protection Mode: MaxPerformance
  Databases:
    dg01 - Primary database
    dg02 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

 

Reference: http://oracleinaction.com/using-rman-incremental-backups-refresh-standby-database/

 

 

Creating an Oracle Database

Manual Database Creation

Complete the following steps to create a database with the CREATE DATABASE statement.

The examples create a database named testdb1.

Step 1: Specify an Instance Identifier (SID)

export ORACLE_SID=testdb1

(The oracle user's .profile_db loads the database environment; .profile_grid loads the Oracle grid environment.)

Step 2: Ensure That the Required Environment Variables Are Set

Make sure ORACLE_HOME points to the Oracle software installation:

export ORACLE_HOME=/u01/app/oracle/11.2.0/db_1

Step 3: Choose a Database Administrator Authentication Method

Create and use a password file for database connections. The syntax is:

ORAPWD FILE=filename [ENTRIES=numusers] [FORCE={Y|N}] [IGNORECASE={Y|N}]

orapwd file=orapwtestdb1 entries=5 force=y ignorecase=y

Step 4: Create the Initialization Parameter File

Under the $ORACLE_HOME/dbs location, create a text file inittestdb1.ora containing db_name=testdb1:

$ echo db_name=testdb1 > inittestdb1.ora

Optional parameters:

control_files= '/u01/app/oradata/control01/control01.ctl',
 '/u02/app/oradata/control02/control02.ctl',
 '/u03/app/oradata/control03/control03.ctl'

memory_target= 1G

The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter:

SQL> select * from v$memory_target_advice order by memory_size;

Step 5: (Windows Only) Create an Instance

oradim -NEW -SID sid -STARTMODE MANUAL -PFILE pfile

where sid is the desired SID (for example testdb1) and pfile is the full path to the text initialization parameter file. This command creates the instance but does not start it.

Step 6: Connect to the Instance

$ sqlplus /nolog
SQL> CONNECT SYS AS SYSDBA

Step 7: Create a Server Parameter File

CREATE SPFILE FROM PFILE;

Step 8: Start the Instance

STARTUP NOMOUNT

At this point, the instance memory is allocated and its processes are started. The database itself does not yet exist.

Step 9: Issue the CREATE DATABASE Statement

Before running the statement, verify the following (all directories must be owned by the oracle user):

1) The control_files locations exist (if you specified them in inittestdb1.ora):
 $ mkdir -p /u01/app/oradata/control01/
 $ mkdir -p /u02/app/oradata/control02/
 $ mkdir -p /u03/app/oradata/control03/

2) The redo log file locations exist:
 /u01/logs/testdb1/redo01a.log
 /u02/logs/testdb1/redo01b.log

3) The datafile location exists:
 /u01/app/oracle/oradata/testdb1/

 CREATE DATABASE testdb1
 USER SYS IDENTIFIED BY &sys_password
 USER SYSTEM IDENTIFIED BY &system_password
 LOGFILE GROUP 1 ('/u01/logs/testdb1/redo01a.log','/u02/logs/testdb1/redo01b.log') SIZE 100M BLOCKSIZE 512,
 GROUP 2 ('/u01/logs/testdb1/redo02a.log','/u02/logs/testdb1/redo02b.log') SIZE 100M BLOCKSIZE 512,
 GROUP 3 ('/u01/logs/testdb1/redo03a.log','/u02/logs/testdb1/redo03b.log') SIZE 100M BLOCKSIZE 512
 MAXLOGFILES 5
 MAXLOGMEMBERS 5
 MAXLOGHISTORY 1
 MAXDATAFILES 100
 CHARACTER SET AL32UTF8
 NATIONAL CHARACTER SET AL16UTF16
 EXTENT MANAGEMENT LOCAL
 DATAFILE '/u01/app/oracle/oradata/testdb1/system01.dbf' SIZE 325M REUSE
 SYSAUX DATAFILE '/u01/app/oracle/oradata/testdb1/sysaux01.dbf' SIZE 325M REUSE
 DEFAULT TABLESPACE users
 DATAFILE '/u01/app/oracle/oradata/testdb1/users01.dbf'
 SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
 DEFAULT TEMPORARY TABLESPACE tempts1
 TEMPFILE '/u01/app/oracle/oradata/testdb1/temp01.dbf'
 SIZE 20M REUSE
 UNDO TABLESPACE undotbs
 DATAFILE '/u01/app/oracle/oradata/testdb1/undotbs01.dbf'
 SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Step 10: Create Additional Tablespaces

CREATE TABLESPACE apps_tbs1 LOGGING
 DATAFILE '/u01/app/oracle/oradata/testdb1/apps01.dbf'
 SIZE 500M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
 EXTENT MANAGEMENT LOCAL;

CREATE TABLESPACE indx_tbs1 LOGGING
 DATAFILE '/u01/app/oracle/oradata/testdb1/indx01.dbf'
 SIZE 100M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
 EXTENT MANAGEMENT LOCAL;

Step 11: Run Scripts to Build Data Dictionary Views

Run the scripts necessary to build data dictionary views, synonyms, and PL/SQL packages, and to support proper functioning of SQL*Plus.

In SQL*Plus, connect to your Oracle Database instance with the SYSDBA administrative privilege and run:

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/utlrp.sql

Then connect to your Oracle Database instance as the SYSTEM user and run:

@?/sqlplus/admin/pupbld.sql

Step 12: (Optional) Run Scripts to Install Additional Options

Step 13: Back Up the Database

1) Cold backup: shut the database down and take a copy of all control files, the password file, the init file, and the datafiles (see the sketch below).

2) RMAN backup:
 RMAN> backup database;
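A minimal cold-backup sketch for option 1, assuming a hypothetical staging directory /backup/testdb1 and the file locations used in the earlier steps:

SQL> shutdown immediate;
$ mkdir -p /backup/testdb1
$ cp /u01/app/oracle/oradata/testdb1/*.dbf /backup/testdb1/       # datafiles
$ cp /u0*/app/oradata/control0*/control0*.ctl /backup/testdb1/    # control files
$ cp $ORACLE_HOME/dbs/inittestdb1.ora /backup/testdb1/            # init file
$ cp $ORACLE_HOME/dbs/orapwtestdb1 /backup/testdb1/               # password file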
Step 14: (Optional) Enable Automatic Instance Startup

srvctl add database -d testdb1 -o /u01/app/oracle/product/11.2.0/db_1

srvctl add listener -p TCP:1521 -o /u01/app/oracle/product/11.2.0/db_1

srvctl add asm [-l listener_name [-p spfile] [-d asm_diskstring]]

srvctl add asm -l testdb1listener

Check the Oracle Restart components:

srvctl config database
srvctl config database -d testdb1 -a

srvctl config listener

Oracle Recovery

case 1) Loss of a NON-SYSTEM data file.

Problem:
One of the data files belonging to the users tablespace is lost in a disk crash.

Solution:
When a non-system data file is lost, there are three methods by which the data file
can be recovered:

A) Use the recover datafile command. The datafile needs to be offline, but the database
can be open or mounted.
B) Use the recover tablespace command, which requires the tablespace to be offline and
the database to be open.
C) Use the recover database command, which requires the database to be mounted but not
open, meaning offline recovery needs to be performed (see the sketch after the second method below).

$ sqlplus '/as sysdba'
sql> select file_name, file_id from dba_data_files;

go to the OS and remove example01.dbf
$ rm example01.dbf

First method) From the RMAN prompt:

RUN {
SQL 'ALTER DATABASE DATAFILE 5 OFFLINE';
RESTORE DATAFILE 5;
RECOVER DATAFILE 5;
SQL 'ALTER DATABASE DATAFILE 5 ONLINE';
}

Note: the file_id can be found with the query below.
> select file_id, file_name from dba_data_files where tablespace_name='USERS';

Second method)

RUN {
SQL 'ALTER TABLESPACE example OFFLINE IMMEDIATE';
RESTORE TABLESPACE example;
RECOVER TABLESPACE example;
SQL 'ALTER TABLESPACE example ONLINE';
}
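The third method (offline recovery with the database mounted) is not shown above; a minimal sketch, assuming the same lost datafile 5:

SQL> shutdown immediate;
SQL> startup mount;

RMAN> RUN {
RESTORE DATAFILE 5;
RECOVER DATABASE;
SQL 'ALTER DATABASE OPEN';
}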

#################################################################################################################

case 2) Loss of a SYSTEM data file.

Problem:
One afternoon a disk crashed, losing the system data file residing on the disk.

Solution:
A data file can be taken offline and the database started up, with the exception of the
data files belonging to the SYSTEM tablespace. In this case, the lost data file belongs
to the SYSTEM tablespace, so the only solution is to restore the SYSTEM data file from
the previous night's online backup and then perform database recovery.

Connect to the target database and shut it down:

shutdown immediate; # use shutdown abort if this fails

From the RMAN prompt, run the command below:

RMAN> run {
startup mount;
RESTORE TABLESPACE SYSTEM;
RECOVER TABLESPACE SYSTEM;
SQL 'ALTER DATABASE OPEN';
}

##################################################################################################################

case 3) Loss of all datafiles (except control and redo log files).

Problem:
The whole database is lost.

Solution:
If the controlfiles and online redo logs are still present, a whole database recovery
can be achieved by running the following script.

Remove all *.dbf files at the O/S level.

sql> shutdown immediate; # use abort if this fails
sql> startup nomount

From the RMAN prompt, run the script below:

run {
startup mount; # or alter database mount;
restore database;
recover database;
alter database open;
}

##################################################################################################################

CASE 4) Loss of an undo data file.

Problem Description:
====================

This is a recovery scenario in which a datafile in an undo tablespace has been lost or
damaged to the point that Oracle cannot recognize it anymore. Trying to start up the
database will result in ORA-1157, ORA-1110, and possibly an operating system level error
such as ORA-7360. Trying to shut down the database in normal or immediate mode will
result in ORA-1116, ORA-1110, and possibly an operating system level error such as ORA-7368.

Solution
=========

The approach to follow depends on the specific scenario in which the loss of the undo
datafile is detected.

From the target database:
sql> select tablespace_name, file_id, file_name from dba_data_files;

Remove the undotbs01.dbf file at the O/S level.

sql> shut abort
sql> startup mount

From the RMAN prompt:

RMAN> run {
SQL 'ALTER DATABASE DATAFILE 3 OFFLINE';
RESTORE DATAFILE 3;
RECOVER DATAFILE 3;
SQL 'ALTER DATABASE DATAFILE 3 ONLINE';
SQL 'alter database open';
}

#################################################################################################################

CASE 5) Loss of CONTROLFILE

Problem Description:
--------------------
One fine morning, the control file got accidentally deleted. Without a control file,
trying to start the database will throw the following error:

ora-00205 error in identifying controlfile, check alert log for more info

Solution:
---------
In this case we have two options. Since all our data files and online log files are safe,
we can create a new control file using the create controlfile command, perform recovery
if required, and start up the database. Alternatively, we can use a backup controlfile.
If you use a backup controlfile, you need to perform media recovery, and Oracle will
force you to use the "using backup controlfile" option. Once recovery is done, you must
start up the database with the RESETLOGS option, and you have to take a complete
backup of your database.
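For the first option, a minimal sketch of the create controlfile path, assuming hypothetical redo log and datafile names for the orcl database (the real list must match your own files):

SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
     LOGFILE GROUP 1 '/u01/app/oracle/oradata/orcl/redo01.log' SIZE 50M,
             GROUP 2 '/u01/app/oracle/oradata/orcl/redo02.log' SIZE 50M
     DATAFILE '/u01/app/oracle/oradata/orcl/system01.dbf',
              '/u01/app/oracle/oradata/orcl/sysaux01.dbf',
              '/u01/app/oracle/oradata/orcl/undotbs01.dbf',
              '/u01/app/oracle/oradata/orcl/users01.dbf'
     CHARACTER SET AL32UTF8;
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;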

The following demonstrates the second option, using the controlfile autobackup.

sql> select * from v$controlfile;

At the O/S level, remove all controlfiles:
sql> shut abort

Start the instance without mounting the database:

sql> STARTUP NOMOUNT

run {
restore controlfile from autobackup; # make sure CONFIGURE CONTROLFILE AUTOBACKUP is ON;
alter database mount;                # if it is OFF, just submit: restore controlfile;
recover database;
alter database open resetlogs;
}

##################################################################################################################

case 6) Loss of the complete database (controlfiles + redo log files + datafiles).

Problem: Loss of the complete database.
--------

Solution:
--------

We have to restore the controlfile first; then we can mount the database.
After that, we restore the remaining datafiles and perform recovery.

sql> select file_name from dba_data_files;
sql> select member from v$logfile;
sql> show parameter control_files

Find the locations of the controlfiles, redo log files, and datafiles, and remove them at the O/S level:

$ rm *.dbf
$ rm *.log
$ rm *.ctl

sql> shut abort;
sql> startup nomount;

run {
restore controlfile;
alter database mount;
restore database;
recover database;
alter database open resetlogs;
}

##################################################################################################################

case 7) Recovering a datafile without a backup.

Problem:
--------

You add a new datafile and users create some objects in it. Before you back up your
database over the weekend, the new file is lost.

When you try to access the damaged file, you get the following error:

ORA-01116: error in opening database file 10
ORA-01110: data file 10: '/u01/app/oracle/oradata/test10.dbf'

sql> select file_name, file_id, tablespace_name from dba_data_files;
sql> alter tablespace example add datafile '/u01/app/oracle/oradata/prod/example02.dbf' size 100m;

Remove the newly added file at the O/S level, without having a backup.

Solution:
--------
From the RMAN prompt, take the damaged file offline and do the following steps:

RMAN> SQL 'ALTER DATABASE DATAFILE 6 OFFLINE';
RMAN> SQL 'ALTER DATABASE CREATE DATAFILE 6';
RMAN> RECOVER DATAFILE 6;
RMAN> SQL 'ALTER DATABASE DATAFILE 6 ONLINE';

##################################################################################################################

CASE 8) Loss of unarchived online log files.

Problem:
--------

A power surge caused the database to crash and also caused a media failure,
losing all the online log files. All the data files and the current controlfiles are intact.

Solution:
--------

Although the data files are fine after the crash, they cannot be used, because crash recovery cannot be performed (all online log files are lost). Forcing the database open in a situation like this can cause database inconsistency. If any of the unarchived log files are lost, crash recovery cannot be performed and, instead, media recovery needs to be performed. In this case, all the data files need to be restored from a full backup and rolled forward until the last available archived log file is applied. Since this is incomplete recovery, we have to open the database with the RESETLOGS option.

Remove both log files at the O/S level:
$ rm *.log

sql> startup mount
sql> archive log list

Note: make a note of the current log sequence number.

Now connect to RMAN and submit the command below:

RMAN> run {
set until sequence 41; # set this to one higher than the last archived log available
restore database;
recover database;
alter database open resetlogs;
}

#################################################################################################################

CASE 9) Missing archivelog file.

Problem:
--------
One of the archive files is lost at the O/S level, and we don't have a backup of the
missed archive file.

Solution:
--------
The updates in the missing archive log are lost forever; you can only roll forward
to the end of the last contiguous set of archived redo logs.
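In practice this is the same incomplete-recovery pattern as case 8; a minimal sketch, assuming the first missing archived log has sequence 40 (UNTIL SEQUENCE stops just before the specified sequence):

RMAN> run {
set until sequence 40 thread 1;
restore database;
recover database;
alter database open resetlogs;
}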

##################################################################################################################

CASE 10) Read-Only Tablespace issues.

Recovering Read-Only Tablespaces.

Making a tablespace read-only prevents write operations on the data files in the tablespace, regardless of a user's update privilege level. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it. Because read-only tablespaces can never be updated, they can reside on CD-ROM or WORM (write once, read many) devices.

The method of recovering a read-only tablespace depends on the backups that are available and whether the tablespace was altered to read/write or read-only within the recovery period.

Case 1: The tablespace being recovered is read-only, and was read-only when the last backup occurred. In this case, you can simply restore the tablespace from the backup. There is no need to apply any redo information.

Case 2: The tablespace being recovered is read/write, but was read-only when the last backup occurred. In this case, you need to restore the tablespace from the backup and apply the redo information from the point when the tablespace was made read/write.

Case 3: The tablespace being recovered is read-only, but was read/write when the last backup occurred. You should always back up a tablespace after making it read-only to avoid this situation. However, if this does occur, you must restore the tablespace from the backup and recover up to the time that the tablespace was made read-only.
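For Case 1, no redo needs to be applied; a minimal RMAN sketch, assuming a hypothetical read-only tablespace named history:

RMAN> SQL 'ALTER TABLESPACE history OFFLINE';
RMAN> RESTORE TABLESPACE history;
RMAN> SQL 'ALTER TABLESPACE history ONLINE';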

##################################################################################################################

case 11) Point in Time Recovery.

Recovery Manager (RMAN) automated tablespace point-in-time recovery (TSPITR) enables you to quickly
recover one or more tablespaces to a time that is different from that of the rest of the database.

RMAN TSPITR is most useful for recovering the following:

a) An erroneous DROP TABLE or TRUNCATE TABLE statement
b) A table that has become logically corrupted
c) An incorrect batch job or other DML statement that has affected only a subset of the database
d) A logical schema to a point different from the rest of the physical database when multiple schemas
exist in separate tablespaces of one physical database

Like a table export, RMAN TSPITR enables you to recover a consistent data set; however, the data set is the entire tablespace rather than one object. During TSPITR, RMAN:

1) Restores the specified tablespace backups
2) Recovers the specified tablespaces
3) Exports metadata from the auxiliary instance
4) Points the target database control file to the newly recovered datafiles
5) Imports metadata into the target database

Glossary of TSPITR Terminology:

Familiarize yourself with the following terms and abbreviations, which are used throughout this scenario.

TSPITR :
——–

Tablespace point-in-time recovery.

Auxiliary Instance :
——————–

The auxiliary instance is used to recover the backup tablespaces. The database created by TSPITR
never has an independent existence; it is only an intermediate work area.

Recovery Set:
————-

Tablespaces in the target database requiring TSPITR to be performed on them.
For example, if you need to recover the tablespace users to a noncurrent time, then
users is the tablespace in the recovery set.

Any other files required for TSPITR, including:
———————————————-

Backup control file
SYSTEM tablespace
Datafiles containing undo segments
Temporary tablespace (optional). A small space is required by Export for sort operations.
Online redo logs of the auxiliary database.

Problem:
——–
One of the users accidentally drops a table with the purge option, and he doesn't have a backup of the table.

solution:
——–
Tablespace point in time recovery.

Automatic instance creation for RMAN TSPITR:
============================================

If a tablespace point-in-time recovery (TSPITR) is initiated with no reference to an auxiliary instance, RMAN now automatically creates one. The auxiliary instance configuration is based on that of the target database. As a result, any channels required for the restore operations must be present in the target database so they are configured correctly in the auxiliary instance. The location of the datafiles for the auxiliary instance is specified using the AUXILIARY DESTINATION clause shown below.

RMAN> RECOVER TABLESPACE example UNTIL TIME "to_date('2010-08-12:16:35:00','YYYY-MM-DD:HH24:MI:SS')"
AUXILIARY DESTINATION '/u02/aux';

The tablespace is taken offline, restored from a backup, recovered to the specified point in time in the auxiliary instance, and re-imported into the target database. The tablespace in the target database should then be backed up and brought back online:

SQL> ALTER TABLESPACE example ONLINE;

On successful completion the auxiliary instance is cleaned up automatically. In the event of errors, the auxiliary instance is left intact to aid troubleshooting.

SECOND METHOD :
==============

PERFORMING RMAN TSPITR using your own auxiliary instance.

Preparing an auxiliary database (at the production side):
------------------------------------------------------

$ cd $ORACLE_HOME/dbs
$ cp initorcl.ora initaux.ora
$ vi initaux.ora

---> control_files='/u02/oradata/aux/cont.ctl'
---> db_file_name_convert='/u01/app/oracle/oradata/orcl','/u02/oradata/aux'
---> log_file_name_convert='/u01/app/oracle/oradata/orcl','/u02/oradata/aux'

:wq

$ cd /u02
$ mkdir -p oradata/aux

$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwaux password=oracle

$ export ORACLE_SID=aux
$ sqlplus '/as sysdba'
sql> startup nomount

(Note: if the instance does not start, cd to $ORACLE_HOME/dbs and remove any lk* files you find there.)

Preparing an auxiliary database (at the catalog side):
-----------------------------------------------------

$ cd $ORACLE_HOME/network/admin
$ vi tnsnames.ora

# configure the alias aux1, which points to the auxiliary instance

aux1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = edpdr9p0.us.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = aux)
)
)

$ rman catalog rman1/rman1 target sys/oracle@rman1 auxiliary sys/oracle@aux1 log=show.log

RMAN> RECOVER TABLESPACE USERS UNTIL TIME '2010/08/12:12:30:15';

##################################################################################################################

case 12) RMAN Block Media Recovery.

What Is Block Corruption?

A corrupted data block is a block that is not in a recognized Oracle format, or whose contents are not internally consistent. Typically, corruptions are caused by faulty hardware or operating system problems. The Oracle database identifies corrupt blocks as either "logically corrupt" or "media corrupt." If a block is logically corrupt, then there is an Oracle internal error; logically corrupt blocks are marked corrupt by the Oracle database after it detects the inconsistency. If it is media corrupt, then the block format is not correct; the information in the block does not make any sense after being read from disk.

You can repair a media corrupt block by recovering the block, or by dropping the database object that contains the corrupt block, or both. If the media corruption is due to faulty hardware, the problem will not be completely resolved until the hardware fault is corrected.

Usually, the ORA-01578 error is the result of a hardware problem. If the ORA-01578 error is always returned with the same arguments, it is most likely a media corrupt block. ORA-01578 is returned to the session that issued the query being performed when the corruption was discovered. It always returns the absolute file number and block number, and the error appears in the alert.log file.

Background:
———–

In this practice, a data file is deliberately corrupted, and you see the results of it
as you query the affected table. You need to determine the location of the corruption and
correct the problem.

1) In order to introduce corruption into the DEPARTMENTS table, you need to find out the OS
file name in which its data is stored, and also an OS file block ID that is in the
DEPARTMENTS table portion of the file. Query the DBA_EXTENTS view to find the file
ID and block ID for the DEPARTMENTS segment. Then determine the name of the
associated OS file by querying the DBA_DATA_FILES view.

Record these values here:
File ID: _____________
Block ID: _____________

a. Log in to SQL*Plus as the SYS user with the password oracle, and as SYSDBA:

$ sqlplus / as sysdba

SQL> select file_id, block_id from dba_extents where segment_name = 'DEPARTMENTS';

   FILE_ID   BLOCK_ID
---------- ----------
         5         49

SQL> select file_name from dba_data_files where file_id = 5;

FILE_NAME
------------------------------------------------------------------------
/u01/app/oracle/oradata/orcl/example01.dbf

Now, to introduce corruption in the example01.dbf file, run the script below at the O/S level.
The parameters to the script are, in order: the fully qualified file name, the block number,
and the block size.

$ vi corrupt.sh

#!/bin/bash
# Oracle Database 11g: RMAN backup and recovery.
# Oracle Server Technologies - Curriculum Development
# ***Training purposes only***
# ***Not appropriate for production use***

# Positional parameters: file name, starting block, block size.
FILE=${1:?'Parameter 1 should be set to file name'}
BLOCK=${2:?'Parameter 2 should be set to the block to be corrupted'}
BLOCKSIZE=${3:?'Parameter 3 should be set to the database block size'}

# Overwrite four consecutive blocks with the text CORRUPT.
dd of=$FILE bs=$BLOCKSIZE conv=notrunc seek=$BLOCK <<EOF
CORRUPT
EOF
dd of=$FILE bs=$BLOCKSIZE conv=notrunc seek=`expr 1 + $BLOCK` <<EOF
CORRUPT
EOF
dd of=$FILE bs=$BLOCKSIZE conv=notrunc seek=`expr 2 + $BLOCK` <<EOF
CORRUPT
EOF
dd of=$FILE bs=$BLOCKSIZE conv=notrunc seek=`expr 3 + $BLOCK` <<EOF
CORRUPT
EOF

2) $ ./corrupt.sh /u01/app/oracle/oradata/orcl/example01.dbf 49 8192

3) Flush the buffer cache so that any queries against the DEPARTMENTS table are forced to go
to the data file on disk. Then select all the columns of the DEPARTMENTS table and note
the error:

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

SQL> SELECT * FROM hr.departments;

Following is the output of that statement. Note the error.

select * from hr.departments
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 51)
ORA-01110: data file 5: '/u01/app/oracle/oradata/orcl/example01.dbf'

4) Run the dbv utility to report all corruption in the example01.dbf file.
a. Enter the following at the OS command prompt to invoke DBVERIFY:

$ dbv file=/u01/app/oracle/oradata/orcl/example01.dbf blocksize=8192

5) View the alert log to see the details of the corruption.
a. Enter the following at the OS command prompt:

$ tail -50 $ORACLE_BASE/admin/orcl/bdump/alert_orcl.log

6) Perform block media recovery using RMAN. Use the DBVERIFY output from the previous
step to determine the blocks that need to be recovered:

$ rman catalog rman1/rman1 target sys/oracle@rman1

RMAN> BLOCKRECOVER DATAFILE 5 BLOCK 49, 50, 51, 52;

7) Verify that the block recover operation was successful by again flushing the buffer cache
and querying the DEPARTMENTS table.

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
SQL> SELECT * FROM hr.departments;

##################################################################################################################

CASE 13) How to take a backup of the database when a few blocks are corrupted.
————————————————————-

Information about corrupted blocks can be tracked with the following view:

sql> select * from v$database_block_corruption;

Say, for example, you find that 3 blocks in datafile 5 are corrupted, but you still want to continue with the backup.

Then use the steps given below.

RMAN> run {

SET MAXCORRUPT FOR DATAFILE 5 TO 3;

BACKUP DATAFILE 5;

}
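With MAXCORRUPT set as above, the backup tolerates up to 3 corrupt blocks in datafile 5 instead of failing, and records each corrupt block it encounters. Afterwards you can confirm what was found (a hedged check):

sql> select file#, block#, blocks, corruption_type from v$database_block_corruption;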

##################################################################################################################

case 14) Encryption / password protection.

Password Encryption of Backups:
——————————-

Password encryption requires that the DBA provide a password when creating and restoring encrypted backups.
Restoring a password-encrypted backup requires the same password that was used to create the backup.
Password encryption is useful for backups that will be restored at remote locations but must remain secure in transit. Password encryption cannot be persistently configured.

To use password encryption, start RMAN and connect to the target database and the recovery catalog:

$rman catalog rman1/rman1 target sys/oracle@rman1

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default

RMAN>SET ENCRYPTION ON IDENTIFIED BY password ONLY;
RMAN>BACKUP TABLESPACE USERS;
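Note: omitting the ONLY keyword should create a dual-mode backup, which can be restored using either the password or the configured encryption wallet:

RMAN>SET ENCRYPTION ON IDENTIFIED BY password;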

Restoring Data from Encrypted Backups:
————————————–

Password encrypted backups require the correct password to be entered before they can be restored.
You must enter the encryption password using the SET DECRYPTION command. Restore passwords cannot be persistently configured. If restoring from a set of backups that were created with different passwords, then specify all of the required passwords on the SET DECRYPTION command. RMAN will automatically use the correct password with each backup set.

Go to the OS level and remove the data file belonging to the USERS tablespace.

RMAN>SET DECRYPTION IDENTIFIED BY password;
RMAN>SQL 'ALTER TABLESPACE USERS OFFLINE';
RMAN>RESTORE TABLESPACE USERS;
RMAN>RECOVER TABLESPACE USERS;
RMAN>SQL 'ALTER TABLESPACE USERS ONLINE';

##################################################################################################################

CASE 15) Compression:
————

The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backup sets. The resulting backup sets do not need to be uncompressed during recovery. It is most useful in the following circumstances:

1) You are performing disk-based backups with limited disk space.
2) You are performing backups across a network where network bandwidth is limited.
3) You are performing backups to tape, CD, or DVD where hardware compression is not available.

The AS COMPRESSED BACKUPSET option can be used explicitly in the backup command.

# Whole database and archivelogs.
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

# Datafiles 1 and 5 only.
RMAN>BACKUP AS COMPRESSED BACKUPSET DATAFILE 1,5;

Alternatively the option can be defined using the CONFIGURE command:

# Configure compression.
RMAN>CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;

# Whole database and archivelogs.
RMAN>BACKUP DATABASE PLUS ARCHIVELOG;
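In 11gR2 you can also choose the compression algorithm (hedged: BASIC is the default and needs no extra license; LOW, MEDIUM, and HIGH require the Advanced Compression option):

RMAN>CONFIGURE COMPRESSION ALGORITHM 'BASIC';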

#################################################################################################################

CASE 16) Throttling.

Managing Backup Duration and Throttling:
=======================================

The DURATION clause of the BACKUP command restricts the total time available for a backup to complete. At the end of the time window, the backup is interrupted and any incomplete backupsets are discarded. All complete backupsets are kept and used for future restore operations. The following examples show how it is used:

RMAN>BACKUP DURATION 2:00 TABLESPACE users;
RMAN>BACKUP DURATION 5:00 DATABASE PLUS ARCHIVELOG;
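DURATION can be combined with options that control how the window is used (a hedged sketch): PARTIAL suppresses the error normally raised when the window expires before the backup completes, MINIMIZE TIME runs the backup as fast as possible, and MINIMIZE LOAD spreads the I/O evenly across the whole window:

RMAN>BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;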

#################################################################################################################

17) Flashback archive (an 11g feature):
===================

Flashback Data Archive: Overview
================================

A flashback data archive is a new database object: a logical container for storing historical information. It is stored in one or more tablespaces and tracks the history for one or more tables. You specify a retention duration for each flashback data archive. You can group the historical table data by your retention requirements in a flashback data archive. Multiple tables can share the same retention and purge policies.
With the Oracle Total Recall option, Oracle Database 11g has been specifically enhanced to track history with minimal performance impact and to store historical data in compressed form. This efficiency cannot be duplicated by your own triggers, which also cost time and effort to set up and maintain.

A flashback data archive is a historical data store. Oracle Database 11g automatically tracks and archives the data in tables enabled for Flashback Data Archive with a new Flashback Data Archive background process, FBDA. You use this feature to satisfy retention requirements that exceed the undo retention. Flashback data archives ensure that flashback queries obtain SQL-level access to the versions of database objects without getting a “snapshot too old error.”
A flashback data archive consists of one or more tablespaces or parts thereof. You can have multiple flashback data archives. Each is configured with a specific retention duration. Based on your retention duration requirements, you should create different flashback data archives—for example, one for all records that must be kept for one year, another for all records that must be kept for two years, and so on.

Flashback Data Archive: Workflow
================================

1) Create the flashback data archive.
2) Optionally, specify the default flashback data archive.
3) Enable the flashback data archive.
4) View flashback data archive data.

Create the flashback data archive:
sql> CREATE FLASHBACK ARCHIVE fla1 TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;

Enable history tracking for a table in the archive:
sql>ALTER TABLE inventory FLASHBACK ARCHIVE fla1;
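Step 4 of the workflow, viewing the archived data, is simply a flashback query against a tracked table; the dictionary views show the configured archives (hedged examples, assuming the inventory table enabled above):

sql> SELECT * FROM inventory AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
sql> SELECT * FROM dba_flashback_archive;
sql> SELECT * FROM dba_flashback_archive_tables;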

Configuring a Default Flashback Data Archive:
=============================================

Create a default flashback data archive:
sql> CREATE FLASHBACK ARCHIVE DEFAULT fla2 TABLESPACE tbs1 QUOTA 10G RETENTION 2 YEAR;

Enable history tracking for a table:
sql> ALTER TABLE stock_data FLASHBACK ARCHIVE;

Disable history tracking:
===========================

sql>ALTER TABLE stock_data NO FLASHBACK ARCHIVE;

Maintaining Flashback Data Archives:
====================================

Adding space:

sql>ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;

Changing retention time:

sql>ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

Dropping a flashback data archive:

sql>DROP FLASHBACK ARCHIVE fla1;
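You can also purge archived history manually instead of waiting for the retention period to expire (a hedged example):

sql>ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);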

################################################################################################################
case 18) Performing Regular Incremental Backups:
=======================================

RMAN>BACKUP INCREMENTAL LEVEL 0 DATABASE; (base backup of all used blocks)
RMAN>BACKUP INCREMENTAL LEVEL 1 DATABASE; (differential: blocks changed since the most recent level 0 or level 1)
RMAN>BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; (cumulative: blocks changed since the most recent level 0)

Note: only incremental levels 0 and 1 are supported in 11g; a level 1 backup is differential by default and cumulative only with the CUMULATIVE keyword.

Enabling Block Change Tracking:
===============================

Perform these steps on the target database:

sql>ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/oradata/blk_cng_trk.dbf';

To check whether block change tracking is enabled, query v$block_change_tracking.
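For example (a minimal check; STATUS shows ENABLED or DISABLED):

sql> SELECT status, filename FROM v$block_change_tracking;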

Similarly, you can disable block change tracking using the command given below.

sql>alter database disable block change tracking;

Performing Compressed Incremental Backups:
==========================================

RMAN>Backup as compressed backupset incremental level 0 database; (base backup)
RMAN>Backup as compressed backupset incremental level 1 database; (differential backup)
RMAN>Backup as compressed backupset incremental level 1 cumulative database; (cumulative backup)

##################################################################################################################
case 19) Keeping a backup forever:
=========================

Purpose:
——–

Use the keepOption subclause to specify the status of a backup or copy in relation to a retention policy.

Usage Notes:
————

RMAN does not consider backup pieces with the KEEP option when computing the backup retention policy. If available, RMAN uses these backups for disaster recovery restore operations, but their purpose is to produce a snapshot of the database that can be restored on another system for testing or historical usage.

Limitations and Restrictions:
—————————–

This option cannot be used to override the retention policy for files stored in the flash recovery area.

The KEEP UNTIL clause never causes RMAN to consider a backup obsolete, if it is still required to satisfy the retention policy. KEEP UNTIL can cause backups to be kept longer than the retention policy would otherwise require, but never causes a backup to become obsolete sooner than the retention policy requires.

Keywords and Parameters:
————————-

Syntax Element Description
============== =============================================================

KEEP Overrides any configured retention policy for this backup or copy so that the backup is not obsolete. The BACKUP … KEEP command specifies a new retention time for this backup. Use this option to create a long-term backup, that is, a backup that you want to archive.

FOREVER Specifies that the backup or copy never expires. You must use a recovery catalog when FOREVER is specified, because the backup records eventually age out of the control file.

UNTIL TIME = 'date_string' Specifies the date until which the backup or copy must be kept. You can either specify a specific time by using the current NLS_DATE_FORMAT, or a SQL date expression, such as 'SYSDATE+365'.

LOGS Specifies that all of the archived logs required to recover this backup or copy must remain available as long as this backup or copy is available

NOLOGS Specifies that this backup or copy cannot be recovered because the archived logs needed to recover this backup will not be kept. The only use for this backup or copy is to restore the database to the point in time that the backup or copy was taken. This is the only valid recoverability option when the database operates in NOARCHIVELOG mode. This option is not valid if the backup or copy is inconsistent

NOKEEP Specifies that the backup or copy expires according to the user's retention policy. This is the default behavior if no KEEP option is specified.

Syntax :
———

RMAN> RUN {
ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/u02/%U';
backup database keep forever logs;
}

Note: the problem is that the archived logs needed to recover this backup at any time after the backup will NEVER become obsolete.

You could try the NOLOGS option instead, but this requires the database to be in the MOUNT state.

RMAN> RUN {

ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/u02/%U';
backup database keep forever nologs;

}

This is fine if you can stop your database, but you probably want an online backup. What's next?
OK, here is the way to go: take your online backup, then mark what you want to keep!

RMAN> RUN {
ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/u02/%U';
backup database tag kf;
}

Now I can mark my backup as keep forever.

RMAN>change backup tag kf keep forever;
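If you later change your mind, the same command with NOKEEP should return the backup to normal retention policy handling:

RMAN>change backup tag kf nokeep;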

#################################################################################################################

case 20) How to take a backup without a catalog:
=====================================

Syntax with nocatalog:

$rman nocatalog target /

RMAN>BACKUP DATABASE PLUS ARCHIVELOG;
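Without a recovery catalog, RMAN stores its repository in the target database control file. How long those records are kept is governed by the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter (default 7 days); a hedged example of extending it:

sql> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=14;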

##################################################################################################################

Working with RMAN Stored Scripts in the Recovery Catalog:
=========================================================

Stored scripts offer an alternative to command files for managing frequently used sequences of RMAN commands.

Stored scripts can be global or local. A local stored script is associated with the target database to which RMAN is connected when the script is created, and can only be executed when you are connected to that target database. A global stored script can be run against any database registered in the recovery catalog, if the RMAN client is connected to the recovery catalog and a target database.

Creating Stored Scripts: CREATE SCRIPT:

Make sure RMAN is connected to the right target database and the recovery catalog. Then run the CREATE SCRIPT command, as shown in this example.

CREATE SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

Examine the output. If no errors are displayed, then the script was successfully created and stored in the recovery catalog.

For a global script, the syntax is similar:

CREATE GLOBAL SCRIPT global_full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

You can also provide a COMMENT with descriptive information:
————————————————————

CREATE GLOBAL SCRIPT global_full_backup
COMMENT 'use only with ARCHIVELOG mode databases'
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

Running Stored Scripts: EXECUTE SCRIPT:
—————————————

To run a stored script, connect to the target database and recovery catalog, and use EXECUTE SCRIPT. EXECUTE SCRIPT requires a RUN block, as shown:

RUN { EXECUTE SCRIPT full_backup; }

Displaying a Stored Script: PRINT SCRIPT
—————————————-

The PRINT SCRIPT command displays a stored script or writes it out to a file. With RMAN connected to the target database and recovery catalog, use the PRINT SCRIPT command as shown here.

PRINT SCRIPT full_backup;
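A few related commands for managing stored scripts (hedged sketches of standard RMAN syntax): REPLACE SCRIPT updates an existing script, LIST SCRIPT NAMES shows all scripts known to the recovery catalog, and DELETE SCRIPT removes one:

RMAN> REPLACE SCRIPT full_backup { BACKUP DATABASE PLUS ARCHIVELOG; DELETE OBSOLETE; }
RMAN> LIST SCRIPT NAMES;
RMAN> DELETE SCRIPT full_backup;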

##################################################################################################################

Using CROSSCHECK to Update the RMAN Repository:
===============================================

Crosschecks update outdated RMAN repository information about backups whose repository records do not match their physical status. For example, if a user removes archived logs from disk with an operating system command, the repository still indicates that the logs are on disk, when in fact they are not.

If the backup is on disk, then the CROSSCHECK command determines whether the header of the file is valid. If the backup is on tape, then the command simply checks that the backup exists. The possible status values for backups are AVAILABLE, UNAVAILABLE, and EXPIRED.

You can view the status of backups using the RMAN LIST command, or by querying V$BACKUP_FILES or many of the recovery catalog views such as RC_DATAFILE_COPY or RC_ARCHIVED_LOG. Running a crosscheck updates the RMAN repository so that all of these methods provide accurate information. For each backup in the RMAN repository, if the backup is no longer available, then RMAN marks it as EXPIRED. If it was EXPIRED and is now available, then RMAN marks it AVAILABLE.

This example shows how to crosscheck all backups:

RMAN>CROSSCHECK BACKUP;

This example shows how to crosscheck disk image copies only:

RMAN>CROSSCHECK COPY;

This example shows how to crosscheck backup sets only:

RMAN>CROSSCHECK BACKUPSET;

#################################################################################################################

Delete:
=========

The DELETE command deletes physical backups and copies, and also does the following:

a) Update their records in the target control file to status DELETED
b) Remove their records from the recovery catalog (if you use a catalog)

By default, DELETE displays a list of the files and prompts you for confirmation before deleting any file in the list, unless you are running a command file.

If you specify the EXPIRED option, then DELETE only removes files marked EXPIRED, that is, “not found,” by the CROSSCHECK command. Use the LIST command or query the recovery catalog views to determine which backups or copies are expired.

If you specify the OBSOLETE option, then DELETE removes files considered OBSOLETE, that is, “not needed,” by the retention policy, or because they are orphaned. Specify a retention policy by using CONFIGURE RETENTION POLICY or the REDUNDANCY and RECOVERY WINDOW options on the DELETE command.

Examples:
========

run {
CROSSCHECK BACKUP OF TABLESPACE users
DEVICE TYPE disk COMPLETED BEFORE 'SYSDATE-31';
DELETE NOPROMPT EXPIRED BACKUP OF TABLESPACE users
DEVICE TYPE disk COMPLETED BEFORE 'SYSDATE-31';
}

Deleting Obsolete Backups: Example

The following example deletes backups and copies that are not needed to recover the database to an arbitrary point within the last week. RMAN also deletes archived redo logs that are no longer needed:

RMAN>DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 7 DAYS;
RMAN>REPORT OBSOLETE;
RMAN>DELETE OBSOLETE;

##################################################################################################################

OCM Topics

Server Configuration

  • Create the database
  • Determine and set sizing parameters for database structures
  • Create and manage temporary, permanent, and undo tablespaces
  • Stripe data files across multiple physical devices and locations
  • Configure the database environment to support optimal data access performance
  • Create and manage database configuration files
  • Create and manage bigfile tablespaces
  • Create and Manage a tablespace that uses NFS mounted file system file
  • Create and manage multiple network configuration files
  • Create and configure a listener
  • Configure the database instance to support shared server connections
  • Set up network tracing
  • Manage Oracle network processes
  • Configure the network environment to allow connections to multiple databases
  • Use configurationless connections
  • Use Grid Infrastructure to manage Oracle databases and other resources

Enterprise Manager Grid Control

  • Install and Patch Enterprise Manager Grid Control software
  • Configure the Enterprise Manager repository
  • Create Enterprise Manager Grid Control users
  • Use Enterprise Manager to modify a database configuration
  • Configure Enterprise Manager to modify database availability
  • Create and manage jobs
  • Create and monitor alerts
  • Create notifications
  • Implement Grid Control and Database Control
  • Choose the appropriate tablespace type for the intended use
  • Create Scheduler jobs
  • Create schedules
  • Assign jobs to windows
  • Create programs
  • Create job classes
  • Install the Enterprise Manager Grid Control infrastructure
  • Deploy Enterprise Manager Grid Control agents
  • Configure Grid Control for business requirements

Managing Database Availability

  • Maintain recovery catalogs
  • Configure Recovery Manager
  • Use Recovery Manager to perform database backups
  • Use Recovery Manager to perform complete database restore and recovery operations
  • Configure RMAN
  • Create different types of RMAN backups to cater for different performance and retention requirements
  • Set Flashback Database parameters
  • Configure a Fast Recovery Area
  • Perform various recovery operations using Flashback technology

Data Management

  • Manage Materialized Views to improve rewrite and refresh performance
  • Configure and manage distributed materialized views
  • Create and Manage encrypted tablespaces
  • Manage Transport of tablespaces across platforms
  • Configure a schema to support a star transformation query
  • Administer external tables
  • Implement Data Pump export and import jobs for data transfer
  • Implement Data Pump to and from remote databases
  • Configure and use parallel execution for queries
  • Use SQL*Loader
  • Administer, manage and tune parallel execution

Data Warehouse Management

  • Administer partitioned tables and indexes using appropriate methods and keys
  • Perform partition maintenance operations
  • Maintain indexes on a partitioned table
  • Implement securefile LOB
  • Create and manage LOB segments
  • Implement fine-grained access control
  • Create and manage contexts
  • Administer flashback data archive and schema evolution
  • Configure and Manage Streams for Capture, Propagation and Apply

Performance Management

  • Administer Resource Manager
  • Use Result Cache
  • Use multi column statistics
  • Gather statistics on a specific table without invalidating cursors
  • Use partitioned indexes
  • Administer and tune schema object to support various access methods
  • Interpret execution plan
  • Use SQL tuning tools and features
  • Use SQL Tuning Advisor
  • Use SQL Access Advisor
  • Use SQL Performance Analyzer
  • Configure baseline templates
  • Use SQL Plan Management feature
  • Implement instance caging

Grid Infrastructure and ASM

  • Install Oracle Grid Infrastructure
  • Create ASM Disk Groups
  • Create and manage an ASM instance
  • Implement ASM failure groups
  • Create an ACFS File System
  • Start, Stop, Configure and Administer Oracle Grid Infrastructure

Real Application Clusters

  • Install the Oracle Database 11gR2 software
  • Configure ASM for the shared disks and create a clustered database
  • Configure archiving
  • Configure Services using both Manual and Policy Managed Methods

Data Guard

  • Create Physical Standby Database with real-time apply
  • Configure the data guard environment to reduce overheads of fast incremental backups on the primary database
  • Configure the Observer
  • Switchover and switch back
  • Configure connect time failover
  • Convert the standby to a snapshot standby
  • Configure the archivelog deletion policy for the Data Guard configuration