Tuesday, December 18, 2012

How to get ASM disks and their associated physical device.

Have you ever wondered how to find the physical device behind an ASM disk? Even though it is not strictly a DBA's responsibility, knowing how to do this can reduce your dependency on the Unix admins!

The oracleasm utility provides a listdisks option to list the ASM disk labels (/etc/init.d/oracleasm listdisks) and a querydisk option to get device information for a given label (/etc/init.d/oracleasm querydisk -d <label>). However, querydisk does not give you the actual path of the ASM device; it only returns the major and minor numbers of the device behind that label:


oracle:@lnxracdev /oracle >/etc/init.d/oracleasm querydisk -d DATA_DISK01
Disk "DATA_DISK01" is a valid ASM disk on device [253, 17]

To get the actual device names, we can use the Linux blkid command, which lists every device carrying the label of the ASM disk.

oracle:@lnxracdev /oracle > blkid | grep -i DATA_DISK01
/dev/mapper/data_disk1: LABEL="DATA_DISK01" TYPE="oracleasm"
/dev/sdd: LABEL="DATA_DISK01" TYPE="oracleasm"
/dev/sdak: LABEL="DATA_DISK01" TYPE="oracleasm"
/dev/sdbr: LABEL="DATA_DISK01" TYPE="oracleasm"
/dev/sdcy: LABEL="DATA_DISK01" TYPE="oracleasm"

In the above example, /dev/mapper/data_disk1 is the multipath device used for DATA_DISK01, and the four /dev/sd* entries below it are the underlying paths to the same LUN.
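
If you want to cross-check the blkid result against the major/minor pair reported by querydisk, a simple way (a sketch, assuming the /dev/mapper/data_disk1 device from the example above) is to list the device node and compare the two numbers printed before the date with [253, 17]:

oracle:@lnxracdev /oracle > ls -l /dev/mapper/data_disk1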

Saturday, October 27, 2012

Migrate datafile from filesystem into ASM

In this post, I am going to demonstrate how to move a datafile that was wrongly created on the filesystem instead of in ASM storage.

You may have a question here: why would a datafile get created on the filesystem instead of in an ASM diskgroup? Well, while adding a datafile to an ASM diskgroup there is a possibility that the file gets created on the filesystem even though you specified the appropriate diskgroup name!

Still not clear? Read on.

For example, I have a tablespace named USERS which has one datafile created in the +DATAFILE_ORATST diskgroup.
  
FILE_ID FILE_NAME                                                 Size_MB
------- ------------------------------------------------------ ----------
      5 +DATAFILE_ORATST/ortst01/datafile/users.290.791711667    50.0625

Now I am going to add another 100MB datafile to the USERS tablespace.
  
SQL> alter tablespace users add datafile ' +DATAFILE_ORATST(datafile)' size 100m;

Tablespace altered.
If you look at the above statement carefully, you can see there is a white space before the diskgroup name. If you leave a space between the single quote and the + symbol, Oracle does not recognize the string as a diskgroup specification, even though you typed the exact diskgroup name; instead, the new datafile gets created in the $ORACLE_HOME/dbs directory.
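
For comparison, this is how the statement should have looked, with no space between the quote and the + sign (a sketch using the same diskgroup; the plain diskgroup name is enough and Oracle generates the file name):

SQL> alter tablespace users add datafile '+DATAFILE_ORATST' size 100m;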
  
SQL>select file_id,file_name,bytes/1024/1024 "Size_MB" from dba_data_files where tablespace_name='USERS';
 
 FILE_ID FILE_NAME                                                     Size_MB
------- ---------------------------------------------------------- ----------
      5 +DATAFILE_ORATST/ortst01/datafile/users.290.791711667        50.0625
     10 /oracle/product/10.2.0.4.2/dbs/ +DATAFILE_ORATST(datafile)   100
Here is where the actual conversion starts. First, take that datafile offline.
  
SQL> alter database datafile 10 offline;

Database altered.

SQL>select file_name,status,online_status from dba_data_files where tablespace_name='USERS';

FILE_NAME                                                   STATUS    ONLINE_
----------------------------------------------------------  --------- -------
+DATAFILE_ORATST/ortst01/datafile/users.290.791711667      AVAILABLE  ONLINE
/oracle/product/10.2.0.4.2/dbs/ +DATAFILE_ORATST(datafile) AVAILABLE  RECOVER
Connect to the target database using RMAN (a catalog connection is not required here). Then take a copy of the datafile into the intended diskgroup, i.e. +DATAFILE_ORATST, followed by a switch of the datafile to that copy so the new location is updated in the controlfile.
  
oracle:ortst01@orahst /oracle> rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Mon Oct 22 04:07:55 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: ORATST01 (DBID=85768739)

RMAN>backup as copy datafile 10 format '+DATAFILE_ORATST';

Starting backup at 22-OCT-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=269 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=274 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00010 name=/oracle/product/10.2.0.4.2/dbs/ +DATAFILE_ORATST(datafile)
output filename=+DATAFILE_ORATST/ORATST01/datafile/users.257.797314131 tag=TAG20121022T040850 recid=21 stamp=797314133
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 22-OCT-12

RMAN> switch datafile 10 to copy;

datafile 10 switched to datafile copy "+DATAFILE_ORATST/ORATST01/datafile/users.257.797314131"
Now the wrongly created datafile has been moved into ASM storage. Next, recover the datafile to bring it to a consistent state and then bring it online.
  
SQL> select name from v$datafile where file#=10;

NAME
--------------------------------------------------------
+DATAFILE_ORATST/ortst01/datafile/users.257.797314131

SQL> recover datafile 10;
Media recovery complete.
SQL>
SQL> alter database datafile 10 online;

Database altered.

FILE_ID FILE_NAME                                                      Size_MB
------- ----------------------------------------------------------- ----------
      5 +DATAFILE_ORATST/ortst01/datafile/users.290.791711667          50.0625
     10 +DATAFILE_ORATST/ortst01/datafile/users.257.797314131              100

Now we can manually remove the old datafile left behind in the $ORACLE_HOME/dbs directory.
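
Because the leftover file name begins with a space and contains parentheses, quote it when removing it at the OS level. A sketch, assuming the path shown above:

oracle:ortst01@orahst /oracle> rm '/oracle/product/10.2.0.4.2/dbs/ +DATAFILE_ORATST(datafile)'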

I hope this article helped you. Suggestions and feedback are welcome.

Thursday, October 11, 2012

How to run the DBVerify utility on datafiles stored in ASM.

Generally we use the DBVerify (dbv) utility to check whether any blocks of a datafile are corrupted. For files stored in ASM, however, we need to include one more parameter named userid, which takes a username and password.
  
oracle:ortestdb@lnxrecov /oracle > dbv file=+DATA/ortestdb/datafile/users.411.795847253 blocksize=8192 userid=sys/*****

DBVERIFY: Release 10.2.0.4.0 - Production on Thu Oct 11 03:25:50 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

DBVERIFY - Verification starting : FILE = +DATA/ortestdb/datafile/users.411.795847253

DBVERIFY - Verification complete

Total Pages Examined         : 25600
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 6
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 47
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 25547
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 0 (0.0)
oracle:ortestdb@lnxrecov /oracle >
For ASM files, if we omit the userid parameter we end up with an error like the one below:
  
oracle:ortestdb@lnxrecov /oracle > dbv file=+DATA/ortestdb/datafile/users.411.795847253 blocksize=8192

DBVERIFY: Release 10.2.0.4.0 - Production on Thu Oct 11 03:36:46 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

DBV-00008: USERID must be specified for OSM files
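
dbv expects the block size of the file being checked. If you are not sure of it, a quick check of the database default block size (a sketch) is:

SQL> show parameter db_block_size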

Tuesday, September 18, 2012

Convert datafile from one diskgroup into another.

A few days back, one of my colleagues made a mistake while adding a new datafile to an ASM diskgroup: the datafile was added to the +DATAFILE diskgroup instead of +DATAFILE_PRODDB. I followed the steps below to move that datafile into the +DATAFILE_PRODDB diskgroup.
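
Before touching anything, a quick query like the one below (a sketch; +DATAFILE_PRODDB is assumed to be the diskgroup where every datafile should live) lists any datafile that ended up somewhere else:

SQL> select file_id, file_name from dba_data_files where file_name not like '+DATAFILE_PRODDB/%';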

Get the datafile info which was wrongly added.
SQL> select file_id,file_name,online_status from dba_Data_Files where tablespace_name='TOOLS';

   FILE_ID FILE_NAME                                          ONLINE_
---------- -------------------------------------------------- -------
         4 +DATAFILE/orreco42/datafile/tools.532.793507573     ONLINE

Make that datafile offline.
SQL> alter database datafile 4 offline;

Database altered.

Now connect to RMAN and make an image copy of the datafile in the new diskgroup.
oracle:orreco42@lnxora01 /oracle > rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Tue Sep 18 07:10:27 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: ORRECO42 (DBID=4078254412)

RMAN> backup as copy datafile 4 format '+DATAFILE_PRODDB';

Starting backup at 18-SEP-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=64 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=+DATAFILE/orreco42/datafile/tools.532.793507573
output filename=+DATAFILE_PRODDB/orreco42/datafile/tools.456.794301047 tag=TAG20120918T071045 recid=2 stamp=794301047
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:02
Finished backup at 18-SEP-12

RMAN> exit

Recovery Manager complete.

Then we need to update the controlfile with the new datafile location. The new name of datafile 4 can be obtained from the RMAN output above (see the output filename).

While making this change, the controlfile is updated, and at the same time the old file that was created in the +DATAFILE diskgroup is removed automatically.
SQL> alter database rename file '+DATAFILE/orreco42/datafile/tools.532.793507573' to '+DATAFILE_PRODDB/orreco42/datafile/tools.456.794301047';
Database altered.

Now connect to RMAN and switch datafile 4 to the copy (i.e. the image copy).
RMAN> switch datafile 4 to copy;

using target database control file instead of recovery catalog
datafile 4 switched to datafile copy "+DATAFILE_PRODDB/orreco42/datafile/tools.456.794301047"

RMAN>

RMAN> recover datafile 4;

Starting recover at 18-SEP-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=54 devtype=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:09

Finished recover at 18-SEP-12

RMAN>

RMAN> sql 'alter database datafile 4 online';

sql statement: alter database datafile 4 online


Before bringing the datafile online we need to recover it. Because we took datafile 4 offline, trying to bring it online without recovery fails with "datafile 4 needs media recovery". At this stage datafile 4 is in RECOVER status, so we must recover it to bring it back to a consistent state.

Now you can see datafile 4 has been renamed into the appropriate diskgroup.
SQL> select file_id,file_name,online_status from dba_Data_Files where tablespace_name='TOOLS';

   FILE_ID FILE_NAME                                     ONLINE_
---------- --------------------------------------------- -------
         4 +DATAFILE_PRODDB/orreco42/datafile/tools.456.794301047   ONLINE


I hope this article helped you. Suggestions and feedback are welcome.

Monday, July 30, 2012

My First Perl Program

I have started learning the Perl scripting language.

This is my first Perl script.

#!C:\Perl\bin\perl

print "What is the sum of 10+20?\n";
$your_ans = <STDIN>;    # read the answer from standard input
chop($your_ans);        # remove the trailing newline
$org_val = 30;
until ($your_ans == $org_val)
{
    print "Wrong, Keep try!\n";
    $your_ans = <STDIN>;
    chop($your_ans);
}
print "Yes, you got it.";

Output:

C:\siva>perl test.pl

What is the sum of 10+20?
50
Wrong, Keep try!
60
Wrong, Keep try!
30
Yes, you got it.

I'll try to give more info on perl scripting in future :)

Tuesday, June 19, 2012

Renaming ASM Diskgroup

Note: The example described here is for renaming an EMPTY diskgroup.

While working on a filesystem-to-ASM migration project, I had created the necessary diskgroups as listed below.
 NAME                         STATE
---------------------------- ------------
ARCHLOG                      MOUNTED
DATAFILE_ORAPROD6            MOUNTED
DATAFILE_ORAPRD7             MOUNTED
LOG1                         MOUNTED
LOG2                         MOUNTED
Before starting the actual migration, I found that the DATAFILE_ORAPRD7 diskgroup had a typo; it should have been DATAFILE_ORAPROD7. Instead of dropping the diskgroup, I decided to rename it. Below are the steps I followed to rename a diskgroup in an 11.2 grid environment. First we need to dismount the diskgroup that is going to be renamed.
oracle:+ASM@linuxprd5 /oracle > asmcmd umount DATAFILE_ORAPRD7
Check the Status of Diskgroup.
oracle:+ASM@linuxprd5 /oracle > crsctl stat res ora.DATAFILE_ORAPRD7.dg
NAME=ora.DATAFILE_ORAPRD7.dg
TYPE=ora.diskgroup.type
TARGET=OFFLINE
STATE=OFFLINE
Now we are good to go with renaming the diskgroup.
In 11.2 grid we can use the "renamedg" command for this.
oracle:+ASM@linuxprd5 /oracle > renamedg phase=both dgname=DATAFILE_ORAPRD7 newdgname=DATAFILE_ORAPROD7 verbose=true

Parsing parameters..

Parameters in effect:

         Old DG name       : DATAFILE_ORAPRD7
         New DG name          : DATAFILE_ORAPROD7
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=DATAFILE_ORAPRD7 newdgname=DATAFILE_ORAPROD7 verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA_DISK1 with disk number:0 and timestamp (32971243 -722582528)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA_DISK2 with disk number:1 and timestamp (32971243 -722582528)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA_DISK1 with disk number:0 and timestamp (32971243 -722582528)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DATA_DISK2 with disk number:1 and timestamp (32971243 -722582528)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for ORCL:DATA_DISK1
Modifying the header
Looking for ORCL:DATA_DISK2
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b4c8410a0a0
oracle:+ASM@linuxprd5 /oracle >
Connect to ASM Instance and verify the new diskgroup name.
oracle:+ASM@linuxprd5 /oracle > sqlplus / as sysasm
SQL*Plus: Release 11.2.0.3.0 Production on Tue Jun 19 03:17:26 2012


Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option


SQL> select name ,state from v$asm_diskgroup;
NAME                          STATE
----------------------------- ----------------
DATAFILE_ORAPROD7             DISMOUNTED  ---> The diskgroup has been renamed.
DATAFILE_ORAPROD6             MOUNTED
ARCHLOG                       MOUNTED
LOG1                          MOUNTED
LOG2                          MOUNTED


SQL> alter diskgroup DATAFILE_ORAPROD7 mount;
Diskgroup altered.


SQL> select name ,state from v$asm_diskgroup;
NAME                          STATE
----------------------------- -------------
DATAFILE_ORAPROD7             MOUNTED
DATAFILE_ORAPROD6             MOUNTED
ARCHLOG                       MOUNTED
LOG1                          MOUNTED
LOG2                          MOUNTED

After renaming the diskgroup, we also need to remove the old diskgroup resource from the cluster.
oracle:+ASM@linuxprd5 /oracle > crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCHLOG.dg
               ONLINE  ONLINE       linuxprd5
ora.DATAFILE_ORAPROD6.dg
               ONLINE  ONLINE       linuxprd5
ora.DATAFILE_ORAPRD7.dg
               OFFLINE OFFLINE      linuxprd5   --> This resource has not been removed from the cluster yet; it is just OFFLINE.
ora.DATAFILE_ORAPROD7.dg
               ONLINE  ONLINE       linuxprd5
ora.LOG1.dg
               ONLINE  ONLINE       linuxprd5
ora.LOG2.dg
               ONLINE  ONLINE       linuxprd5
ora.asm
               ONLINE  ONLINE       linuxprd5                 Started
ora.ons
               OFFLINE OFFLINE      linuxprd5
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       linuxprd5
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       linuxprd5
Below is the detailed status of the old and new diskgroup resources.
oracle:+ASM@linuxprd5 /oracle > crsctl stat res ora.DATAFILE_ORAPRD7.dg ora.DATAFILE_ORAPROD7.dg
NAME=ora.DATAFILE_ORAPRD7.dg
TYPE=ora.diskgroup.type
TARGET=OFFLINE
STATE=OFFLINE


NAME=ora.DATAFILE_ORAPROD7.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE on linuxprd5
To remove the old diskgroup resource from the cluster we can use the SRVCTL command.
oracle:+ASM@linuxprd5 /oracle > srvctl remove diskgroup -g DATAFILE_ORAPRD7


oracle:+ASM@linuxprd5 /oracle > crsctl stat res -t
-------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
-------------------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------------------
ora.ARCHLOG.dg
               ONLINE  ONLINE       linuxprd5
ora.DATAFILE_ORAPROD6.dg
               ONLINE  ONLINE       linuxprd5
ora.DATAFILE_ORAPROD7.dg
               ONLINE  ONLINE       linuxprd5
ora.LOG1.dg
               ONLINE  ONLINE       linuxprd5
ora.LOG2.dg
               ONLINE  ONLINE       linuxprd5
ora.asm
               ONLINE  ONLINE       linuxprd5                 Started
ora.ons
               OFFLINE OFFLINE      linuxprd5
-------------------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       linuxprd5
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       linuxprd5
How do I rename an ASM diskgroup with datafiles in it?

Before renaming a non-empty diskgroup, we need to take OFFLINE all the datafiles that live in that diskgroup. Then perform the rename as described above, and finally rename each of those datafiles as shown below so that the new location is updated in the controlfile.

"alter database datafile file# 'old_path' to 'new_path';

Then recover those datafiles and bring them online.
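
Here is a condensed sketch of that sequence for a single datafile. +OLD_DG and +NEW_DG are placeholder diskgroup names and the file name is illustrative; after renamedg only the diskgroup part of each path changes:

SQL> alter database datafile '+OLD_DG/proddb/datafile/users.290.791711667' offline;
-- dismount +OLD_DG, run renamedg as shown earlier, then mount +NEW_DG
SQL> alter database rename file '+OLD_DG/proddb/datafile/users.290.791711667' to '+NEW_DG/proddb/datafile/users.290.791711667';
SQL> recover datafile '+NEW_DG/proddb/datafile/users.290.791711667';
SQL> alter database datafile '+NEW_DG/proddb/datafile/users.290.791711667' online;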

Monday, June 18, 2012

Stopping the ASM instance on a standalone server.

A couple of days back, I got a project case to migrate a database from filesystem to ASM. After installing the Grid Infrastructure binaries for a standalone server, I created the ASM instance manually and then created the necessary diskgroups as below.
NAME                         STATE
---------------------------- -------
ARCHLOG                      MOUNTED
DATAFILE_ORAPROD6            MOUNTED
DATAFILE_ORAPROD7            MOUNTED
LOG1                         MOUNTED
LOG2                         MOUNTED
Before creating the above diskgroups I was able to start and stop the ASM instance, but after the diskgroup creation I could no longer stop the ASM instance using the SRVCTL command. When I tried to stop it, it failed with the error below.
oracle:+ASM@linuxora15 /oracle > srvctl stop asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.LOG2.dg', but the force option was not specified
Then I observed that in a single-instance configuration, if any diskgroup is mounted we cannot stop the ASM instance until we dismount all the diskgroups of that instance. This is because, from 11g R2 onwards, each diskgroup has its own cluster resource on each node once the ASM instance is started.
oracle:+ASM@linuxora15 /oracle > crs_stat -t
Name           Type           Target    State     Host
-----------------------------------------------------------
ora.ARCHLOG.dg ora....up.type ONLINE    ONLINE    linuxora15
ora....ROD6.dg ora....up.type ONLINE    ONLINE    linuxora15
ora....ROD7.dg ora....up.type ONLINE    ONLINE    linuxora15
ora.LOG1.dg    ora....up.type ONLINE    ONLINE    linuxora15
ora.LOG2.dg    ora....up.type ONLINE    ONLINE    linuxora15
ora.asm        ora.asm.type   ONLINE    ONLINE    linuxora15
ora.cssd       ora.cssd.type  ONLINE    ONLINE    linuxora15
ora.diskmon    ora....on.type OFFLINE   OFFLINE
ora.evmd       ora.evm.type   ONLINE    ONLINE    linuxora15
ora.ons        ora.ons.type   OFFLINE   OFFLINE
Since all the diskgroup resources of this ASM instance were ONLINE, I was not able to stop the instance. In this case, follow the procedure below to stop ASM.
SQL> select name,state from v$asm_diskgroup;
NAME                         STATE
---------------------------- -----------
ARCHLOG                      MOUNTED
DATAFILE_ORAPROD6            MOUNTED
DATAFILE_ORAPROD7            MOUNTED
LOG1                         MOUNTED
LOG2                         MOUNTED

SQL> alter diskgroup all dismount; --> This dismounts all diskgroups on this instance.
Diskgroup altered.
Note: Before dismounting the diskgroups you should connect to the ASM instance with the SYSASM privilege.
SQL> select name,state from v$asm_diskgroup;
NAME                         STATE
---------------------------- -----------
DATAFILE_ORAPROD6            DISMOUNTED
DATAFILE_ORAPROD7            DISMOUNTED
LOG2                         DISMOUNTED
LOG1                         DISMOUNTED
ARCHLOG                      DISMOUNTED

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option
oracle:+ASM@linuxora15 /oracle >
oracle:+ASM@linuxora15 /oracle > srvctl stop asm

oracle:+ASM@linuxora15 /oracle > crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.ARCHLOG.dg ora....up.type OFFLINE   OFFLINE
ora....ROD6.dg ora....up.type OFFLINE   OFFLINE
ora....ROD7.dg ora....up.type OFFLINE   OFFLINE
ora.LOG1.dg    ora....up.type OFFLINE   OFFLINE
ora.LOG2.dg    ora....up.type OFFLINE   OFFLINE
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cssd       ora.cssd.type  ONLINE    ONLINE    linuxora15
ora.diskmon    ora....on.type OFFLINE   OFFLINE
ora.evmd       ora.evm.type   ONLINE    ONLINE    linuxora15
ora.ons        ora.ons.type   OFFLINE   OFFLINE
oracle:+ASM@linuxora15 /oracle >
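
As an alternative, srvctl also has a force option that stops the dependent diskgroup resources along with the instance; a sketch (worth testing in your own environment first):

oracle:+ASM@linuxora15 /oracle > srvctl stop asm -f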

Tuesday, May 29, 2012

Migrating Database from Filesystem to ASM

A couple of days back we got a project case to migrate a database from filesystem to ASM storage. I followed the procedure below to achieve this.

Before beginning this work, the Grid/ASM binaries should be installed and the necessary diskgroups created to accommodate the database on the new storage.

Note: Make sure that the diskgroup for datafiles is larger than the total database size. Below are the steps to follow for this migration.

Connect to the database and check the datafile locations, controlfile names, logfile members and current log sequence#.

[oracle@asm ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Tue May 29 08:04:25 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select name from v$datafile;

NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/abcd/system01.dbf
/u01/app/oracle/oradata/abcd/sysaux01.dbf
/u01/app/oracle/oradata/abcd/undotbs01.dbf
/u01/app/oracle/oradata/abcd/users01.dbf
/u01/app/oracle/oradata/abcd/example01.dbf

SQL> archive log list

Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/arch_dest
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- ---------- --- --------
         1          1          1   52428800        512          1 YES INACTIVE
         2          1          2   52428800        512          1 YES INACTIVE
         3          1          3   52428800        512          1 NO  CURRENT

SQL> select name from v$controlfile;

NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/abcd/control01.ctl
/u01/app/oracle/oradata/abcd/control02.ctl

SQL> show parameter db_create_file_des

NAME TYPE VALUE
------------------------------------ ----------- ----------------
db_create_file_dest string
Change the CONTROL_FILES and DB_CREATE_FILE_DEST parameters to point to the ASM diskgroups.
SQL> ALTER SYSTEM SET control_files='+DATAFILE','+ARCHLOG' scope=spfile;
System altered.

SQL> ALTER SYSTEM SET db_create_file_dest='+DATAFILE' SCOPE=spfile;
System altered.

SQL> create pfile='/tmp/pfile_abcd.ora' from spfile;
File created.

We need to make sure there is sufficient space in those diskgroups to hold the existing database.
Capacity planning should have been done before creating the ASM diskgroups; otherwise the conversion may fail with a diskgroup-space-exhausted error.
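
A rough way to compare the two (a sketch): run the first query against the database to get the total datafile size, and the second against the ASM instance to see the diskgroup capacity.

SQL> select round(sum(bytes)/1024/1024/1024,2) "DB_Size_GB" from v$datafile;
SQL> select name, total_mb, free_mb from v$asm_diskgroup;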


Now take a copy of the whole database into the new diskgroup.
RMAN> backup as copy database format '+DATAFILE';

Starting backup at 29-MAY-12

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=135 device type=DISK

channel ORA_DISK_1: starting datafile copy

input datafile file number=00001 name=/u01/app/oracle/oradata/abcd/system01.dbf

output file name=+DATAFILE/abcd/datafile/system.270.784541581 tag=TAG20120529T081300 RECID=2 STAMP=784541600

As you can see here, all files from the filesystem are being copied into ASM storage.

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25

channel ORA_DISK_1: starting datafile copy

input datafile file number=00002 name=/u01/app/oracle/oradata/abcd/sysaux01.dbf

output file name=+DATAFILE/abcd/datafile/sysaux.268.784541607 tag=TAG20120529T081300 RECID=3 STAMP=784541622

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25

channel ORA_DISK_1: starting datafile copy

input datafile file number=00005 name=/u01/app/oracle/oradata/abcd/example01.dbf

output file name=+DATAFILE/abcd/datafile/example.265.784541631 tag=TAG20120529T081300 RECID=4 STAMP=784541637

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile copy

input datafile file number=00003 name=/u01/app/oracle/oradata/abcd/undotbs01.dbf

output file name=+DATAFILE/abcd/datafile/undotbs1.264.784541639 tag=TAG20120529T081300 RECID=5 STAMP=784541639

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting datafile copy

input datafile file number=00004 name=/u01/app/oracle/oradata/abcd/users01.dbf

output file name=+DATAFILE/abcd/datafile/users.269.784541641 tag=TAG20120529T081300 RECID=6 STAMP=784541641

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

Finished backup at 29-MAY-12

As you can see from the above output, while taking the copy of the database, each datafile copy is written into the ASM diskgroup.

Note: This copy can be done before the outage begins, so the outage window can be reduced from hours to minutes.

After the migration, if you want to open the database without RESETLOGS, there is no need to take a backup of the controlfile.

Now we need to bounce the database.
While starting it up, place the instance in NOMOUNT state.
RMAN> shutdown immediate

database closed

database dismounted

Oracle instance shut down

RMAN> startup nomount

connected to target database (not started)
Oracle instance started
Total System Global Area 521936896 bytes

Fixed Size 2229944 bytes

Variable Size 159385928 bytes

Database Buffers 352321536 bytes

Redo Buffers 7999488 bytes

Now restore the controlfiles from any one of the multiplexed locations.
RMAN>restore controlfile from '/u01/app/oracle/oradata/abcd/control01.ctl';
Starting restore at 29-MAY-12

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=135 device type=DISK

channel ORA_DISK_1: copied control file copy

output file name=+DATAFILE/abcd/controlfile/current.256.784541557

output file name=+ARCHLOG/abcd/controlfile/current.256.784541557

Finished restore at 29-MAY-12

Here you can see the controlfiles are restored into the ASM diskgroups.

RMAN> sql 'alter database mount';

sql statement: alter database mount

released channel: ORA_DISK_1
As you can see, during the restore the controlfiles are written into the ASM diskgroups based on the locations specified in the parameter file.

At this point a copy of each datafile exists in the ASM diskgroup, so we need to update the controlfile with the new location of each datafile. To achieve this we switch the entire database to the copies (image copies) and recover the database.

RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATAFILE/abcd/datafile/system.270.784541581"

datafile 2 switched to datafile copy "+DATAFILE/abcd/datafile/sysaux.268.784541607"

datafile 3 switched to datafile copy "+DATAFILE/abcd/datafile/undotbs1.264.784541639"

datafile 4 switched to datafile copy "+DATAFILE/abcd/datafile/users.269.784541641"

datafile 5 switched to datafile copy "+DATAFILE/abcd/datafile/example.265.784541631"


RMAN> recover database;

Starting recover at 29-MAY-12

using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 3 is already on disk as file /u01/app/oracle/oradata/abcd/redo03.log

archived log file name=/u01/app/oracle/oradata/abcd/redo03.log thread=1 sequence=3

media recovery complete, elapsed time: 00:00:00

Finished recover at 29-MAY-12
Now we can open the database WITHOUT RESETLOGS.
RMAN> sql 'alter database open';

sql statement: alter database open
Now connect to the database and verify the datafile locations.
[oracle@asm ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Tue May 29 08:15:13 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- --------------------
ABCD READ WRITE

SQL> select name from v$datafile;

NAME
-----------------------------------------------------------------
+DATAFILE/abcd/datafile/system.270.784541581

+DATAFILE/abcd/datafile/sysaux.268.784541607

+DATAFILE/abcd/datafile/undotbs1.264.784541639

+DATAFILE/abcd/datafile/users.269.784541641

+DATAFILE/abcd/datafile/example.265.784541631
Now all the datafiles have been migrated into ASM storage! So far we have migrated only datafiles, but we have to migrate the tempfiles and online redo logs too. For the temporary files we can simply add a new tempfile in ASM and drop the old one manually, as sketched below.
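
A sketch for the tempfile part: add a new tempfile (with DB_CREATE_FILE_DEST set, it is created inside the diskgroup automatically) and then drop the old one. The old tempfile path below is an assumed example:

SQL> alter tablespace temp add tempfile size 2g;
SQL> alter tablespace temp drop tempfile '/u01/app/oracle/oradata/abcd/temp01.dbf';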
SQL> select member from v$logfile;

MEMBER
--------------------------------------------------
/u01/app/oracle/oradata/abcd/redo03.log

/u01/app/oracle/oradata/abcd/redo02.log

/u01/app/oracle/oradata/abcd/redo01.log
Migrating the redo logs into ASM is simple: just add new redo log groups and then drop the existing groups that are on the filesystem. Before adding the new groups, make sure the DB_CREATE_ONLINE_LOG_DEST_n parameters are set and point to ASM diskgroups.
SQL> show parameter db_create_online_log_dest

NAME                                 TYPE       VALUE
------------------------------------ ---------- ------
db_create_online_log_dest_1          string     +LOG1
db_create_online_log_dest_2          string     +LOG2
 
SQL> alter database add logfile group 4;
Database altered.

SQL> alter database add logfile group 5;
Database altered.

SQL> alter database add logfile group 6;
Database altered.

SQL> alter database drop logfile group 2;
Database altered.

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter system switch logfile;
System altered.

SQL> alter system checkpoint;
System altered.

We switch the logfile manually because the CURRENT group cannot be dropped.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#     BYTES BLOCKSIZE MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
------ ------- --------- --------- --------- ------- --- -------- ------------- --------- ------------ ---------
     1       1         1  52428800       512       1 YES INACTIVE       1046676 29-MAY-12      1047525 29-MAY-12
     4       1         2 104857600       512       1 NO  CURRENT        1047525 29-MAY-12   2.8147E+14
     5       1         0 104857600       512       1 YES UNUSED               0                      0
     6       1         0 104857600       512       1 YES UNUSED               0                      0

SQL> alter database drop logfile group 1;
Database altered.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#     BYTES BLOCKSIZE MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
------ ------- --------- --------- --------- ------- --- -------- ------------- --------- ------------ ---------
     4       1         2 104857600       512       1 NO  CURRENT        1047525 29-MAY-12   2.8147E+14
     5       1         0 104857600       512       1 YES UNUSED               0                      0
     6       1         0 104857600       512       1 YES UNUSED               0                      0


SQL> select member from v$Logfile;

MEMBER
---------------------------------------------------

+DATAFILE/abcd/onlinelog/group_4.266.784542027

+DATAFILE/abcd/onlinelog/group_5.267.784542035

+DATAFILE/abcd/onlinelog/group_6.271.784542041

Now the entire database has been migrated to ASM storage, and we opened the database WITHOUT RESETLOGS!

I hope this article helped you. Suggestions and feedback are welcome.

Wednesday, April 18, 2012

11g R2 RAC Grid Installation

Recently I installed Oracle Real Application Clusters 11g R2 on Enterprise Linux 5.7. Many new features have been introduced in the 11g R2 version.


First, ASM is now shipped along with the clusterware software; Oracle calls the combined product the Grid Infrastructure, which eliminates the need for a separate ASM_HOME. Both CRS and ASM live in the same home, which we usually call GRID_HOME.


Unlike 10g R2, the Grid installation takes care of most things on its own, like configuring SSH and installing the cvuqdisk-1.0.9-1 package (installed automatically while running the root.sh script from the grid home).
This cvuqdisk-1.0.9-1 RPM is required for the hardware and operating system (HWOS) health check, which should be done before beginning the grid installation.

$ runcluvfy.sh stage -post hwos -n rac1,rac2   (here rac1 and rac2 are the node names)

The purpose of running the above command is to ensure that the shared storage is accessible by all nodes in the cluster.
In the same way, we can check readiness for the clusterware installation with the command below.

$ runcluvfy.sh stage -pre crsinst -n rac1,rac2   --> runcluvfy.sh is executed from the Grid installation media

crsinst checks node reachability, physical memory, the list of required RPMs installed on the host, user equivalence and node connectivity.
Since SSH is configured during the grid installation, running the cluster verification utility precheck before the installation will fail, because the nodes are not reachable over SSH while user equivalence has not yet been set up.
The best practice is to configure SSH manually, run the cluster verification utility, and make sure it does not return any errors.
Any errors it reports should be resolved before beginning the grid installation.

After the Grid installation we should run the cluster verification utility post-check:
$ runcluvfy.sh stage -post crsinst -n rac1,rac2

The OCR and voting disks are stored in ASM itself! From 11g R2 onwards these two can be stored in ASM, usually in a separate diskgroup. Prior to 11g R2 they were kept on another filesystem such as OCFS or on raw devices. Normally, before starting the clusterware, Oracle reads the OCR and uses the information in it to bring up CRS. But here we are storing the OCR inside ASM, which is itself managed by the clusterware, so the OCR cannot be read before the clusterware stack starts!

To overcome this, Oracle introduced the Oracle Local Registry (OLR), which holds the information required to start CRS. Each node has its own OLR file, located under $GRID_HOME/cdata/.

During the cluster installation we can create only one ASM diskgroup, and that one is for the OCR and voting disk.
Once the grid installation has completed, we can invoke ASMCA manually to create additional ASM diskgroups.

Below is the output of the root.sh script.

[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g


The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.
Disk Group CRS_VOTE created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 287cf7a3ca8e4f21bfc7d64c8dd3d3f3.
Successfully replaced voting disk group with +CRS_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   287cf7a3ca8e4f21bfc7d64c8dd3d3f3 (/dev/oracleasm/disks/CRS_VOTE) [CRS_VOTE]     ---> The voting disk is now placed in the CRS_VOTE diskgroup
Located 1 voting disk(s).  --> External redundancy was used, so there is only a single voting disk
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATAFILE.dg' on 'rac1'
CRS-2676: Start of 'ora.DATAFILE.dg' on 'rac1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 grid]#



While running the root.sh script, I found the following things happening:


1. The ASM instance and the diskgroup (for the OCR & voting disk) are created.
2. The OCR & voting disk are automatically placed in the diskgroup that was specified during the grid installation.
3. The cvuqdisk-1.0.9-1 package is installed (required for the HW & OS precheck in the cluster verification utility).
4. ASM, the diskgroups and the cssd service are started.
5. Clusterware entries are added to inittab.
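
After root.sh completes, a few quick checks (a sketch) confirm the OLR, the OCR and the voting disk placement:

[root@rac1 grid]# ocrcheck -local              --> verifies the node's Oracle Local Registry
[root@rac1 grid]# ocrcheck                     --> verifies the OCR stored in ASM
[root@rac1 grid]# crsctl query css votedisk    --> lists the voting disk(s) and their diskgroup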

Friday, April 13, 2012

How to find the ORACLE_HOME path from within the database

In 9i:

SELECT substr(file_spec,1,instr(file_spec,'lib')-2) ORACLE_HOME FROM dba_libraries
WHERE library_name='DBMS_SUMADV_LIB';


In 10g:

SQL > var OHM varchar2(100);
SQL > EXEC dbms_system.get_env('ORACLE_HOME', :OHM) ;
SQL > PRINT OHM

Linux/Unix:
echo $ORACLE_HOME

Tuesday, April 10, 2012

Kill session which has KILLED status

A session killed with ALTER SYSTEM KILL SESSION can stay marked as KILLED forever.

Sometimes, even after we kill sessions at the database level, those sessions still exist with a status of KILLED.

In that case, use the query below to find which OS process to kill.

SQL> SELECT SPID FROM V$PROCESS WHERE NOT EXISTS (SELECT 1 FROM V$SESSION WHERE PADDR = ADDR);

Then kill the SPIDs returned by the above query at the OS level.
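
For example, on Linux (a sketch; 12345 stands for one of the SPIDs returned by the query):

oracle:prod_db@prod /oracle > kill -9 12345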

Saturday, March 10, 2012

How to recover corrupted blocks (physical corruption)

Today the storage team added a new LUN to one of our production databases, after which the old LUNs became inaccessible. That means our index and datafile mount points were unmounted while the database was running.

The storage and Unix teams worked on the issue and finally restored those two filesystems.
After that we were able to start the production database successfully, and we came to know
about block corruption during the RMAN full backup.

Then, based on the RMAN error log, we used the DBVerify utility to check the number of corrupted blocks.
oracle:prod_db@prod /oracle > dbv file=/ORA/INDEX03/prod_db/indx1K.dbf blocksize=8192
DBVERIFY: Release 10.2.0.4.0 - Production on Wed Mar 7 17:39:48 2012
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
DBVERIFY - Verification starting : FILE = /ORA/INDEX03/prod_db/indx1K.dbf
Page 121023 is influx - most likely media corrupt
Corrupt block relative dba: 0x1101d8bf (file 68, block 121023)Fractured block found during dbv:
Data in bad block:
 type: 6 format: 2 rdba: 0x1101d8bf
 last change scn: 0x05c2.8be33eb5 seq: 0x2 flg: 0x04
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000000
 check value in block header: 0x2896
 computed block checksum: 0xd58f
Page 121024 is influx - most likely media corrupt
Corrupt block relative dba: 0x1101d8c0 (file 68, block 121024)Fractured block found during dbv:
Data in bad block:
 type: 0 format: 0 rdba: 0x00000000
 last change scn: 0x0000.00000000 seq: 0x0 flg: 0x00
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x3eb50602
 check value in block header: 0x0
 block checksum disabled

DBVERIFY - Verification complete
Total Pages Examined         : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 1024625
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 1989
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 21960
Total Pages Marked Corrupt   : 2
 Total Pages Influx           : 2
Highest block SCN            : 1262329760 (1494.1262329760)
oracle:prod_db@prod /oracle >   
From the above output we learned that blocks 121023 and 121024 of datafile 68 were corrupted. In total we had eight corrupted blocks across three different datafiles.

  
SQL>select * from v$database_block_corruption;

   FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
        63     499126          1                  0 CORRUPT
        62     351954          1                  0 CORRUPT
        68     121023          2                  0 FRACTURED
        62     351446          1                  0 CORRUPT
        62     352465          2                  0 FRACTURED
        63     499125          1                  0 FRACTURED
 
Initially we rebuilt the indexes whose corrupted blocks belonged to index segments, within the same tablespace. The index corruption was cleared, but DBVerify still reported the same blocks as corrupted.

So we created a new index tablespace and rebuilt all the indexes from the old tablespace into the new one.
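
A sketch of the rebuild, assuming an index called SCOTT.EMP_IDX and a new tablespace called INDX_NEW:

SQL> alter index scott.emp_idx rebuild tablespace indx_new online;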

I hope this article helped you. Suggestions and feedback are welcome.

Block corruption

Today we found block corruption (both table and index blocks) in one of our production databases.
We came to know about the corruption during the RMAN full backup, which initially reported physical block corruption.
After that we manually ran "backup validate check logical database" to find any logical block corruption in the database.
Once the RMAN validate command completes, it reports the block corruptions in the
V$DATABASE_BLOCK_CORRUPTION view; there we found that one table block had logical corruption.
SQL>select * from v$database_block_corruption;
   FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
        14     610816          1         6.4175E+12 LOGICAL        
        63     499126          1                  0 CORRUPT
        62     351954          1                  0 CORRUPT
        68     121023          2                  0 FRACTURED
        62     351446          1                  0 CORRUPT
        62     352465          2                  0 FRACTURED
        63     499125          1                  0 FRACTURED

Finally, there were six index blocks (physical corruption) and one table block (logical corruption) corrupted in our production database!
For the logical corruption we used the DBMS_REPAIR package to skip the corrupted block, and for the index corruption we rebuilt all the affected indexes
from that tablespace into a new tablespace and dropped the old one.
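
For the table block, the skip was set up roughly like this (a sketch; SCOTT.EMP is an assumed table name, and note that the rows in the skipped block are lost to queries):

SQL> exec dbms_repair.skip_corrupt_blocks(schema_name => 'SCOTT', object_name => 'EMP', object_type => dbms_repair.table_object, flags => dbms_repair.skip_flag);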

Note:
In this scenario, the reason we rebuild the indexes into a new tablespace is to eliminate the corrupted blocks. If you
rebuild the corrupted indexes in the same tablespace, the corrupted blocks are not removed and DBVerify will still report them.

Tuesday, February 21, 2012

How to share SAN storage across the cluster using Fibre Channel

In a RAC environment the storage must be shared, which means that all nodes in the cluster should be able to access the same storage array.

Suppose we are using Fibre Channel to connect the hosts to the SAN storage. Each host's HBA has a unique WWN (World Wide Name); by mapping the LUNs to the WWNs of all the hosts in the cluster, every node in the cluster is able to see those LUNs.

SCAN Listener

Single Client Access Name (SCAN) is a new Oracle RAC 11g R2 feature that provides a single name for clients to access Oracle databases running in a cluster.

The benefit is that the client's connect information does not need to change if you add or remove nodes in the cluster. It provides load balancing and failover for client connections to the database.

SCAN can be configured using either DNS (Domain Name Service) or GNS (Grid Naming Service).

The SCAN IP addresses must be on the same subnet as the public network of the cluster.

It uses a single name that resolves to multiple IP addresses; we can check the SCAN configuration in DNS using nslookup, as sketched below.
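
A sketch, using an assumed SCAN name rac-scan.example.com; nslookup should return the (typically three) IP addresses registered for the SCAN:

oracle:@rac1 /oracle > nslookup rac-scan.example.com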

How ASM disks are mounted during host startup/reboot when using ASMLIB

In a Linux environment we create ASM disks using the oracleasm utility, which becomes available after the ASMLIB packages have been installed. You might wonder how these ASM disks get mounted during host startup/reboot, because with this method the Linux admin does not add any entry for the new disk in either /etc/fstab or /etc/sysconfig/rawdevices.

The answer is that once a disk has been stamped using ASMLIB, all the ASM disks are made available at host startup when the ASMLIB (oracleasm) service starts and scans the disks.

Moreover, when a new disk is created with the createdisk command, the appropriate user, group and permissions are assigned to it automatically, as in the sketch below.
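
A short sketch of that flow, with an assumed label and device name; createdisk stamps the device on one node, while scandisks and listdisks make the new label visible on the other nodes:

/etc/init.d/oracleasm createdisk DATA_DISK02 /dev/mapper/data_disk2
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks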