Wednesday, April 18, 2012

11g R2 RAC Grid Installation

Recently I installed Oracle Real Application Clusters 11g R2 on Enterprise Linux 5.7. Many new features have been introduced in the 11g R2 version.


The first thing to note is that ASM is now shipped along with the Clusterware software; Oracle calls this combined package the Grid Infrastructure software, and it eliminates the need for a separate ASM_HOME. Both CRS and ASM live in the same home, which we usually name GRID_HOME.


Unlike 10g R2, the Grid installation takes care of most things on its own, such as configuring SSH and installing the cvuqdisk-1.0.9-1 package (installed automatically while running the root.sh script from the grid home).
The cvuqdisk-1.0.9-1 RPM is mandatory for checking the health of the hardware and operating system (HWOS), and this check should be done before beginning the grid installation.

$ runcluvfy.sh stage -pre hwos -n rac1,rac2    (here rac1 and rac2 are the node names)

The purpose of running the above command is to ensure that the shared storage is accessible by all nodes in the cluster.
In the same way, we can check the cluster readiness for the CRS installation by using the command below.

$ runcluvfy.sh stage -pre crsinst -n rac1,rac2    --> runcluvfy.sh is executed from the Grid software staging location

The crsinst stage checks node reachability, physical memory, the list of required RPMs installed on the host, user equivalence, node connectivity, and so on.
Since SSH is configured during the grid installation, running the cluster verification utility pre-check before beginning the grid installation will definitely fail, because the nodes are not reachable over SSH until user equivalence is configured.
The best practice is to configure SSH manually, run the cluster verification utility, and make sure it does not return any errors.
Any errors it does return should be resolved before beginning the grid installation.
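
A rough sketch of the manual SSH setup, assuming the oracle user and the node names rac1 and rac2 used above (adjust the user and hosts for your environment):

$ ssh-keygen -t rsa                  --> run as oracle on each node; accept the default file and an empty passphrase
$ ssh-copy-id oracle@rac1            --> copy the public key to every node, including the local one
$ ssh-copy-id oracle@rac2
$ ssh rac2 date                      --> should return the date without prompting for a password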

After the grid installation we should run the cluster verification utility post-check:

$ runcluvfy.sh stage -post crsinst -n rac1,rac2

OCR and voting disks are stored in ASM itself! From 11g R2 onwards these two files should be stored in ASM, usually in a separate diskgroup. Prior to 11g R2 they were kept on another file system such as OCFS or on raw devices. Normally, before starting the clusterware, Oracle reads the OCR file and starts up CRS based on the information in it. But here we are storing the OCR in ASM, which is itself part of the clusterware, so Oracle cannot read the OCR to start the clusterware processes!

In order to overcome this situation Oracle has introduced a new concept called the Oracle Local Registry (OLR), which holds the information needed to start CRS. Each node has its own OLR file.
The OLR is located under $GRID_HOME/cdata/.
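
As a quick check, the OLR can be inspected on each node with ocrcheck and the -local flag (a sketch; the actual file under cdata is named after the host, for example rac1.olr):

# ocrcheck -local                    --> run as root; shows the OLR version, space used and file location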

During the cluster installation we can create only one ASM diskgroup, and that one is for the OCR and voting disk.
Once the grid installation is complete, we can manually invoke ASMCA and create additional ASM diskgroups.
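
As an alternative to ASMCA, a diskgroup can also be created from the ASM instance with SQL; a minimal sketch, assuming a hypothetical ASMLib disk /dev/oracleasm/disks/DATA1 and external redundancy:

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
     DISK '/dev/oracleasm/disks/DATA1';

SQL> SELECT name, state, total_mb FROM v$asm_diskgroup;

When a diskgroup is created this way it is mounted only on the local ASM instance; on the remaining nodes it still has to be mounted (ALTER DISKGROUP DATA MOUNT), whereas ASMCA takes care of that across the cluster.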

Below is the output of the root.sh script.

[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g


The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.
Disk Group CRS_VOTE created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 287cf7a3ca8e4f21bfc7d64c8dd3d3f3.
Successfully replaced voting disk group with +CRS_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   287cf7a3ca8e4f21bfc7d64c8dd3d3f3 (/dev/oracleasm/disks/CRS_VOTE) [CRS_VOTE]     --> now the voting disk is placed in the CRS_VOTE diskgroup
Located 1 voting disk(s).  --> it does not use any redundancy (external redundancy, so only a single voting disk)
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATAFILE.dg' on 'rac1'
CRS-2676: Start of 'ora.DATAFILE.dg' on 'rac1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 grid]#



While running the root.sh script, the following things happen:


1. The ASM instance and the diskgroup (for OCR & voting disk) get created.
2. The OCR and voting disk are automatically placed in the diskgroup that was specified during the grid installation.
3. The cvuqdisk-1.0.9-1 package is installed (required for the HW & OS pre-check of the cluster verification utility).
4. ASM, the diskgroups and the cssd services are started.
5. Clusterware entries are added to inittab.
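
Once root.sh has completed on all nodes, the state of the stack can be verified from GRID_HOME/bin; a sketch of the usual sanity checks:

$ crsctl check crs                   --> confirms the CRS, CSS and EVM daemons are online
$ crsctl stat res -t                 --> lists all cluster resources and their state on each node
$ crsctl query css votedisk          --> shows the voting disk(s) and the diskgroup they belong to
# ocrcheck                           --> run as root; verifies OCR integrity and shows its location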

Friday, April 13, 2012

How to find the ORACLE_HOME path from within the Database

In 9i:

SELECT substr(file_spec,1,instr(file_spec,'lib')-2) ORACLE_HOME FROM dba_libraries
WHERE library_name='DBMS_SUMADV_LIB';


In 10g:

SQL> var OHM varchar2(100);
SQL> exec dbms_system.get_env('ORACLE_HOME', :OHM);
SQL> print OHM

Linux/Unix:
echo $ORACLE_HOME

Tuesday, April 10, 2012

Kill a session which is stuck in KILLED status

A session killed with ALTER SYSTEM KILL SESSION can stay marked as KILLED forever

Sometimes, even after we kill sessions at the database level, those sessions may still exist with their status showing KILLED.

In that case, use the query below to find which OS process to kill.

SQL> SELECT SPID FROM V$PROCESS WHERE NOT EXISTS (SELECT 1 FROM V$SESSION WHERE PADDR = ADDR);

The SPID returned by the above query needs to be killed at the OS level.
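
For example, if the query returns SPID 12345 (a hypothetical value), the orphaned server process would be killed at the OS level like this:

$ kill -9 12345                      --> Linux/Unix
C:\> orakill ORCL 12345              --> Windows, where ORCL is the instance SID and 12345 is the thread id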