
Clusterware Install:root.sh- Failure at final check of Oracle CRS stack. 10

Hello All,
Image: http://systemwars.com/rac/cluster_back.jpg
I was attempting to perform the steps in:
Link: http://www.oracle-base.com/articles/11g/OracleDB11gR1RACInstallationOnLinuxUsingNFS.php
The only difference is that I decided to use Fedora Core 12 instead. I did this because I added a second NIC (USB), and only FC12 would recognize it; I tried to get it to work on CentOS 5, but it just wouldn't. The second NIC on each machine (eth1) is connected via a crossover cable, and the interfaces can ping each other just fine (rac1-priv and rac2-priv).
So here is my setup:
# Public
192.168.2.11 rac1.localdomain rac1
192.168.2.12 rac2.localdomain rac2
#Private
192.168.0.11 rac1-priv.localdomain rac1-priv
192.168.0.12 rac2-priv.localdomain rac2-priv
#Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
#NAS
192.168.2.10 mini.localdomain mini
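As a quick sanity check of the entries above on each node, I use a throwaway script like the following. check_hosts is just a little helper of my own, not Oracle tooling: it flags any hostname or alias mapped to more than one IP, which is a classic cause of cluvfy and root.sh grief.

```shell
#!/bin/sh
# check_hosts: flag any hostname/alias in a hosts-format file that maps to
# more than one IP address (my own sketch, not part of any Oracle tool).
check_hosts() {
    awk '
        /^[[:space:]]*(#|$)/ { next }   # skip comments and blank lines
        {
            for (i = 2; i <= NF; i++) {
                if (($i in ip) && ip[$i] != $1) {
                    printf "DUPLICATE: %s maps to %s and %s\n", $i, ip[$i], $1
                    dup = 1
                } else {
                    ip[$i] = $1
                }
            }
        }
        END { exit dup }
    ' "$1" && echo "hosts file OK"
}

# Demo on the entries from this post:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
192.168.2.11 rac1.localdomain rac1
192.168.2.12 rac2.localdomain rac2
192.168.0.11 rac1-priv.localdomain rac1-priv
192.168.0.12 rac2-priv.localdomain rac2-priv
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
192.168.2.10 mini.localdomain mini
EOF
check_hosts "$tmp"   # prints "hosts file OK"
rm -f "$tmp"
```

In practice you would run `check_hosts /etc/hosts` on both nodes and confirm the files are identical.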
Mini refers to my Mac mini, which I decided to use as the third "server" in the group. I was able to mount and read/write the file systems just fine, as you can see:
[root@rac1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
8063408 5156268 2497540 68% /
tmpfs 1417456 0 1417456 0% /dev/shm
/dev/sda1 198337 22080 166017 12% /boot
mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
[root@rac1 ~]# ssh rac2
Last login: Mon Dec 21 19:33:38 2009 from rac1.localdomain
[root@rac2 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rac2-lv_root
8063408 4958008 2695800 65% /
tmpfs 1417456 0 1417456 0% /dev/shm
/dev/sda1 198337 22063 166034 12% /boot
mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
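For completeness: the shares are mounted with the NFS options from the oracle-base article. The /etc/fstab lines on both RAC nodes look roughly like this. This is a reconstruction from the article's recommendations rather than a paste from my systems, so double-check the options; actimeo=0 in particular matters for the OCR and voting disk share.

```
# /etc/fstab on rac1 and rac2 (sketch; options per the 11g-RAC-on-NFS article)
mini:/shared_config  /u01/shared_config                   nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
mini:/shared_crs     /u01/app/crs/product/11.1.0/crs      nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
mini:/shared_home    /u01/app/oracle/product/11.1.0/db_1  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
mini:/shared_data    /u01/oradata                         nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```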
CLUSTER VERIFY SEEMS OK APART FROM ONE WARNING
WARNING:
Could not find a suitable set of interfaces for VIPs.
which, according to this link, "can be safely ignored" (although I noticed that in the link it is an actual ERROR, not a WARNING) => http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_11.shtml. I also noted that it listed the public IPs as candidate private IPs, which I also assumed could safely be ignored.
[oracle@rac1 clusterware]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?             
  rac2                                  yes                    
  rac1                                  yes                    
Result: Node reachability check passed from node "rac1".
Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Comment                
  rac2                                  passed                 
  rac1                                  passed                 
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
  Node Name     User Exists               Comment                
  rac2          yes                       passed                 
  rac1          yes                       passed                 
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
  Node Name     Status                    Group ID               
  rac2          exists                    501                    
  rac1          exists                    501                    
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Comment    
  rac2              yes           yes           yes           yes           passed     
  rac1              yes           yes           yes           yes           passed     
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "rac2"
  Interface Name    IP Address    Subnet        Subnet Gateway  Default Gateway  Hardware Address
  eth0              192.168.2.12  192.168.2.0   0.0.0.0       192.168.2.1   00:01:6C:XXXX
  eth2              192.168.0.12  192.168.0.0   0.0.0.0       192.168.2.1   00:25:4B:XXXX
Interface information for node "rac1"
  Interface Name    IP Address    Subnet        Subnet Gateway  Default Gateway  Hardware Address
  eth0              192.168.2.11  192.168.2.0   0.0.0.0       192.168.2.1   00:01:6CXXXXX
  eth1              192.168.0.11  192.168.0.0   0.0.0.0       192.168.2.1   00:25:4B:XXXX
Check: Node connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?     
  rac2:eth0                       rac1:eth0                       yes            
Result: Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Check: Node connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?     
  rac2:eth2                       rac1:eth1                       yes            
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.
Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect:
rac2 eth0:192.168.2.12
rac1 eth0:192.168.2.11
WARNING:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check passed.
Checking system requirements for 'crs'...
Check: Total memory
  Node Name     Available                 Required                  Comment  
  rac2          2.7GB (2834912KB)         1GB (1048576KB)           passed   
  rac1          2.7GB (2834912KB)         1GB (1048576KB)           passed   
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment  
  rac2          4.58GB (4805204KB)        400MB (409600KB)          passed   
  rac1          10.51GB (11015624KB)      400MB (409600KB)          passed   
Result: Free disk space check passed.
Check: Swap space
  Node Name     Available                 Required                  Comment  
  rac2          2GB (2097144KB)           1.5GB (1572864KB)         passed   
  rac1          3GB (3145720KB)           1.5GB (1572864KB)         passed   
Result: Swap space check passed.
Check: System architecture
  Node Name     Available                 Required                  Comment  
  rac2          i686                      i686                      passed   
  rac1          i686                      i686                      passed   
Result: System architecture check passed.
Check: Kernel version
  Node Name     Available                 Required                  Comment  
  rac2          2.6.31.5-127.fc12.i686.PAE  2.6.9                     passed   
  rac1          2.6.31.5-127.fc12.i686.PAE  2.6.9                     passed   
Result: Kernel version check passed.
Check: Package existence for "make-3.81"
  Node Name                       Status                          Comment        
  rac2                            make-3.81-18.fc12.i686          passed         
  rac1                            make-3.81-18.fc12.i686          passed         
Result: Package existence check passed for "make-3.81".
Check: Package existence for "binutils-2.17.50.0.6"
  Node Name                       Status                          Comment        
  rac2                            binutils-2.19.51.0.14-34.fc12.i686  passed         
  rac1                            binutils-2.19.51.0.14-34.fc12.i686  passed         
Result: Package existence check passed for "binutils-2.17.50.0.6".
Check: Package existence for "gcc-4.1.1"
  Node Name                       Status                          Comment        
  rac2                            gcc-4.4.2-7.fc12.i686           passed         
  rac1                            gcc-4.4.2-7.fc12.i686           passed         
Result: Package existence check passed for "gcc-4.1.1".
Check: Package existence for "libaio-0.3.106"
  Node Name                       Status                          Comment        
  rac2                            libaio-0.3.107-9.fc12.i686      passed         
  rac1                            libaio-0.3.107-9.fc12.i686      passed         
Result: Package existence check passed for "libaio-0.3.106".
Check: Package existence for "libaio-devel-0.3.106"
  Node Name                       Status                          Comment        
  rac2                            libaio-devel-0.3.107-9.fc12.i686  passed         
  rac1                            libaio-devel-0.3.107-9.fc12.i686  passed         
Result: Package existence check passed for "libaio-devel-0.3.106".
Check: Package existence for "libstdc++-4.1.1"
  Node Name                       Status                          Comment        
  rac2                            libstdc++-4.4.2-7.fc12.i686     passed         
  rac1                            libstdc++-4.4.2-7.fc12.i686     passed         
Result: Package existence check passed for "libstdc++-4.1.1".
Check: Package existence for "elfutils-libelf-devel-0.125"
  Node Name                       Status                          Comment        
  rac2                            elfutils-libelf-devel-0.143-1.fc12.i686  passed         
  rac1                            elfutils-libelf-devel-0.143-1.fc12.i686  passed         
Result: Package existence check passed for "elfutils-libelf-devel-0.125".
Check: Package existence for "sysstat-7.0.0"
  Node Name                       Status                          Comment        
  rac2                            sysstat-9.0.4-4.fc12.i686       passed         
  rac1                            sysstat-9.0.4-4.fc12.i686       passed         
Result: Package existence check passed for "sysstat-7.0.0".
Check: Package existence for "compat-libstdc++-33-3.2.3"
  Node Name                       Status                          Comment        
  rac2                            compat-libstdc++-33-3.2.3-68.i686  passed         
  rac1                            compat-libstdc++-33-3.2.3-68.i686  passed         
Result: Package existence check passed for "compat-libstdc++-33-3.2.3".
Check: Package existence for "libgcc-4.1.1"
  Node Name                       Status                          Comment        
  rac2                            libgcc-4.4.2-7.fc12.i686        passed         
  rac1                            libgcc-4.4.2-7.fc12.i686        passed         
Result: Package existence check passed for "libgcc-4.1.1".
Check: Package existence for "libstdc++-devel-4.1.1"
  Node Name                       Status                          Comment        
  rac2                            libstdc++-devel-4.4.2-7.fc12.i686  passed         
  rac1                            libstdc++-devel-4.4.2-7.fc12.i686  passed         
Result: Package existence check passed for "libstdc++-devel-4.1.1".
Check: Package existence for "unixODBC-2.2.11"
  Node Name                       Status                          Comment        
  rac2                            unixODBC-2.2.14-6.fc12.i686     passed         
  rac1                            unixODBC-2.2.14-9.fc12.i686     passed         
Result: Package existence check passed for "unixODBC-2.2.11".
Check: Package existence for "unixODBC-devel-2.2.11"
  Node Name                       Status                          Comment        
  rac2                            unixODBC-devel-2.2.14-6.fc12.i686  passed         
  rac1                            unixODBC-devel-2.2.14-9.fc12.i686  passed         
Result: Package existence check passed for "unixODBC-devel-2.2.11".
Check: Package existence for "glibc-2.5-12"
  Node Name                       Status                          Comment        
  rac2                            glibc-2.11-2.i686               passed         
  rac1                            glibc-2.11-2.i686               passed         
Result: Package existence check passed for "glibc-2.5-12".
Check: Group existence for "dba"
  Node Name     Status                    Comment                
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
  Node Name     Status                    Comment                
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: Group existence check passed for "oinstall".
Check: User existence for "nobody"
  Node Name     Status                    Comment                
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: User existence check passed for "nobody".
System requirement passed for 'crs'
Pre-check for cluster services setup was successful.

So now here is the actual problem:
After the installation and during the run of the root.sh I get:
Failure at final check of Oracle CRS stack.
10
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app/crs/product' is not owned by root. Changing owner to root
The directory '/u01/app/crs' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u01/shared_config/voting_disk
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10

According to this link => http://blog.contractoracle.com/2009/01/failure-at-final-check-of-oracle-crs.html
To recover from a status 10, one must:
check firewall / routing / iptables issues
Now, I have turned iptables off completely (it doesn't even start at boot time), so I know it can't be that.
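To double-check, I verify on both nodes with something like this. firewall_enabled is just my own helper that parses `chkconfig --list iptables` style output; the service/chkconfig commands are the usual FC12 SysV tools.

```shell
#!/bin/sh
# firewall_enabled (my own sketch): reads `chkconfig --list iptables` style
# output on stdin and reports whether the service is enabled in any runlevel.
firewall_enabled() {
    if grep -q ':on'; then
        echo "iptables still enabled at boot"
    else
        echo "iptables disabled"
    fi
}

# Typical use on each node (assumed FC12 SysV tools):
#   service iptables status
#   chkconfig --list iptables | firewall_enabled
echo 'iptables  0:off 1:off 2:off 3:off 4:off 5:off 6:off' | firewall_enabled
# prints "iptables disabled"
```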
ROUTE
[oracle@rac1 clusterware]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
192.168.0.0 * 255.255.255.0 U 1 0 0 eth1
default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
[oracle@rac2 ~]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
192.168.0.0 * 255.255.255.0 U 1 0 0 eth2
default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
[oracle@rac1 clusterware]$ traceroute rac2
traceroute to rac2 (192.168.2.12), 30 hops max, 60 byte packets
1 rac2.localdomain (192.168.2.12) 0.424 ms 0.427 ms 0.096 ms
[oracle@rac1 clusterware]$ traceroute rac2-priv
traceroute to rac2-priv (192.168.0.12), 30 hops max, 60 byte packets
1 rac2-priv.localdomain (192.168.0.12) 1.336 ms 1.238 ms 1.188 ms
[oracle@rac1 clusterware]$ traceroute rac2-vip
traceroute to rac2-vip (192.168.2.112), 30 hops max, 60 byte packets
1 rac1.localdomain (192.168.2.11) 2999.599 ms !H 2999.560 ms !H 2999.523 ms !H
[oracle@rac1 bin]$ ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
Both rac1 and rac2 get the same output above, with the -vip traceroute returning !H (in traceroute, !H, !N, and !P mean host, network, or protocol unreachable). I am assuming this is normal, since the CRS install did not complete successfully and the virtual IP is not bound yet.
I'm pretty sure I have some kind of networking issue here, but I can't put my finger on it. I have tried absolutely everything suggested on the internet that I could find, even deleting /tmp/.oracle and /var/tmp/.oracle, but nothing works. SSH keys for the root and oracle users exist, and I've connected using every possible combination to avoid the first-time SSH prompt, so the oracle user on each node goes directly into rac1/rac2, rac1-priv/rac2-priv, and the actual IPs as well. Any ideas?
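For reference, here is where I have been looking for clues. show_crs_logs is just a helper of my own, and the log locations under the CRS home are my assumption for 11.1; adjust the paths if yours differ.

```shell
#!/bin/sh
# show_crs_logs (sketch): dump the tail of the main Clusterware logs for a
# node, or say which ones are missing. Paths assume an 11.1-style layout:
#   $CRS_HOME/log/<host>/alert<host>.log, .../cssd/ocssd.log, .../crsd/crsd.log
show_crs_logs() {
    crs_home=$1
    host=$2
    for f in "$crs_home/log/$host/alert$host.log" \
             "$crs_home/log/$host/cssd/ocssd.log" \
             "$crs_home/log/$host/crsd/crsd.log"; do
        if [ -f "$f" ]; then
            echo "== $f =="
            tail -20 "$f"
        else
            echo "missing: $f"
        fi
    done
}

# Typical use on each node:
show_crs_logs /u01/app/crs/product/11.1.0/crs "$(hostname -s)"
```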
Edited by: Javier on Dec 30, 2009 12:34 PM
Edited by: Javier on Dec 30, 2009 6:58 PM

The Best Answer

Hello
Note 370605.1 (Clusterware Intermittently Hangs And Commands Fail With CRS-184) says the following:
"This is caused by a cron job that cleans up the /tmp directory, which also removes the Oracle socket files in /tmp/.oracle.
Do not remove /tmp/.oracle or /var/tmp/.oracle or their files while Oracle Clusterware is up."
Best Regards...
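On Fedora, the cleanup is typically the daily tmpwatch cron job. A hedged sketch of the fix, adjusting /etc/cron.daily/tmpwatch to exclude the Oracle socket directories (the exact flags and retention times vary between releases, so check your own copy of the script):

```
# /etc/cron.daily/tmpwatch (sketch): add -x exclusions so the Oracle
# socket directories survive the daily cleanup. The -umc flags and the
# 240-hour retention are typical defaults; verify against your release.
/usr/sbin/tmpwatch -umc \
    -x /tmp/.oracle -x /var/tmp/.oracle -x /tmp/.X11-unix \
    240 /tmp
```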