When installing 10g RAC on Linux 5, you will often run into the error "libpthread.so.0: cannot open shared object file", which prevents vipca from running. There are two ways to deal with it:
Method 1

Ignore the error and continue with the installation, then apply the 10.2.0.4 (or later) patchset and run vipca manually afterwards to complete the VIP configuration, since this bug is fixed in 10.2.0.4.
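A minimal sketch of that manual run after patching (assuming $CRS_HOME points to the Clusterware home; run as root on the last node, with DISPLAY pointing at a reachable X server):

# export DISPLAY=<x-display:0>
# $CRS_HOME/bin/vipca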
Method 2

Manual configuration

First confirm the network configuration:
# ./oifcfg getif
eth0 172.21.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
# ./oifcfg iflist
eth0 172.21.1.0
eth1 10.10.10.0
If the output is not correct, configure the interfaces with the following commands:
# ./oifcfg setif -global eth0/172.21.1.0:public
# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
Then edit vipca and srvctl, search for LD_ASSUME_KERNEL, and comment out the lines shown below:
arch=`uname -m`
#       if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
#       then
#            LD_ASSUME_KERNEL=2.4.19
#            export LD_ASSUME_KERNEL
#       fi
Then run ./vipca. Both methods rely on the same underlying fix.
Oracle explains this error in the following note:
10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures) [ID 414163.1]







Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.3 - Release: 10.2 to 10.2
Linux x86
Generic Linux
Linux x86-64
***Checked for relevance on 04-Aug-2010***



Symptoms:

When installing 10gR2 RAC on Oracle Enterprise Linux 5, RHEL5, or SLES10, there are three issues that users must be aware of.

Issue #1: To install 10gR2, you must first install the base release, which is 10.2.0.1. Because these OS versions are newer than the installer expects, you should use the following command to invoke the installer:

$ runInstaller -ignoreSysPrereqs        // This will bypass the OS check //

Issue #2: At the end of root.sh on the last node, vipca will fail to run with the following error:

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading
shared libraries: libpthread.so.0: cannot open shared object file:
No such file or directory

Also, srvctl will show similar output if the workaround below is not implemented.

Issue #3: After working around Issue #2 above, vipca will fail with the following error if the VIP IPs are in a non-routable range [10.x.x.x, 172.(16-31).x.x, or 192.168.x.x]:

# vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]



Cause:

These releases of the Linux kernel fix an old bug in Linux threading that Oracle had worked around by setting LD_ASSUME_KERNEL in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.
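A quick way to see the cause in isolation (an illustration, not taken from the note): on these distributions the old LinuxThreads compatibility libraries that LD_ASSUME_KERNEL=2.4.19 selects are no longer shipped, so any dynamically linked binary started with that setting fails the same way:

$ LD_ASSUME_KERNEL=2.4.19 /bin/ls
/bin/ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
$ /bin/ls          # without the variable, the same binary runs normally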



Solution:

If you have not yet run root.sh on the last node, implement the workaround for issue #2 below and then run root.sh (you may skip the vipca portion at the bottom of this note).
If you have a non-routable IP range for the VIPs, you will also need the workaround for issue #3, and then run vipca manually.

To work around Issue #2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the IF statement around line 120 of the script, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi

unset LD_ASSUME_KERNEL         <<<== Line to be added

 

Similarly for srvctl (in both the CRS and, when installed, the RDBMS and ASM bin directories on all nodes), unset LD_ASSUME_KERNEL by adding one line; around line 168 the script should look like this:

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL

unset LD_ASSUME_KERNEL          <<<== Line to be added

Remember to re-edit these files on all nodes after applying the 10.2.0.2 or 10.2.0.3 patchsets, as those patchsets still include these settings, which are unnecessary on OEL5, RHEL5, and SLES10:

<CRS_HOME>/bin/vipca
<CRS_HOME>/bin/srvctl
<RDBMS_HOME>/bin/srvctl
<ASM_HOME>/bin/srvctl

This issue was raised with development and is fixed in the 10.2.0.4 patchset.
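A quick check after patching (a sketch; $CRS_HOME, $RDBMS_HOME and $ASM_HOME are assumed to point at the respective homes): if grep still prints LD_ASSUME_KERNEL lines, the file needs to be re-edited.

for f in $CRS_HOME/bin/vipca $CRS_HOME/bin/srvctl \
         $RDBMS_HOME/bin/srvctl $ASM_HOME/bin/srvctl
do
    # print any remaining LD_ASSUME_KERNEL references with their line numbers
    grep -n "LD_ASSUME_KERNEL" "$f"
done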

Note that we explicitly unset LD_ASSUME_KERNEL rather than merely commenting out its setting, to handle the case where the user has it set in their environment (login shell).
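To see why the explicit unset matters, consider a session where the variable is already exported (an illustration; the java path is the one from the error above):

$ export LD_ASSUME_KERNEL=2.4.19
$ /home/oracle/crs/oracle/product/10/crs/jdk/jre/bin/java -version
/home/oracle/crs/oracle/product/10/crs/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

$ unset LD_ASSUME_KERNEL        # what the added line does inside vipca/srvctl
$ /home/oracle/crs/oracle/product/10/crs/jdk/jre/bin/java -version        # now loads normally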

 

To work around issue #3 (vipca failing on non-routable VIP IP ranges, whether run manually or during root.sh): if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca did not complete successfully. In that case, skip ahead in this note and run vipca manually, then return to the installer and cluvfy will succeed. Otherwise, you may configure the interfaces for RAC manually using the oifcfg command as root, as in the following example (from any node):

<CRS_HOME>/bin # ./oifcfg setif -global eth0/192.168.1.0:public
<CRS_HOME>/bin # ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
<CRS_HOME>/bin # ./oifcfg getif
 eth0 192.168.1.0 global public
 eth1 10.10.10.0 global cluster_interconnect

 

The goal is to get the output of "oifcfg getif" to include both the public and cluster_interconnect interfaces; of course, substitute the IP addresses and interface names from your own environment. To get the proper IPs for your environment, run this command:

<CRS_HOME>/bin # ./oifcfg iflist
eth0 192.168.1.0
eth1 10.10.10.0

 


Running VIPCA:
After implementing the above workaround(s), you should be able to invoke vipca manually (as root, from the last node) and configure the VIP IPs via the GUI interface.

<CRS_HOME>/bin # export DISPLAY=<x-display:0>
<CRS_HOME>/bin # ./vipca

Make sure the DISPLAY environment variable is set correctly and that you can open xclock or other X applications from that shell.
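For example, a quick test from the same shell (xclock is assumed to be installed; any small X application will do):

<CRS_HOME>/bin # xclock &        # if the clock window appears, vipca's GUI will display as well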

Once vipca completes, all of the Clusterware resources (VIP, GSD, ONS) will be started; there is no need to re-run root.sh, since vipca is the last step in root.sh.

 

To verify the Clusterware resources are running correctly:

<CRS_HOME>/bin # ./crs_stat -t
Name           Type        Target  State   Host
------------------------------------------------------------
ora....ux1.gsd application ONLINE  ONLINE  raclinux1
ora....ux1.ons application ONLINE  ONLINE  raclinux1
ora....ux1.vip application ONLINE  ONLINE  raclinux1
ora....ux2.gsd application ONLINE  ONLINE  raclinux2
ora....ux2.ons application ONLINE  ONLINE  raclinux2
ora....ux2.vip application ONLINE  ONLINE  raclinux2

You may now proceed with the rest of the RAC installation.



References:
http://space.itpub.net/8797129/viewspace-694738
http://cs.felk.cvut.cz/10gr2/relnotes.102/b15659/toc.htm#CJABAIIF
http://blog.chinaunix.net/uid-7589639-id-2921631.html