[20151110]Oracle Direct NFS Client.txt

Oracle 11g introduced a new feature called Direct Network File System (Oracle Direct NFS): an NFS client built into the Oracle kernel that improves
performance when an instance stores files on NFS, and that also rounds out the NFS-based RAC solution. A conventional NFS client is supplied by the
operating system vendor and is not tuned specifically for Oracle data file IO. With the built-in Oracle Direct NFS client, the database accesses files
on the NFS server directly, avoiding the extra overhead of the OS kernel NFS stack. Oracle claims the resulting performance gain exceeds 40% in DSS
environments and 10% in OLTP environments (see the Oracle white paper "Oracle Database 11g Direct NFS Client" for details).

The Direct NFS client is configured through $ORACLE_HOME/dbs/oranfstab. The oranfstab file can contain server, path, export and mount parameters,
with the following meanings (an example follows the list):

    Server: the NFS server name
    Path: up to 4 network paths to the NFS server, given as IP addresses or host names
    Export: the exported path on the NFS server
    Mount: the local mount point of the NFS file system
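
For reference, an oranfstab entry using more than one network path would look roughly like this (the second IP address is made up purely for illustration; my actual file in step 3 below uses a single path):

server: 192.168.101.115
path: 192.168.101.115
path: 192.168.102.115
export: /u01/nfs  mount: /mnt/ramdisk/nfs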

Before Direct NFS is actually enabled, the NFS file system should already be mounted and accessible via the regular kernel NFS client. To enable the
Direct NFS client, the standard Oracle Disk Manager (ODM) library must also be replaced with the NFS ODM library that supports Direct NFS. This is done
by pointing the standard ODM library name at the NFS ODM library with a symbolic link; note that this change can only be made, and only takes effect, while the instance is shut down.
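
As an aside, 11.2 also ships make targets that switch the ODM library for you, which can be used instead of creating the link by hand (I did not use them in this test, so treat the following as a pointer rather than a verified recipe):

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on     # enable the Direct NFS client library
$ make -f ins_rdbms.mk dnfs_off    # switch back to the default ODM library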

-- Note: I tested this once before and it did not work, and I never figured out why. My only guess is that the Linux version of that test environment was too old (rh 4.3);
-- to install 11g on it I had upgraded a few packages myself, so perhaps something broke along the way. Since I had no real need for it, I never tested it again.

1. Create the symbolic link:
-- Note: do this with the database shut down.
$ cd $ORACLE_HOME/lib
$ ls -l  libodm11.so
lrwxrwxrwx 1 oracle oinstall 12 Aug 17 15:58 libodm11.so -> libodmd11.so
-- (If the old libodm11.so link is still in place, it may need to be removed first, otherwise ln -s complains that the file exists.)
$ ln -s libnfsodm11.so libodm11.so
$ ls -l  libodm11.so
lrwxrwxrwx 1 oracle oinstall 14 Nov 10 07:51 libodm11.so -> libnfsodm11.so
$ mkdir /mnt/ramdisk/nfs
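-- (To revert to the stock ODM library later, shut the instance down again and point the link back at libodmd11.so, e.g.:
--  $ cd $ORACLE_HOME/lib ; rm libodm11.so ; ln -s libodmd11.so libodm11.so )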

2. Set up the NFS server (IP=192.168.101.115):
-- On the NFS server, IP=192.168.101.115
# mkdir /u01/nfs
# chmod 777 /u01/nfs
-- Create the /etc/exports file and add:
/u01/nfs *(rw,no_root_squash,insecure)
-- Note: I suspect this is where my earlier attempt went wrong; back then I had set it to /u01/nfs *(rw,sync,no_root_squash).
-- Start the services.
# service rpcbind restart
Stopping rpcbind:                                          [  OK  ]
Starting rpcbind:                                          [  OK  ]
-- Note: I am on CentOS 6.2, which uses rpcbind in place of portmap.
# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
# showmount -e 192.168.101.115
Export list for 192.168.101.115:
/u01/nfs *
-- On the database machine, check that the NFS export is accessible:
# mount -t nfs 192.168.101.115:/u01/nfs /mnt/ramdisk/nfs/
# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
192.168.101.115:/u01/nfs on /mnt/ramdisk/nfs type nfs (rw,addr=192.168.101.115)
-- OK, the NFS export is set up and accessible.
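-- To make this client mount survive a reboot, an /etc/fstab entry along these lines could be added (just a sketch, reusing the mount options I try in step 6 below):
192.168.101.115:/u01/nfs  /mnt/ramdisk/nfs  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0  0 0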

3. Create the oranfstab file:
$ cd /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs 
$ cat oranfstab 
server: 192.168.101.115 
path: 192.168.101.115 
export: /u01/nfs  mount: /mnt/ramdisk/nfs

4. Start the database:
-- The alert*.log now contains the following:
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
...

5. Start testing:
SYS@book> select * from v$dnfs_servers; 
no rows selected
SYS@book> select * from v$dnfs_files;
no rows selected
-- No files listed is expected, but the empty v$dnfs_servers is what puzzled me; perhaps that is where my earlier attempt went wrong, and I did not press on.
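-- In hindsight the empty v$dnfs_servers is probably normal at this point: my guess is that the dNFS mount is only established once the instance actually opens a file under the NFS path (the alert*.log excerpt further down shows the mount happening at ALTER DATABASE OPEN when datafiles already live there). Assuming v$dnfs_channels behaves the same way, it can be checked too:
SYS@book> select * from v$dnfs_channels;
-- I would expect this to stay empty until a datafile on the NFS mount is opened.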
SYS@book> create tablespace dnfs datafile '/mnt/ramdisk/nfs/dnfs01.dbf' size 2M; 
Tablespace created.
SYS@book> column filename format a40
SYS@book> select * from v$dnfs_files; 
FILENAME                                   FILESIZE       PNUM     SVR_ID 
---------------------------------------- ---------- ---------- ---------- 
/mnt/ramdisk/nfs/dnfs01.dbf                 2105344         10          1
SYS@book> select * from v$dnfs_servers;
        ID SVRNAME                        DIRNAME                           MNTPORT    NFSPORT      WTMAX      RTMAX 
---------- ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- 
         1 192.168.101.115                /u01/nfs                            56759       2049     524288     524288 
-- OK! It worked this time; I really do not know where the problem was last time. The alert*.log records the following:
Tue Nov 10 09:09:10 2015 
create tablespace dnfs datafile '/mnt/ramdisk/nfs/dnfs01.dbf' size 2M 
Tue Nov 10 09:09:11 2015 
Direct NFS: channel id [0] path [192.168.101.115] to filer [192.168.101.115] via local [] is UP 
Direct NFS: channel id [1] path [192.168.101.115] to filer [192.168.101.115] via local [] is UP 
Completed: create tablespace dnfs datafile '/mnt/ramdisk/nfs/dnfs01.dbf' size 2M
-- This confirms it succeeded this time.
SYS@book> drop tablespace dnfs including contents and datafiles cascade constraints; 
Tablespace dropped.
-- Overall it feels quite slow; let me change the mount options and test again.
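-- Before changing anything, the per-process dNFS counters can also be checked to confirm the IO really went through Direct NFS rather than the kernel NFS mount (a sketch; column names as I recall them on 11.2, output not captured here):
SYS@book> select pnum, nfs_read, nfs_write, nfs_commit, nfs_mount from v$dnfs_stats where nfs_read > 0 or nfs_write > 0;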

6. Change the mount options and test again:
# umount /mnt/ramdisk/nfs 
# mount -t nfs 192.168.101.115:/u01/nfs /mnt/ramdisk/nfs/ -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=0
SYS@book> column SVRNAME format a30
SYS@book> column DIRNAME format a30 
SYS@book> select * from v$dnfs_servers; 
        ID SVRNAME                        DIRNAME                           MNTPORT    NFSPORT      WTMAX      RTMAX 
---------- ------------------------------ ------------------------------ ---------- ---------- ---------- ---------- 
         1 192.168.101.115                /u01/nfs                            56759       2049     524288     524288
SYS@book> create tablespace dnfs datafile '/mnt/ramdisk/nfs/dnfs01.dbf' size 2M;
Tablespace created.
SYS@book> drop tablespace dnfs including contents and datafiles cascade constraints;
Tablespace dropped.
-- No noticeable difference. I restarted and tested again as well; the result was the same.
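-- To put a number on the "feels slow" impression, a simple timing comparison could be run against a local path (just a sketch; the 100M size is arbitrary and the local datafile path below is hypothetical, substitute a real local filesystem directory):
SYS@book> set timing on
SYS@book> create tablespace dnfs datafile '/mnt/ramdisk/nfs/dnfs01.dbf' size 100M;
SYS@book> drop tablespace dnfs including contents and datafiles;
SYS@book> create tablespace dnfs_local datafile '/u01/app/oracle/oradata/dnfs_local01.dbf' size 100M;
SYS@book> drop tablespace dnfs_local including contents and datafiles;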

-- With datafiles already present on the NFS mount, the alert*.log at startup shows Direct NFS being used:
Tue Nov 10 09:20:10 2015 
ALTER DATABASE OPEN 
Direct NFS: attempting to mount /u01/nfs on filer 192.168.101.115 defined in oranfstab 
Direct NFS: channel config is: 
     channel id [0] local [] path [192.168.101.115] 
Direct NFS: mount complete dir /u01/nfs on 192.168.101.115 mntport 56759 nfsport 2049 
Direct NFS: channel id [0] path [192.168.101.115] to filer [192.168.101.115] via local [] is UP 
Direct NFS: channel id [1] path [192.168.101.115] to filer [192.168.101.115] via local [] is UP
SYS@book> @ &r/spid
       SID    SERIAL# SPID   C50 
---------- ---------- ------ -------------------------------------------------- 
       580          5 60110  alter system kill session '580,5' immediate;
# lsof -i -P -n | grep 2049
oracle    60088  oracle   32u  IPv4 12090792      0t0  TCP 192.168.100.78:25917->192.168.101.115:2049 (ESTABLISHED) 
oracle    60088  oracle   33u  IPv4 12094081      0t0  TCP 192.168.100.78:52324->192.168.101.115:2049 (ESTABLISHED) 
oracle    60090  oracle   32u  IPv4 12097630      0t0  TCP 192.168.100.78:21175->192.168.101.115:2049 (ESTABLISHED) 
oracle    60090  oracle   33u  IPv4 12094082      0t0  TCP 192.168.100.78:36396->192.168.101.115:2049 (ESTABLISHED) 
oracle    60110  oracle   10u  IPv6 12090793      0t0  UDP *:20496 
oracle    60110  oracle   32u  IPv4 12097629      0t0  TCP 192.168.100.78:8344->192.168.101.115:2049 (ESTABLISHED)
# ps -ef | egrep "60080|60090|60110" | grep -v grep
oracle   60080     1  0 09:20 ?        00:00:00 ora_diag_book 
oracle   60090     1  0 09:20 ?        00:00:00 ora_lgwr_book 
oracle   60110 59964  0 09:20 ?        00:00:01 oraclebook (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
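
-- Side note: to tie these OS processes back to the dNFS views, my understanding (not verified here) is that PNUM in v$dnfs_files/v$dnfs_channels corresponds to v$process.PID, so a join like this should map the channels to the SPIDs seen in lsof:
SYS@book> select f.filename, f.pnum, p.spid, p.program from v$dnfs_files f, v$process p where p.pid = f.pnum;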