Removing GRID cluster nodes (reference: Oracle Database 11g RAC manual, 2nd edition)
Current nodes in the GRID cluster:
[grid@node1 ~]$ olsnodes
node1
node2
node3
node4
node5
node6
Goal: remove nodes node3 and node4, leaving the GI cluster running with 4 nodes.
1. Check the current status of the cluster services
[grid@node1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FLASH.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.GRIDDG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.LTDG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.ORADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ONLINE ONLINE node3 Started
ONLINE ONLINE node4 Started
ONLINE ONLINE node5 Started
ONLINE ONLINE node6 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
OFFLINE OFFLINE node3
OFFLINE OFFLINE node4
OFFLINE OFFLINE node5
OFFLINE OFFLINE node6
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.registry.acfs
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node3
ONLINE ONLINE node4
ONLINE ONLINE node5
ONLINE ONLINE node6
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node4
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node3
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node6
ora.cvu
1 ONLINE ONLINE node3
ora.ltdb.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.node3.vip
1 ONLINE ONLINE node3
ora.node4.vip
1 ONLINE ONLINE node4
ora.node5.vip
1 ONLINE ONLINE node5
ora.node6.vip
1 ONLINE ONLINE node6
ora.oadb.db
1 ONLINE ONLINE node6 Open
ora.oc4j
1 ONLINE ONLINE node2
ora.scan1.vip
1 ONLINE ONLINE node4
ora.scan2.vip
1 ONLINE ONLINE node3
ora.scan3.vip
1 ONLINE ONLINE node6
[grid@node1 ~]$
2. Remove nodes node3 and node4 from the cluster (GNS is not enabled)
(1) On node1, as root, run the following commands to unpin nodes node3 and node4:
[root@node1 ~]# cd /u01/app/11.2.0.4/grid/bin/
[root@node1 bin]# ./crsctl unpin css -n node3
CRS-4667: Node node3 successfully unpinned.
[root@node1 bin]# ./crsctl unpin css -n node4
CRS-4667: Node node4 successfully unpinned.
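As an extra sanity check (not part of the original steps), the pin state of each node can be confirmed before and after unpinning; `olsnodes -n -t` prints the node number and the Pinned/Unpinned state:

```shell
# Run as the grid user on any surviving node, from $GRID_HOME/bin.
# -n prints the node number, -t the Pinned/Unpinned state.
./olsnodes -n -t
# Nodes that report "Unpinned" are ready for the deconfig step below.
```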
(2) As root, run the rootcrs.pl script from the $GRID_HOME/crs/install directory on nodes node3 and node4 to deconfigure the cluster resources.
[root@node3 ~]# cd /u01/app/11.2.0.4/grid/crs/install/
[root@node3 install]# ./rootcrs.pl -deconfig -force
[root@node4 ~]# cd /u01/app/11.2.0.4/grid/crs/install/
[root@node4 install]# ./rootcrs.pl -deconfig -force
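After the deconfig completes, it is worth confirming (an additional check, not in the original steps) that the Clusterware stack is actually down on each deconfigured node before continuing:

```shell
# On node3/node4 after rootcrs.pl -deconfig -force has finished:
# verify the Clusterware stack is no longer running on this node.
/u01/app/11.2.0.4/grid/bin/crsctl check crs
# On a deconfigured node this should report that Oracle High
# Availability Services cannot be contacted (e.g. CRS-4639),
# rather than the usual "online" status lines.
```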
[grid@node1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FLASH.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.GRIDDG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.LISTENER.lsnr
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.LTDG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.ORADG.dg
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.asm
ONLINE ONLINE node1 Started
ONLINE ONLINE node2 Started
ONLINE ONLINE node5 Started
ONLINE ONLINE node6 Started
ora.gsd
OFFLINE OFFLINE node1
OFFLINE OFFLINE node2
OFFLINE OFFLINE node5
OFFLINE OFFLINE node6
ora.net1.network
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.ons
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
ora.registry.acfs
ONLINE ONLINE node1
ONLINE ONLINE node2
ONLINE ONLINE node5
ONLINE ONLINE node6
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node5
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node2
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node6
ora.cvu
1 ONLINE ONLINE node5
ora.ltdb.db
1 ONLINE ONLINE node1 Open
2 ONLINE ONLINE node2 Open
ora.node1.vip
1 ONLINE ONLINE node1
ora.node2.vip
1 ONLINE ONLINE node2
ora.node5.vip
1 ONLINE ONLINE node5
ora.node6.vip
1 ONLINE ONLINE node6
ora.oadb.db
1 ONLINE ONLINE node6 Open
ora.oc4j
1 ONLINE ONLINE node2
ora.scan1.vip
1 ONLINE ONLINE node5
ora.scan2.vip
1 ONLINE ONLINE node2
ora.scan3.vip
1 ONLINE ONLINE node6
[grid@node1 ~]$
(3) As root, run the following commands on node1 (can be omitted on 11.2.0.4):
[root@node1 ~]# cd /u01/app/11.2.0.4/grid/bin/
[root@node1 bin]# ./crsctl delete node -n node3
CRS-4661: Node node3 successfully deleted.
[root@node1 bin]# ./crsctl delete node -n node4
CRS-4661: Node node4 successfully deleted.
(4) As the software owner, run the following command on each node being removed to update the Oracle inventory:
[grid@node3 bin]$ pwd
/u01/app/11.2.0.4/grid/oui/bin
[grid@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.4/grid "CLUSTER_NODES={node3}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 16386 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@node4 bin]$ pwd
/u01/app/11.2.0.4/grid/oui/bin
[grid@node4 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.4/grid "CLUSTER_NODES={node4}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 16386 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
(5) On nodes node3 and node4, as the software owner, run the following command to deinstall the Oracle Grid home:
[grid@node3 ~]$ cd /u01/app/11.2.0.4/grid/deinstall/
[grid@node3 deinstall]$ ./deinstall -local
[grid@node4 ~]$ cd /u01/app/11.2.0.4/grid/deinstall/
[grid@node4 deinstall]$ ./deinstall -local
(6) On a remaining node, as the software owner, run the following command to update the Oracle inventory on all remaining nodes:
[grid@node1 bin]$ pwd
/u01/app/11.2.0.4/grid/oui/bin
[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.4/grid "CLUSTER_NODES={node1,node2,node5,node6}" CRS=TRUE
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 15673 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
(7) Verify the node removal (the node list passed to cluvfy is the set of deleted nodes):
[grid@node1 bin]$ cluvfy stage -post nodedel -n node3,node4
Performing post-checks for node removal
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
[grid@node1 bin]$ olsnodes
node1
node2
node5
node6
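As a further check (not part of the original steps), the node list recorded in the central inventory can be inspected directly; the path below is the inventory location reported by runInstaller earlier:

```shell
# Inspect the node list recorded for the Grid home in the central
# inventory; node3 and node4 should no longer appear.
grep -A6 'NODE_LIST' /u01/app/oraInventory/ContentsXML/inventory.xml
```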
3. Extend the ltdb database from 2 nodes (node1, node2) to 4 nodes (node1, node2, node5, node6):
[oracle@node1 ~]$ srvctl config database -d ltdb
Database unique name: ltdb
Database name: ltdb
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: +ORADG/ltdb/spfileltdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ltdb
Database instances: ltdb1,ltdb2
Disk Groups: ORADG,FLASH
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@node1 ~]$ srvctl status database -d ltdb
Instance ltdb1 is running on node node1
Instance ltdb2 is running on node node2
Add the instances via dbca --> Instance Management --> Add Instance.
------------------------ Errors encountered during the operation ------------------------
[grid@node1 ~]$ srvctl add instance -d ltdb -i ltdb5 -n node5
PRCD-1051 : Failed to add instance to database ltdb
PRCS-1011 : Failed to modify server pool ltdb
PRCS-1014 : Server node5 is already part of server pool ltdb
[grid@node1 ~]$ crsctl status serverpool -p
NAME=ora.ltdb
IMPORTANCE=1
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=node1 node2 node5 node6
PARENT_POOLS=Generic
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
The database is currently in testing; this issue will be resolved later.
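One plausible reading of the PRCS-1014 error (an assumption, not verified here): dbca had already registered node5 in the ora.ltdb server pool, and possibly the instance itself, so the later `srvctl add instance` attempt collides with existing configuration. The following diagnostic commands would confirm the current state before attempting any fix:

```shell
# Hypothetical diagnostics for PRCS-1014 (assumption: dbca already
# registered node5 in the ora.ltdb server pool).
# 1. Check which instances are already configured for the database:
srvctl config database -d ltdb
# 2. Check which servers the pool currently holds:
crsctl status serverpool ora.ltdb
# If an ltdb instance already exists on node5, no srvctl add is needed;
# otherwise, editing an ora.* server pool directly is not a supported
# operation, and Oracle Support guidance should be consulted first.
```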