2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
5) Print release information for this cluster node
* q) Quit
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
q) Return to the Main Menu
Sun Cluster framework software on each machine in the new cluster
before you select this option.
installer when you install the Sun Cluster framework on any of the new
nodes, then you must configure either the remote shell (see rsh(1)) or
the secure shell (see ssh(1)) before you select this option. If rsh or
ssh is used, you must enable root access to all of the new member
nodes from this node.
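If ssh is the transport, root logins over ssh must be permitted on the member nodes. A hedged sketch, assuming Solaris 10 with the SMF-managed ssh service (adjust for your release and security policy):

```shell
# Check whether sshd currently permits root logins; the stock Solaris
# default is "no".
grep '^PermitRootLogin' /etc/ssh/sshd_config
# After editing /etc/ssh/sshd_config to read "PermitRootLogin yes",
# restart the service so the change takes effect:
svcadm restart svc:/network/ssh:default
```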
For most clusters, you can use Typical mode. However, you might need
to select the Custom mode option if not all of the Typical defaults
can be applied to your cluster.
modes, select the Help option from the menu.
2) Custom
q) Return to the Main Menu
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.
cluster configuration. List one node name per line. When finished,
type Control-D:
Node name (Control-D to finish): node02
Node name (Control-D to finish): ^D
node02
process without remote shell access.
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
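The authentication list mentioned above can be adjusted later with claccess(1CL). A hedged sketch; "node03" is a hypothetical hostname, not a node from this transcript:

```shell
claccess list                 # show the nodes currently allowed to join
claccess allow -h node03      # permit node03 to add itself
claccess deny -h node03       # revoke that permission again
```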
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).
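If DES authentication is chosen, the Secure RPC keys must exist before any node joins. A minimal sketch of the key setup, assuming the standard Solaris Secure RPC tools (newkey(1M), keylogin(1)); hostnames are hypothetical:

```shell
# On the system that administers the publickey(4) database, create a
# DES key pair for each prospective cluster node:
newkey -h node01
newkey -h node02
# On each node, ensure keyserv(1M) is running, then decrypt and store
# root's secret key with the keyserver:
keylogin -r
```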
networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.
interconnect. That is, no cluster switches are configured. However,
when there are greater than two nodes, this interactive form of
scinstall assumes that there will be exactly one switch for each
private network.
cluster. These are the adapters which attach to the private cluster
interconnect.
2) e1000g3
3) Other
Verification completed. No traffic was detected over a 10 second
sample period.
2) e1000g3
3) Other
Verification completed. No traffic was detected over a 10 second
sample period.
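The adapter names offered by scinstall can be cross-checked against the datalinks the system actually reports. A hedged sketch, assuming Solaris 10 dladm(1M):

```shell
dladm show-link     # list configured datalinks (e.g. e1000g2, e1000g3)
dladm show-dev      # physical devices with their link state and speed
```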
this IP address is already in use elsewhere within your enterprise,
specify another address from the range of recommended private
addresses (see RFC 1918 for details).
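Whether a candidate base address falls inside the 172.16.0.0/12 private block recommended by RFC 1918 reduces to a range check on the first two octets. A plain-shell sketch; the addresses below are illustrative, not values from this cluster:

```shell
# True when the dotted-quad address lies in 172.16.0.0 - 172.31.255.255,
# one of the three RFC 1918 private ranges.
in_172_16_12() {
  IFS=. read -r a b c d <<< "$1"
  [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ]
}
in_172_16_12 172.16.4.0 && echo "private"   # prints "private"
in_172_16_12 192.0.2.1  || echo "public"    # prints "public"
```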
as long as it minimally masks all bits that are given in the network
address.
address range that supports a cluster with a maximum of 64 nodes, 10
private networks and 0 virtual clusters.
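The "minimally masks all bits" rule above means that ANDing the netmask with the network address must leave the address unchanged. A small shell sketch of that check, using example values rather than anything scinstall computed here:

```shell
# True when every bit set in the network address is also set in the
# netmask, i.e. addr AND mask == addr, octet by octet.
covers() {  # covers <network-address> <netmask>
  IFS=. read -r a1 a2 a3 a4 <<< "$1"
  IFS=. read -r m1 m2 m3 m4 <<< "$2"
  [ $((a1 & m1)) -eq "$a1" ] && [ $((a2 & m2)) -eq "$a2" ] &&
  [ $((a3 & m3)) -eq "$a3" ] && [ $((a4 & m4)) -eq "$a4" ]
}
covers 172.16.0.0 255.255.240.0 && echo ok   # a /20 covers 172.16.0.0
covers 172.16.0.0 255.240.0.0   && echo ok   # so does the /12 itself
```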
Plumbing network address 172.16.0.0 on adapter e1000g3 >> NOT DUPLICATE ... done
Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done
Plumbing network address 172.16.0.0 on adapter e1000g3 >> NOT DUPLICATE ... done
when the cluster interconnect between nodes is lost. By default,
fencing is turned on for global fencing, and each disk uses the global
fencing setting. This screen allows you to turn off the global
fencing.
when at least one of the following conditions is true: 1) Your shared
storage devices, such as Serial Advanced Technology Attachment (SATA)
disks, do not support SCSI; 2) You want to allow systems outside your
cluster to access storage devices attached to your cluster; 3) Sun
Microsystems has not qualified the SCSI persistent group reservation
(PGR) support for your shared storage devices.
starts you can still use the cluster(1CL) command to turn on global
fencing.
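Turning global fencing back on later is done with cluster(1CL), as noted above. A hedged sketch; the property name and values shown are from Sun Cluster 3.2 and should be verified against your release:

```shell
cluster show -t global | grep -i fencing   # inspect the current setting
cluster set -p global_fencing=prefer3      # re-enable global fencing
```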
default, scinstall selects and configures a shared disk quorum device
for you.
configuration of a quorum device.
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.
to use a quorum device that is not a shared disk, you must instead use
clsetup(1CL) to manually configure quorum once both nodes have joined
the cluster for the first time.
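Manual quorum configuration after both nodes have joined can be driven interactively through clsetup, or directly with clquorum(1CL). A hedged sketch; "d4" is a hypothetical DID device name, not one from this installation:

```shell
clquorum add d4       # register a shared disk as a quorum device
clquorum status       # confirm the device and its vote count
```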
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
system, and mount it on /global/.devices/node@<nodeid>.
empty. If a raw disk partition is used, a new file system will be
created for you.
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
Is it okay to use this default (yes/no) [yes]?
Is it okay to use this default (yes/no) [yes]?
the new cluster nodes. If cluster check detects problems, you can
either interrupt the process or check the log files after the cluster
has been established.
Started cluster check on "node02".
cluster check completed with no errors or warnings for "node02".
Rebooting "node02" ... done
Rebooting "node01" ...
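Once the nodes come back up, the installation and cluster check logs can be reviewed. A hedged sketch; the path below is where Sun Cluster 3.2 scinstall typically writes its logs, with a per-run numeric suffix:

```shell
ls /var/cluster/logs/install/
grep -i -e error -e warn /var/cluster/logs/install/scinstall.log.*
```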