[root@node01 /]# scinstall
  *** Main Menu ***
    Please select from one of the following (*) options:
      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
        5) Print release information for this cluster node
      * ?) Help with menu options
      * q) Quit
    Option:  1
  *** New Cluster and Cluster Node Menu ***
    Please select from any one of the following options:
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster
        ?) Help with menu options
        q) Return to the Main Menu
    Option:  1
  *** Create a New Cluster ***
    This option creates and configures a new cluster.
    You must use the Java Enterprise System (JES) installer to install the
    Sun Cluster framework software on each machine in the new cluster
    before you select this option.
    If the "remote configuration" option is unselected from the JES
    installer when you install the Sun Cluster framework on any of the new
    nodes, then you must configure either the remote shell (see rsh(1)) or
    the secure shell (see ssh(1)) before you select this option. If rsh or
    ssh is used, you must enable root access to all of the new member
    nodes from this node.
    Press Control-d at any time to return to the Main Menu.
    Do you want to continue (yes/no) [yes]? 
  >>> Typical or Custom Mode <<<
    This tool supports two modes of operation, Typical mode and Custom mode.
    For most clusters, you can use Typical mode. However, you might need
    to select the Custom mode option if not all of the Typical defaults
    can be applied to your cluster.
    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:
        1) Typical
        2) Custom
        ?) Help
        q) Return to the Main Menu
    Option [1]:  2
  >>> Cluster Name <<<
    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.
    What is the name of the cluster you want to establish?  cluster
  >>> Cluster Nodes <<<
    This Sun Cluster release supports a total of up to 16 nodes.
    Please list the names of the other nodes planned for the initial
    cluster configuration. List one node name per line. When finished,
    type Control-D:
    Node name (Control-D to finish):  node01
    Node name (Control-D to finish):  node02
    Node name (Control-D to finish):  ^D
    This is the complete list of nodes:
        node01
        node02
    Is it correct (yes/no) [yes]? 
    Attempting to contact "node02" ... done
    Searching for a remote configuration method ... done
    The Sun Cluster framework is able to complete the configuration
    process without remote shell access.
  >>> Authenticating Requests to Add Nodes <<<
    Once the first node establishes itself as a single node cluster, other
    nodes attempting to add themselves to the cluster configuration must
    be found on the list of nodes you just provided. You can modify this
    list by using claccess(1CL) or other tools once the cluster has been
    established.
    By default, nodes are not securely authenticated as they attempt to
    add themselves to the cluster configuration. This is generally
    considered adequate, since nodes which are not physically connected to
    the private cluster interconnect will never be able to actually join
    the cluster. However, DES authentication is available. If DES
    authentication is selected, you must configure all necessary
    encryption keys before any node will be allowed to join the cluster
    (see keyserv(1M), publickey(4)).
    Do you need to use DES authentication (yes/no) [no]? 
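    [The node-authentication list described above can be changed after the
    cluster is established with claccess(1CL). A hedged sketch; the node
    name "node03" is a placeholder, and subcommands should be verified
    against the claccess(1CL) man page for your release:]

    ```shell
    # Sketch: adjust the node-authentication list on a running cluster.
    # "node03" is illustrative; run these as root on an existing member.

    # Allow an additional machine to add itself to the cluster configuration
    claccess allow -h node03

    # Show which machines are currently allowed to add themselves
    claccess show

    # Lock down the configuration so no new machine may add itself
    claccess deny-all
    ```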
  >>> Minimum Number of Private Networks <<<
    Each cluster is typically configured with at least two private
    networks. Configuring a cluster with just one private interconnect
    provides less availability and will require the cluster to spend more
    time in automatic recovery if that private interconnect fails.
    Should this cluster use at least two private networks (yes/no) [yes]? 
  >>> Point-to-Point Cables <<<
    The two nodes of a two-node cluster may use a directly-connected
    interconnect. That is, no cluster switches are configured. However,
    when there are more than two nodes, this interactive form of
    scinstall assumes that there will be exactly one switch for each
    private network.
    Does this two-node cluster use switches (yes/no) [yes]?  no
  >>> Cluster Transport Adapters and Cables <<<
    You must configure the cluster transport adapters for each node in the
    cluster. These are the adapters which attach to the private cluster
    interconnect.
    Select the first cluster transport adapter for "node01":
        1) e1000g2
        2) e1000g3
        3) Other
    Option:  1
    Adapter "e1000g2" is an Ethernet adapter.
    Searching for any unexpected network traffic on "e1000g2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    The "dlpi" transport type will be set for this cluster.
    Name of adapter on "node02" to which "e1000g2" is connected?  e1000g2
    Select the second cluster transport adapter for "node01":
        1) e1000g2
        2) e1000g3
        3) Other
    Option:  2
    Adapter "e1000g3" is an Ethernet adapter.
    Searching for any unexpected network traffic on "e1000g3" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    Name of adapter on "node02" to which "e1000g3" is connected?  e1000g3
  >>> Network Address for the Cluster Transport <<<
    The cluster transport uses a default network address of 172.16.0.0. If
    this IP address is already in use elsewhere within your enterprise,
    specify another address from the range of recommended private
    addresses (see RFC 1918 for details).
    The default netmask is 255.255.240.0. You can select another netmask,
    as long as it minimally masks all bits that are given in the network
    address.
    The default private netmask and network address result in an IP
    address range that supports a cluster with a maximum of 64 nodes, 10
    private networks and 0 virtual clusters.
    Is it okay to accept the default network address (yes/no) [yes]? 
    Is it okay to accept the default netmask (yes/no) [yes]? 
    Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g3 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g3 >> NOT DUPLICATE ... done
  >>> Set Global Fencing <<<
    Fencing is a mechanism that a cluster uses to protect data integrity
    when the cluster interconnect between nodes is lost. By default,
    fencing is turned on for global fencing, and each disk uses the global
    fencing setting. This screen allows you to turn off the global
    fencing.
    Most of the time, leave fencing turned on. However, turn off fencing
    when at least one of the following conditions is true: 1) Your shared
    storage devices, such as Serial Advanced Technology Attachment (SATA)
    disks, do not support SCSI; 2) You want to allow systems outside your
    cluster to access storage devices attached to your cluster; 3) Sun
    Microsystems has not qualified the SCSI persistent group reservation
    (PGR) support for your shared storage devices.
    If you choose to turn off global fencing now, after your cluster
    starts you can still use the cluster(1CL) command to turn on global
    fencing.
    Do you want to turn off global fencing (yes/no) [no]? 
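    [As the screen above notes, the global fencing setting can still be
    changed after the cluster starts by using cluster(1CL). A hedged
    sketch; the property values shown are from Sun Cluster 3.2 and should
    be verified against the cluster(1CL) man page for your release:]

    ```shell
    # Sketch: inspect and change the global fencing setting on a running
    # cluster. Property values are from Sun Cluster 3.2; verify on yours.

    # Display current cluster-wide properties, including fencing
    cluster show -t global

    # Turn global fencing off (e.g., SATA disks without SCSI reservations)
    cluster set -p global_fencing=nofencing

    # Restore the default path-count-based fencing
    cluster set -p global_fencing=pathcount
    ```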
  >>> Quorum Configuration <<<
    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.
    This screen allows you to disable the automatic selection and
    configuration of a quorum device.
    You have chosen to turn on global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.
    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.
    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes
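    [Because automatic quorum device selection is disabled here, a quorum
    device must be configured manually after both nodes join, using
    clsetup(1CL) or clquorum(1CL). A hedged sketch; the DID device name
    "d4" is a placeholder for a shared disk visible to both nodes:]

    ```shell
    # Sketch: manually configure a shared-disk quorum device once both
    # nodes have joined the cluster. "d4" is a placeholder DID device.

    # List DID devices and pick one that both nodes can access
    cldevice list -v

    # Add the chosen shared disk as the quorum device
    clquorum add d4

    # Take the cluster out of installmode once quorum is configured
    cluster set -p installmode=disabled

    # Verify quorum votes
    clquorum status
    ```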
  >>> Global Devices File System <<<
    Each node in the cluster must have a local file system mounted on
    /global/.devices/node@<nodeID> before it can successfully participate
    as a cluster member. Since the "nodeID" is not assigned until
    scinstall is run, scinstall will set this up for you.
    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global
    devices file system. This file system or partition should be at least
    512 MB in size.
    Alternatively, you can use a loopback file (lofi), with a new file
    system, and mount it on /global/.devices/node@<nodeID>.
    If an already-mounted file system is used, the file system must be
    empty. If a raw disk partition is used, a new file system will be
    created for you.
    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method
    is typically preferred, since it does not require the allocation of a
    dedicated disk slice.
    The default is to use /globaldevices.
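    [If the default /globaldevices file system does not already exist, it
    can be prepared on a dedicated slice before running scinstall. A
    hedged sketch; the slice c0t0d0s3 is a placeholder for a partition of
    at least 512 MB:]

    ```shell
    # Sketch: prepare a dedicated /globaldevices file system before
    # running scinstall. "c0t0d0s3" is a placeholder slice of >= 512 MB.

    newfs /dev/rdsk/c0t0d0s3
    mkdir -p /globaldevices
    mount /dev/dsk/c0t0d0s3 /globaldevices

    # Add to /etc/vfstab so the mount survives reboot (one line):
    # /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /globaldevices  ufs  2  yes  -
    ```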
For node "node01",
    Is it okay to use this default (yes/no) [yes]? 
    Testing for "/globaldevices" on "node01" ... done
For node "node02",
    Is it okay to use this default (yes/no) [yes]? 
    Testing for "/globaldevices" on "node02" ... done
    Is it okay to create the new cluster (yes/no) [yes]? 
    During the cluster creation process, cluster check is run on each of
    the new cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.
    Interrupt cluster creation for cluster check errors (yes/no) [no]? 
  Cluster Creation
    Log file - /var/cluster/logs/install/scinstall.log.1830
    Started cluster check on "node01".
    Started cluster check on "node02".
    cluster check completed with no errors or warnings for "node01".
    cluster check completed with no errors or warnings for "node02".
    Configuring "node02" ... done
    Rebooting "node02" ... done
    Configuring "node01" ... done
    Rebooting "node01" ...
Log file - /var/cluster/logs/install/scinstall.log.1830
Rebooting ...
updating /platform/i86pc/boot_archive
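[Once both nodes have rebooted into the cluster, membership and basic
health can be checked from either node. A hedged sketch; command names
are from the Sun Cluster 3.2 object-oriented CLI:]

```shell
# Sketch: verify the new cluster after both nodes reboot.
# Run as root on either node.

clnode status          # both nodes should report "Online"
cluster status         # overall status of cluster components
clinterconnect status  # private interconnect paths should be online
```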