Running Heartbeat clusters in release 1 compatible configuration is now considered obsolete by the Linux-HA development team. However, it is still widely used in the field, which is why it is documented here in this section.
Advantages. Configuring Heartbeat in R1 compatible mode has some advantages over using CRM configuration. In particular,
- Heartbeat R1 compatible clusters are simple and easy to configure;
- it is fairly straightforward to extend Heartbeat's functionality with custom, R1-style resource agents.
Disadvantages. Disadvantages of R1 compatible configuration, as opposed to CRM configurations, include:
- Cluster configuration must be kept in sync manually between cluster nodes; it is not propagated automatically.
- While node monitoring is available, resource-level monitoring is not. Individual resources must be monitored by an external monitoring system.
- Resource group support is limited to two resource groups. CRM clusters, by contrast, support any number, and also come with a complex resource-level constraint framework.
A further limitation, namely that R1-style configuration restricts cluster size to two nodes (whereas CRM clusters support up to 255), is largely irrelevant for setups involving DRBD, since DRBD itself is limited to two nodes.
In R1-style clusters, Heartbeat keeps its complete configuration in three simple configuration files:
- /etc/ha.d/ha.cf, which contains the global cluster configuration;
- /etc/ha.d/haresources, which contains the resource configuration;
- /etc/ha.d/authkeys, which contains cluster authentication information.
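As a minimal sketch of the global configuration, an /etc/ha.d/ha.cf for a two-node cluster follows this pattern; the node names and network interface here are assumptions for illustration, not taken from the original text:

# /etc/ha.d/ha.cf -- illustrative sketch only
keepalive 1        # interval between heartbeat packets, in seconds
deadtime 30        # declare a peer dead after this many seconds of silence
bcast eth0         # send heartbeats via broadcast on eth0 (assumption)
node alice         # node names must match the output of uname -n
node bob
auto_failback on   # migrate resources back to their home node when it returns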
The following is an example of a Heartbeat R1-compatible resource configuration involving a MySQL database backed by DRBD:
bob drbddisk::mysql \
    Filesystem::/dev/drbd0::/var/lib/mysql::ext3 \
    10.9.42.1 mysql
This resource configuration contains one resource group whose home node (the node where its resources are expected to run under normal circumstances) is named bob. Consequently, this resource group would be considered the local resource group on host bob, whereas it would be the foreign resource group on its peer host.
The resource group includes a DRBD resource named mysql, which will be promoted to the primary role by the cluster manager (specifically, the drbddisk resource agent) on whichever node is currently the active node. Of course, a corresponding resource must exist and be configured in /etc/drbd.conf for this to work.
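A matching resource definition in /etc/drbd.conf might look like the following sketch. The host names, backing disks, node IP addresses, and port are assumptions chosen for illustration; only the resource name (mysql) and the device (/dev/drbd0) come from the example above:

resource mysql {
  protocol C;
  on alice {                    # host names are assumptions
    device    /dev/drbd0;
    disk      /dev/sda7;        # backing block device (assumption)
    address   10.1.1.31:7788;   # "physical" node IP (assumption)
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.1.1.32:7788;
    meta-disk internal;
  }
}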
That DRBD resource translates to the block device named /dev/drbd0, which contains an ext3 filesystem that is to be mounted at /var/lib/mysql (the default location for MySQL data files).
The resource group also contains a service IP address, 10.9.42.1. Heartbeat will make sure that this IP address is configured and available on whichever node is currently active.
Finally, Heartbeat will use the LSB resource agent named mysql in order to start the MySQL daemon, which will then find its data files at /var/lib/mysql and be able to listen on the service IP address, 10.9.42.1.
It is important to understand that the resources listed in the haresources file are always evaluated from left to right when resources are being started, and from right to left when they are being stopped.
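The start and stop sequences Heartbeat derives from the example resource group are roughly equivalent to invoking the R1 resource agent scripts in /etc/ha.d/resource.d by hand, as in this sketch (bare IP addresses in haresources are handled by the IPaddr agent; the exact init script name for MySQL depends on your distribution):

# Starting the resource group: left to right
/etc/ha.d/resource.d/drbddisk mysql start
/etc/ha.d/resource.d/Filesystem /dev/drbd0 /var/lib/mysql ext3 start
/etc/ha.d/resource.d/IPaddr 10.9.42.1 start
/etc/init.d/mysql start    # LSB resource agent

# Stopping the resource group: right to left
/etc/init.d/mysql stop
/etc/ha.d/resource.d/IPaddr 10.9.42.1 stop
/etc/ha.d/resource.d/Filesystem /dev/drbd0 /var/lib/mysql ext3 stop
/etc/ha.d/resource.d/drbddisk mysql stop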
In three-way replication with stacked resources, it is usually desirable to have the stacked resource managed by Heartbeat just as other cluster resources. Then, your two-node cluster will manage the stacked resource as a floating resource that runs on whichever node is currently the active one in the cluster. The third node, which is set aside from the Heartbeat cluster, will have the “other half” of the stacked resource available permanently.
To have a stacked resource managed by Heartbeat, you must first configure it as outlined in the section called “Configuring a stacked resource”.
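As a sketch of that configuration, a stacked resource definition in /etc/drbd.conf follows the pattern below. The third node's host name, backing disk, node IP address, and the port are assumptions for illustration; the stacked device (/dev/drbd1), the lower-level resource name (mysql), and the use of the cluster IP address for stacked replication follow the surrounding text:

resource mysql-U {
  stacked-on-top-of mysql {
    device    /dev/drbd1;
    address   10.9.42.1:7789;   # cluster (service) IP; port is an assumption
  }
  on charlie {                  # the third, standalone node (name assumed)
    device    /dev/drbd1;
    disk      /dev/sda7;        # backing device (assumption)
    address   10.1.1.33:7789;   # third node's IP (assumption)
    meta-disk internal;
  }
}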
The stacked resource is managed by Heartbeat by way of the drbdupper resource agent. That resource agent is distributed, as are all other Heartbeat R1 resource agents, in /etc/ha.d/resource.d. It is to stacked resources what the drbddisk resource agent is to conventional, unstacked resources. drbdupper takes care of managing both the lower-level resource and the stacked resource. Consider the following haresources example, which would replace the one given in the previous section:
bob 10.9.42.1 \
    drbdupper::mysql-U Filesystem::/dev/drbd1::/var/lib/mysql::ext3 \
    mysql
Note the following differences to the earlier example:
- You start the cluster IP address before all other resources. This is necessary because stacked resource replication uses a connection from the cluster IP address to the node IP address of the third node. Lower-level resource replication, by contrast, uses a connection between the “physical” node IP addresses of the two cluster nodes.
- You pass the stacked resource name to drbdupper (in this example, mysql-U).
- You configure the Filesystem resource agent to mount the DRBD device associated with the stacked resource (in this example, /dev/drbd1), not the lower-level one.
A Heartbeat R1-style cluster node may assume control of cluster resources in the following way:
Manual resource takeover. This is the approach normally taken if one simply wishes to test resource migration, or assume control of resources for any reason other than the peer having to leave the cluster. This operation is performed using the following command:
/usr/lib/heartbeat/hb_takeover
On some distributions and architectures, you may be required to enter:
/usr/lib64/heartbeat/hb_takeover
A Heartbeat R1-style cluster node may be forced to give up its resources in several ways.
- Switching a cluster node to standby mode. This is the approach normally taken if one simply wishes to test resource migration, or perform some other activity that does not require the node to leave the cluster. This operation is performed using the following command:
/usr/lib/heartbeat/hb_standby
On some distributions and architectures, you may be required to enter:
/usr/lib64/heartbeat/hb_standby
- Shutting down the local cluster manager instance. This approach is suited for local maintenance operations such as software updates which require that the node be temporarily removed from the cluster, but which do not necessitate a system reboot. It involves shutting down all processes associated with the local cluster manager instance:
/etc/init.d/heartbeat stop
Prior to stopping its services, Heartbeat will gracefully migrate any currently running resources to the peer node. This is the approach to be followed, for example, if you are upgrading DRBD to a new release without also upgrading your kernel.
- Shutting down the local node. For hardware maintenance or other interventions that require a system shutdown or reboot, use a simple graceful shutdown command, such as
poweroff
Since Heartbeat services will be shut down gracefully in the process of a normal system shutdown, the previous paragraph applies to this situation, too. This is also the approach you would use in case of a kernel upgrade (which also requires the installation of a matching DRBD version).