=== Active VServer Failover ===
To perform active vserver failover from one host to another without user intervention, you can use heartbeat. To do this, you must first have a mechanism to actively replicate your vserver filesystem and configuration from one host to the other. This mechanism must be able to provide a consistent filesystem view to either host on demand (though not necessarily to both at the same time). In other words, you can use something like NFS, a clustered filesystem such as OCFS2 or GFS, or a network-replicated block device such as drbd, but you cannot use a periodic copy tool such as rsync, scp or ftp.
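For example, with drbd you would typically define one replicated resource per vserver so that each can fail over on its own. Below is a minimal sketch of such a resource in drbd 7 style ''drbd.conf'' syntax; the host names, backing partitions and addresses are placeholders, not part of this article's setup:

 resource vs_foo {
   protocol C;                      # fully synchronous replication
   syncer { rate 10M; }             # limit resync bandwidth
   on node1 {
     device    /dev/drbd0;          # replicated block device on node1
     disk      /dev/sda7;           # backing partition (placeholder)
     address   192.168.0.1:7788;    # replication link address
     meta-disk internal;
   }
   on node2 {
     device    /dev/drbd0;
     disk      /dev/sda7;
     address   192.168.0.2:7788;
     meta-disk internal;
   }
 }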
===== Organizing your VServer Directories =====
Once you have an active replication method, you will likely want to organize your vserver files so that they all live on one device/filesystem; that way you only need a single replicated device/filesystem and do not need a separate one just for your vserver configuration files. One way to do this is to use a ''/vservers'' mount point and give each vserver its own subdirectory in there: ''/vservers/<server-name>''. If you want to put both the ''/var'' and ''/etc'' parts of your vserver in the vserver's subdirectory and soft link to them, you may be tempted to try this arrangement:
 /vservers/<server-name>/etc
 /vservers/<server-name>/var

 /etc/vservers/<server-name> -> /vservers/<server-name>/etc
 /var/lib/vservers/<server-name> -> /vservers/<server-name>/var
But if you do this and you enable the util-vserver init script, you are likely to run into a chroot barrier problem. The init script sets a chroot barrier on the parent of every vserver's ''var'' directory (in this layout that parent is ''/vservers/<server-name>'', which also contains the ''etc'' directory the configuration symlink points to), so you will see something like this error message:
 vlimit: fstat("/etc/vservers/<server-name>/rlimits"): Permission denied
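If you want to confirm that the barrier is what is blocking access, util-vserver ships the ''showattr'' and ''setattr'' tools for inspecting and changing the flag. A quick check might look like this (clearing the barrier is shown only as a test; the layout below is the real fix, and the exact options may differ between util-vserver releases):

 # display the attributes of the combined directory itself
 showattr -d /vservers/<server-name>
 # temporarily remove the chroot barrier to verify it is the cause
 setattr --~barrier /vservers/<server-name>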
One workaround is simply to put the vserver's ''var'' directory into a subdirectory of the vserver's combined directory, like this:
 /vservers/<server-name>/barrier/var
 /var/lib/vservers/<server-name> -> /vservers/<server-name>/barrier/var
With this arrangement you can replicate the entire ''/vservers'' directory to all hosts, and you will then be able to run each vserver on any host to which ''/vservers'' is replicated.
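Put together, moving an existing vserver named ''foo'' into this layout might look like the following sketch. It assumes the vserver was created in the default util-vserver locations and is stopped while you move it:

 # create the combined, replicated directory for this vserver
 mkdir -p /vservers/foo/barrier
 # move the configuration and the root filesystem into it
 mv /etc/vservers/foo /vservers/foo/etc
 mv /var/lib/vservers/foo /vservers/foo/barrier/var
 # soft link the standard util-vserver locations back to the replicated copies
 ln -s /vservers/foo/etc /etc/vservers/foo
 ln -s /vservers/foo/barrier/var /var/lib/vservers/foo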
===== Filesystem Fail Over =====
Finally, you will need a mechanism to start and stop your vservers on the appropriate hosts; this is where heartbeat comes in. If you do not have a permanently mounted filesystem on each node, for example because you are using a regular filesystem on top of a non-shared block device such as drbd, you will need to configure heartbeat to first provide the ''/vservers'' filesystem on the node which is going to be the active host.
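In the drbd case, what heartbeat does on the active node is essentially what you would otherwise do by hand, roughly the following (the device path and mount point are taken from this article's later examples and depend on your drbd configuration):

 # make this node primary for the replicated resource, then mount it
 drbdadm primary vs_foo
 mount /dev/drbd/vs_foo /vservers/foo
 # start the vserver once its filesystem is available
 vserver foo start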
===== Multiple Vservers and Devices with DRBD 7 =====
If you have multiple vservers which you want to be able to fail over independently from one host to another with drbd 7, you might have a hard time doing this with heartbeat. The drbd agent distributed with heartbeat tends to be focused on drbd 8; if you are using drbd 7, you are expected to be using heartbeat 1, which does not use ocf agents (though it does support multiple independent drbd devices). Instead, you may try this custom [http://www.theficks.name/bin/lib/ocf/drbd drbd ocf agent]. Here is a sample heartbeat configuration for use with this agent:
<primitive id="vserver_foo_drbd" class="ocf" provider="bar" type="drbd"> <instance_attributes id="vserver_foo_drbd_ia"> <attributes> <nvpair id="vserver_foo_drbd_resource" name="drbd_resource" value="vs_foo"/> </attributes> </instance_attributes> </primitive>
===== OCF Provider =====
The ''ocf provider'' is simply a fancy term for the directory name under ''/usr/lib/ocf/resource.d/'' where you place your ocf agent (script). The ocf agents distributed with heartbeat are in the heartbeat subdirectory, and therefore the provider for them is heartbeat. If you are adding a custom agent, you can either put it in the same directory (heartbeat) and use the heartbeat provider, or you can create a new ''provider'' (bar in the examples) and a directory for that provider: ''/usr/lib/ocf/resource.d/bar''.
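For the custom agents used in this article, with ''bar'' as the provider name, installation would look something like this (the file names are simply whatever you saved the downloaded agents as):

 # create a directory for the custom provider and install the agents into it
 mkdir -p /usr/lib/ocf/resource.d/bar
 cp drbd VServer /usr/lib/ocf/resource.d/bar/
 chmod 755 /usr/lib/ocf/resource.d/bar/drbd /usr/lib/ocf/resource.d/bar/VServer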
===== VServer Fail Over =====
Once you have configured your filesystem for failover, you can configure the vservers themselves for failover. If you want to control more than one vserver with heartbeat, you may use the following [http://www.theficks.name/bin/lib/ocf/VServer vserver ocf agent] to do so. Be sure to specify a colocation constraint between the filesystem and your vservers, and an ordering constraint so that the filesystem is mounted before the vservers are started (a sketch of both constraints follows the primitive below). Here is a sample ocf vserver configuration for a vserver named ''foo'' and an ocf provider named ''bar'':
<primitive id="vserver_foo" class="ocf" type="VServer" provider="bar" restart_type="restart"> <instance_attributes id="vserver_foo_ia"> <attributes> <nvpair id="vserver_foo_name" name="vserver" value="foo"/> </attributes> </instance_attributes> </primitive>
===== Complete VServer DRBD Example Heartbeat Config =====
The simplest way to combine related resources in heartbeat is to use a group. With a group you do not have to specify colocation and ordering constraints; they are implied by the order of the resources within the group. To use the above DRBD and VServer ocf resource agents together in a group, your heartbeat configuration will look something like this:
 <group id="vserver_aaa">
 
   <primitive id="vserver_foo_drbd" class="ocf" provider="bar" type="drbd">
     <instance_attributes id="vserver_foo_drbd_ia">
       <attributes>
         <nvpair id="vserver_foo_drbd_resource" name="drbd_resource" value="vs_foo"/>
       </attributes>
     </instance_attributes>
   </primitive>
 
   <primitive id="vserver_foo_fs" class="ocf" provider="heartbeat" type="Filesystem">
     <instance_attributes id="vserver_foo_fs_ia">
       <attributes>
         <nvpair id="vserver_foo_fs_dev" name="device" value="/dev/drbd/vs_foo"/>
         <nvpair id="vserver_foo_fs_mount" name="directory" value="/vservers/foo"/>
         <nvpair id="vserver_foo_fs_type" name="fstype" value="ext3"/>
       </attributes>
     </instance_attributes>
   </primitive>
 
   <primitive id="vserver_foo" class="ocf" type="VServer" provider="bar" restart_type="restart">
     <instance_attributes id="vserver_foo_ia">
       <attributes>
         <nvpair id="vserver_foo_name" name="vserver" value="foo"/>
       </attributes>
     </instance_attributes>
   </primitive>
 
 </group>
Note the use of the heartbeat Filesystem agent to mount your drbd device before the vserver is started.
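Once the group is configured, you can sanity-check the setup by watching the cluster state and forcing a failover. The commands below are the standard heartbeat 2 CRM tools, but the exact options depend on your heartbeat version, and the node name is a placeholder:

 # show the current resource state once
 crm_mon -1
 # put the currently active node into standby to force the group to fail over
 crm_standby -U node1 -v true
 # bring the node back online afterwards
 crm_standby -U node1 -v false

[[Category:Documentation]]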