= Advanced DRBD mount issues =
This page is currently under construction - think before using it.
This HowTo covers the problem of multiple vServers depending on multiple DRBD-mounted devices, as discussed on the mailing list in August 2005.
== Problem ==
You run more than one vServer guest and have more than one DRBD device on your host system. You are now unable to unmount the DRBD devices and always get messages about "filesystem in use".
This happens because whenever you start a new vServer, the kernel's mount table is copied into the new namespace. The copy also carries references to the DRBD mounts, which then cannot be released and the devices not shut down.
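To see which contexts are still holding such references, you can walk through the running contexts and inspect their mount tables. The following is only a diagnostic sketch, built from the same vnamespace/vserver-stat tools used by the scripts further down this page; the paths under /usr/local/sbin are taken from those scripts and may differ on your installation.
<pre>
#!/bin/bash
# show, for every running context, which DRBD mounts its namespace still references
VNSPACE=/usr/local/sbin/vnamespace
for CTX in `/usr/local/sbin/vserver-stat | tail -n +3 | awk '{print $1}'`; do
        echo "--- context $CTX ---"
        # read the mount table as seen from inside this context's namespace
        $VNSPACE -e $CTX cat /proc/mounts | grep drbd
done
</pre>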
== Solution 1: mount per vServer ==
This approach is the preferred one if you have a setup like mine:
I run the root partition of the server on a non-DRBD device and mount one DRBD partition as data storage inside each vServer.
All you have to do is mount the DRBD device via the vServer's fstab:
<vserver>/etc/fstab:
<pre>
none            /proc     proc    defaults            0 0
none            /tmp      tmpfs   size=16m,mode=1777  0 0
none            /dev/pts  devpts  gid=5,mode=620      0 0
/dev/drbd/www1  /data     ext3    defaults            0 0
</pre>
This results in the DRBD device being mounted on /data inside the vServer, visible only within that namespace. The mount is therefore not copied into other vServers, so when you shut this instance down, the DRBD device is freed immediately and can be shut down as well.
Note: You cannot bind-mount on this mount point from inside the fstab because of the different visibility of the nodes - if you need bind mounts on this device, see the other approaches.
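A quick way to verify the isolation, assuming a guest named www1 (the name is just an example):
<pre>
# on the host: the guest's /data mount does not show up
grep drbd /proc/mounts

# inside the guest: the DRBD device is mounted on /data
vserver www1 exec grep drbd /proc/mounts
</pre>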
== Solution 2: script-based positive mounting ==
vServers have a script architecture that lets you run your own code during startup and shutdown.
Positive mounting is similar to the approach above, with the difference that the mount operation is done via a script.
Put your mount command into the file /etc/vservers/<servername>/scripts/prepre-start - note that the file must not have the x-bit set!
<pre>
#!/bin/bash
mount -t ext3 /dev/drbd/www1 /vservers/www1/data
</pre>
Note that the mount point is given relative to the root server's filesystem!
If all your vServers are identical and you want to do this for every guest, you can put the prepre-start file into /etc/vservers/.defaults/scripts; you can grab the name of the vServer from the shell argument $2, as in the sketch below.
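A minimal sketch of such a generic prepre-start, assuming the DRBD device and the guest directory are both named after the guest (this naming scheme is an assumption - adjust it to yours):
<pre>
#!/bin/bash
# /etc/vservers/.defaults/scripts/prepre-start
# $2 carries the name of the guest being started
NAME=$2
# assumed layout: /dev/drbd/<guest> holds the data for /vservers/<guest>/data
mount -t ext3 /dev/drbd/$NAME /vservers/$NAME/data
</pre>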
As the mounting is done prior to the processing of the fstab, but already runs in the right namespace, you can now do bind mounts inside the vServer's fstab:
<pre>
/vservers/www1/data/webtree  /webtree  none  bind  0 0
/vservers/www1/data/var      /var      none  bind  0 0
</pre>
Note that the mount source is again relative to the root filesystem, while the target is relative to the guest root.
== Solution 3: script-based unmounting (a bit of brute force...) ==
Using the script architecture mentioned above, we now force an unmount of certain DRBD devices when firing up the server.
I got a draft version of the attached script from someone on the mailing list. It worked, but I didn't like the approach and worked out the other ones - it might still be helpful if you use one DRBD device inside multiple vServers, in which case the ideas above won't work for you.
The script first tries to detect the mount points occupied by the current vServer, then runs through all mounts and unmounts every one that is not related to the current vServer. Please treat the script as an idea - it might not work out of the box for you, because you have to adjust the detection of the occupied mounts.
The code must go into the prepre-start file as mentioned above.
<pre>
#!/bin/bash
# published version done by Oliver Welter, mail-at-oliwel.de
# based on a script from martin rueegg, metaworx.ch, mrueegg-at-metaworx.ch
# provided without warranty, as is, free to modify and copy with this notice kept intact

DF=/bin/df
CUT=/bin/cut
TAIL=/usr/bin/tail
GREP=/bin/grep
CAT=/bin/cat

# I don't mirror the vServers themselves, just the data, so all vServers
# share one volume - most people will have to adjust this
vs_dir=/vservers
vs_etc=/etc/vservers
vs_data=/data/www1

# get the device the vServer is located on
vs_device=`$DF -kh $vs_dir | $CUT -f1 -d' ' | $TAIL -n 1`

# get the mount point the vServer config dir is located on
vs_etc_mount=`$DF -kh $vs_etc | $TAIL -n 1 | $GREP -Eo '[^[:space:]]+$'`

# get the device the data volume is located on
vs_data_device=`$DF -kh $vs_data | $CUT -f1 -d' ' | $TAIL -n 1`

for i in `$CAT /proc/mounts \
        | $CUT -f1,2 -d' ' --output-delimiter='|' \
        | $GREP -E '^/dev/drbd/[^|]+\|'`; do

        # extract the device
        device=`echo $i | $CUT -f1 -d'|'`
        # extract the mount point
        mountpoint=`echo $i | $CUT -f2 -d'|'`

        # unmount the file system unless it is
        # - the device the vServer is on
        # - the mount point the vServer config dir is on
        # - the device the data volume is on
        if ! [ ."$vs_device" == ."$device" -o ."$vs_etc_mount" == ."$mountpoint" -o ."$vs_data_device" == ."$device" ]; then
                echo "umount -nv $mountpoint"
                umount -nv $mountpoint || exit $?
        fi
done
</pre>
== Solution 4: Modifying Heartbeat's Drbddisk Script ==
This solution/principle is valid for the combination of vServer + DRBD + Heartbeat, where the latter is used to transfer virtual servers between the nodes of a [http://en.wikipedia.org/wiki/High_availability HA] cluster. Out of the box, Heartbeat will reboot the cluster node if it cannot unmount the vServer mount point while shutting down a virtual server. Unfortunately, this happens rather often if there is more than one virtual server on a cluster. Every time a vServer is shut down, the vServer itself is stopped, then its file system is unmounted, and the underlying DRBD device is set to "secondary" state (to allow the other node of the cluster to take over the DRBD block device). Now, if any references to the vServer mount point remain in the namespaces of other running virtual servers (copies from the master namespace made when a vServer is started), switching the DRBD device into "secondary" mode will fail, and, alas, so will unmounting the mount point. There goes your cluster node...
To circumvent this problem I wrote a little script, which should be hooked into the Heartbeat DRBD control script "/usr/etc/ha.d/resource.d/drbddisk", right before the line "exec $DRBDADM secondary $RES" in the "stop" branch.
It removes the mount point of the virtual server that is about to be shut down from all running virtual server contexts:
<pre>
VNSPACE=/usr/local/sbin/vnamespace

# walk through all running contexts (skip the header lines of vserver-stat)
for CTX in `/usr/local/sbin/vserver-stat | tail -n +3 | awk '{print $1}'`
do
        # find where the resource is mounted inside this context's namespace
        MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"
        echo Unmounting mount point $MPOINT from within context $CTX
        ### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!
        $VNSPACE -e $CTX /bin/umount $MPOINT || continue
done

# here shall be the original line then (uncommented, of course ;-))
# exec $DRBDADM secondary $RES
</pre>
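For orientation, the tail of the "stop" branch in drbddisk then looks roughly like this (a sketch only - the surrounding case statement and the rest of the script are abbreviated here):
<pre>
stop)
        # ... hook from above: unmount $RES from all running vServer contexts ...
        exec $DRBDADM secondary $RES
        ;;
</pre>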