Getting High with Lenny

The aim here is to set up some highly available services on Debian Lenny (which, at this moment, October 1st, is still due to be released).


There has been a lot of buzz for a while now about virtualisation and high availability, and while VServer is very well capable of this job, the number of documented examples is a little lacking compared to some other virtualisation techniques.

I prefer to use VServer for the "virtualisation" because of its configurability, shared memory and CPU resources, and basically its raw speed. DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.

The main attempt here is to give a single working example without going too much into the details of every option. The scenario is relatively simple, but different variations can of course be made. For this set-up we will have one single large DRBD device, with two machines in a primary/secondary configuration, and we use LVM on top of DRBD to provide some partitioning to place the VServers on. Also note that I will be using the R1 style configuration for Heartbeat; R1 style can be considered deprecated when using Heartbeat 2, but I could not get my head around the R2 XML configuration, so if you want R2 you might want to have a look here.
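To keep the layering clear, this is roughly what the resulting storage stack will look like on the primary node, using the names introduced below:

    /VSERVERS/web         <- ext3 mount point for the guest
    /dev/drbdvg0/web      <- logical volume "web"
    drbdvg0               <- LVM volume group
    /dev/drbd0            <- DRBD device, used as the LVM physical volume
    /dev/cciss/c0d0p6     <- local partition, replicated to node2 by DRBD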


The partitioning looks as follows (sizes in MB):

     Name      Flags   Part Type   FS Type                 Size (MB)
     c0d0p1    Boot    Primary     Linux ext3               10001.95
     c0d0p5            Logical     Linux swap / Solaris      1003.49
     c0d0p6            Logical     Linux                   280325.77


<note> machine1 will use the following names:

 * hostname = node1
 * IP address = 192.168.1.100
 * is primary for r0 on disk c0d0p6
 * physical volume on r0 is /dev/drbd0
 * volume group on /dev/drbd0 is called drbdvg0

</note>

<note> machine2 will use the following names:

 * hostname = node2
 * IP address = 192.168.1.200
 * is secondary for r0 on disk c0d0p6

The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0. </note>
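The DRBD and Heartbeat configurations below refer to the machines as node1 and node2, so both nodes should be able to resolve those names. A minimal sketch of the relevant /etc/hosts entries, assuming you do not resolve these names via DNS (put them on both nodes):

    # crossover link addresses used by DRBD
    192.168.1.100   node1
    192.168.1.200   node2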

Loadbalance-Failover the network cards

Maybe not very specific to VServer, Heartbeat or DRBD, but load balancing your network cards for failover is always useful. Some more in-depth details by Carla Schroder can be found here.


We need both mii-tool and ethtool (mii-tool is part of net-tools, which is already installed):

    apt-get install ethtool ifenslave-2.6

To load the modules with the correct options at boot time:

    nano /etc/modprobe.d/arch/i386

    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100

Then set the interfaces eth0 and eth1 as slaves to bond0; eth2 is also set here for the crossover cable carrying the DRBD connection to the fail-over machine:

    nano /etc/network/interfaces

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto bond0
    iface bond0 inet static
        address 123.123.123.100
        netmask 255.255.255.0
        network 123.123.123.0
        broadcast 123.123.123.255
        gateway 123.123.123.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 123.123.123.45
        dns-search example.com
        up /sbin/ifenslave bond0 eth0 eth1
        down ifenslave -d bond0 eth0 eth1

    # The crossover interface for the DRBD connection
    auto eth2
    iface eth2 inet static
        address 192.168.1.100
        netmask 255.255.255.0

<note> Done this way, the system needs to be rebooted before the changes take effect; otherwise you would have to load the drivers and ifdown eth0 and eth1 first before ifup bond0. But I'm planning to install a new kernel anyway in the next step. </note>
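After the reboot you can check that the bond actually came up and that both slaves are attached; the bonding driver exposes its status under /proc, and mii-tool (mentioned above) reports the link state of the slave interfaces:

    cat /proc/net/bonding/bond0
    mii-tool eth0 eth1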

Install the Vserver packages

    apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools

As usual a reboot is needed to boot into this kernel.

<note> With Etch I found that the VServer kernel often ended up second in the GRUB list; not so in Lenny, but to be safe check the kernel stanza in /boot/grub/menu.lst, especially when doing this from a remote location. </note>
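For reference, the stanza for the VServer kernel in /boot/grub/menu.lst should look roughly like the sketch below; the exact version string (2.6.26 here) and the root device are assumptions based on a standard Lenny install and the partition table above, so check them against what the package actually installed:

    title   Debian GNU/Linux, kernel 2.6.26-1-vserver-686-bigmem
    root    (hd0,0)
    kernel  /boot/vmlinuz-2.6.26-1-vserver-686-bigmem root=/dev/cciss/c0d0p1 ro
    initrd  /boot/initrd.img-2.6.26-1-vserver-686-bigmem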

Install DRBD8, LVM2 and Heartbeat

    apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2

<note> Not sure about this, but DRBD always needed to be compiled against the running kernel; is this still the case with the kernel-specific modules? I did not check, but it would be good to know in case of a kernel upgrade. </note>

Build DRBD8

Although packages are available in the repositories for DRBD8, the purpose of these packages is that you can easily build the module from source against the running kernel.

To do this we just issue this command:

    m-a a-i drbd8

And to load it into the kernel:

    depmod -ae
    modprobe drbd
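To confirm that the module is really loaded (the version banner in /proc/drbd only exists once the module is in the kernel):

    lsmod | grep drbd
    cat /proc/drbd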

Configure DRBD8

Now that we have the essentials installed we can configure DRBD. Again, I will not go into the details of all the options here, so check out the default config and http://www.drbd.org/ to find a match for your set-up.

    mv /etc/drbd.conf /etc/drbd.conf.original
    nano /etc/drbd.conf

global {

       usage-count no;

}

common {

 syncer { rate 100M; }                                                                                            

}

resource r0 {

 protocol C;
 handlers {
   pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
   pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
   local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
   outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
 }
 startup {
   degr-wfc-timeout 120;    # 2 minutes.
 }
 disk {
   on-io-error   detach;
 }
 net {                   
   after-sb-0pri disconnect;
   after-sb-1pri disconnect;
   after-sb-2pri disconnect;
   rr-conflict disconnect;
 }
   
 syncer {
   rate 100M;
   al-extents 257;
 }


       on node1 {
               device     /dev/drbd0;
               disk       /dev/cciss/c0d0p6;
               address    192.168.1.100:7788;
               meta-disk  internal;
       }
       on node2 {
               device     /dev/drbd0;
               disk       /dev/cciss/c0d0p6;
               address    192.168.1.200:7788;
               meta-disk  internal;
       }

}
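Before handing the device over to DRBD it does not hurt to let drbdadm parse the file; drbdadm dump prints the resource back as parsed and complains about syntax errors:

    drbdadm dump r0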

These permissions let the drbd-peer-outdater program (referenced in the handlers section above, and run as group haclient by Heartbeat) execute drbdsetup and drbdmeta:

    chgrp haclient /sbin/drbdsetup
    chmod o-x /sbin/drbdsetup
    chmod u+s /sbin/drbdsetup
    chgrp haclient /sbin/drbdmeta
    chmod o-x /sbin/drbdmeta
    chmod u+s /sbin/drbdmeta

On both nodes, create the meta data and bring the resource up.

On node1:

    drbdadm create-md r0

On node2:

    drbdadm create-md r0

On node1:

    drbdadm up r0

On node2:

    drbdadm up r0

<note warning> The following should only be done on the node that will be the primary. </note>

On node1:

    drbdadm -- --overwrite-data-of-peer primary r0


Watching the synchronisation:

    watch cat /proc/drbd

should show you something like this:

    version: 8.0.13 (api:86/proto:86)
    GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07
     0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
        ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0
        [===>................] sync'ed: 22.1% (208411/267331)M
        finish: 4:04:44 speed: 14,472 (12,756) K/sec
        resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172
        act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102
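You can generally keep working while the initial sync runs in the background, and you can query the resource state at any time; once the sync finishes, /proc/drbd should report ds:UpToDate/UpToDate. A quick check (both are standard drbdadm sub-commands in DRBD 8):

    drbdadm state r0     # roles, should print Primary/Secondary on node1
    drbdadm dstate r0    # disk states, UpToDate/UpToDate once synced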



Configure LVM2

<note important> LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same, this will lead to errors where LVM reads and writes the same data on both devices. So we limit it to scanning /dev/drbd devices only; do the following on both nodes. </note>

    cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original
    nano /etc/lvm/lvm.conf

   #filter = [ "a/.*/" ]
   filter = [ "a|/dev/drbd|", "r|.*|" ]

To re-scan with the new settings, on both nodes:

    vgscan

Create the Physical Volume

The following only needs to be done on the node that is the primary!!

On node1:

    pvcreate /dev/drbd0

Create the Volume Group

The following only needs to be done on the node that is the primary!!

On node1:

    vgcreate drbdvg0 /dev/drbd0

Create the Logical Volume

Yes, again only on the node that is primary!!!

For this example about 50GB, which leaves plenty of space to expand the volumes or to add extra volumes later on (lvcreate sizes are in MB by default, so -L50000 is roughly 50GB).

On node1:

    lvcreate -L50000 -n web drbdvg0

Then we put a file system on the logical volume:

    mkfs.ext3 /dev/drbdvg0/web

Create the directory where we want to mount the VServers:

    mkdir -p /VSERVERS/web

and mount the logical volume on the mount point:

    mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/
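At this point you can sanity-check the whole stack with the standard LVM reporting commands and df; a quick verification sketch (exact sizes will differ on your hardware):

    pvs                    # /dev/drbd0 should appear in volume group drbdvg0
    vgs                    # drbdvg0 should show plenty of free space
    lvs                    # the logical volume "web" should be listed
    df -h /VSERVERS/web    # the freshly mounted ext3 file system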

Get informed

Of course we want to be informed later on by Heartbeat in case a node goes down, so we install postfix to send the mail.

This should be done on both nodes

    apt-get install postfix mailx

and go for the defaults: "internet site" and "node1.example.com".

We don't want postfix to listen on all interfaces:

    nano /etc/postfix/main.cf

Change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the VServers later:

    inet_interfaces = loopback-only
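Postfix only picks up the inet_interfaces change on a full restart. Afterwards port 25 should be bound on 127.0.0.1 only, which you can verify with netstat (from net-tools, installed by default); these commands are a quick check, not part of the original setup:

    /etc/init.d/postfix restart
    # should show a listener on 127.0.0.1:25 and none on 0.0.0.0:25
    netstat -ltn | grep ':25'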
