Proxmox VE 2.0 + DRBD cluster installation

Introduction

Main purpose of this configuration

  • A reliable virtualization platform based on only two hardware nodes, with the ability to use online migration

 [Diagram: Proxmox + DRBD Cluster]

Minimum requirements

  • Two PC/Servers with:
    1. AMD-V or VT-x support
    2. At least 2GB RAM
    3. 2 HDD (first for Proxmox, second for DRBD)
    4. Single 1Gbit/s network adapter
    5. Accessible NTP server (for example: ntp.company.lan)

Recommended requirements

  • In addition to minimum requirements:
    1. one or two extra network adapters in each PC/server for DRBD traffic, connected directly PC-to-PC (without any switches); with two adapters, use round-robin bonding (see the sketch after this list)
    2. one extra network adapter for virtual machines
    3. a fast RAID array with a battery-backed write cache (BBU) instead of the second HDD (/dev/sdb)
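
With two dedicated adapters, the round-robin bond on each node might look like the fragment below. This is a minimal sketch for Debian squeeze (the base of Proxmox VE 2.0); the interface names eth1/eth2, the 10.10.2.0/24 subnet and the ifenslave-2.6 package are assumptions, not part of the original setup. If DRBD runs over this link, the address lines in r0.res (see below) should point at the bond addresses instead of 10.10.1.x.

    # prerequisite: aptitude install ifenslave-2.6
    # /etc/network/interfaces fragment (hypothetical dedicated DRBD link)
    auto bond0
    iface bond0 inet static
        address 10.10.2.1        # use 10.10.2.2 on the second node
        netmask 255.255.255.0
        bond-slaves eth1 eth2    # the two back-to-back ports
        bond-mode balance-rr     # round-robin
        bond-miimon 100          # link monitoring interval in ms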

Cluster installation

  1. Install two nodes (virt1.company.lan / 10.10.1.1 and virt2.company.lan / 10.10.1.2) and log in as user “root”
  2. Update both nodes (highly recommended): aptitude update && aptitude full-upgrade
  3. On virt1:
    1. add “server ntp.company.lan” to /etc/ntp.conf
    2. /etc/init.d/ntp restart
    3. ntpdc -p
    4. pvecm create cluster1
    5. pvecm status
  4. On virt2:
    1. add “server ntp.company.lan” to /etc/ntp.conf
    2. /etc/init.d/ntp restart
    3. ntpdc -p
    4. pvecm add 10.10.1.1
  5. DONE!
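
For reference, the same sequence as plain shell commands (the hostnames, IP address and NTP server name are the examples used above):

    # on virt1
    echo "server ntp.company.lan" >> /etc/ntp.conf
    /etc/init.d/ntp restart
    ntpdc -p                  # verify that the NTP peer is reachable
    pvecm create cluster1     # create the Proxmox cluster
    pvecm status              # check cluster status

    # on virt2
    echo "server ntp.company.lan" >> /etc/ntp.conf
    /etc/init.d/ntp restart
    ntpdc -p
    pvecm add 10.10.1.1       # join the cluster via virt1's address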

DRBD installation

The following steps must be performed identically on both nodes.

  1. Create partition /dev/sdb1 using “fdisk /dev/sdb”; the partitions MUST be exactly the same size on both nodes (a scripted alternative is sketched after this list).
  2. aptitude install drbd8-utils
  3. Replace the contents of /etc/drbd.d/global_common.conf with:
    global { usage-count no; }
    common { syncer { rate 30M; verify-alg md5; } }
  4. Create the file /etc/drbd.d/r0.res with:
    resource r0 {
        protocol C;
        startup {
            wfc-timeout 0;    # non-zero might be dangerous
            degr-wfc-timeout 60;
            become-primary-on both;
        }
        net {
            cram-hmac-alg sha1;
            shared-secret "oogh2ouch0aitahNBLABLABLA";
            allow-two-primaries;
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on virt1 {
            device /dev/drbd0;
            disk /dev/sdb1;
            address 10.10.1.1:7788;
            meta-disk internal;
        }
        on virt2 {
            device /dev/drbd0;
            disk /dev/sdb1;
            address 10.10.1.2:7788;
            meta-disk internal;
        }
    }
  5. /etc/init.d/drbd start
  6. drbdadm create-md r0
  7. drbdadm up r0
  8. cat /proc/drbd # to check if r0 is available
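
As an alternative to interactive fdisk in step 1, the partition can be created non-interactively; a minimal sketch using sfdisk (assumes /dev/sdb is empty and may be overwritten, run on both nodes):

    echo ',,83' | sfdisk /dev/sdb    # one Linux partition spanning the whole disk
    sfdisk -l /dev/sdb               # print the partition table; compare sizes on both nodes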

The following steps must be performed on ONE node only.

  1. drbdadm -- --overwrite-data-of-peer primary r0
  2. watch cat /proc/drbd # to monitor synchronization process
  3. eventually both nodes become primary and up to date (Primary/Primary, UpToDate/UpToDate); there is no reason to wait for the (very long) initial sync to finish, because /dev/drbd0 is already usable on both nodes, so we can go to the next step and create LVM on top of DRBD
  4. DONE!
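
A quick way to confirm the result, in addition to /proc/drbd:

    drbdadm role r0      # should report Primary/Primary once both nodes are promoted
    drbdadm dstate r0    # UpToDate/UpToDate after the initial sync has finished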

LVM on top of DRBD configuration

The following steps must be performed identically on both nodes.

  1. change /etc/lvm/lvm.conf so that LVM scans the DRBD device but not its backing partition /dev/sdb1 (otherwise the same physical volume would be detected twice):
    --- /etc/lvm/lvm.conf.orig    2012-03-09 12:58:48.000000000 +0400
    +++ /etc/lvm/lvm.conf    2012-04-06 18:00:32.000000000 +0400
    @@ -63,7 +63,8 @@
     
     
     	# By default we accept every block device:
    -	filter = [ "a/.*/" ]
    +	#filter = [ "a/.*/" ]
    +	filter = [ "r|^/dev/sdb1|", "a|^/dev/sd|", "a|^/dev/drbd|" ,"r/.*/" ]
     
     	# Exclude the cdrom drive
     	# filter = [ "r|/dev/cdrom|" ]

The following steps must be performed on any single node.

  1. pvcreate /dev/drbd0
  2. pvscan # in order to check
  3. vgcreate drbdvg /dev/drbd0
  4. pvscan # in order to check
  5. DONE!
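
Optionally, verify that LVM works on top of DRBD with a throw-away logical volume (the name lvtest and the 1G size are arbitrary examples):

    lvcreate -n lvtest -L 1G drbdvg    # create a test logical volume
    lvs                                # it should appear in volume group drbdvg
    lvremove -f drbdvg/lvtest          # clean up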

Create first Virtual Machine

  1. go to https://10.10.1.1:8006 (the Proxmox VE 2.0 web interface) and log in as root
  2. Data Center → Storage → Add → LVM group
    • ID: drbd
    • Volume group: drbdvg
    • Shared: yes
  3. Create VM (top right corner)
    1. choose appropriate settings for the new VM until you reach the Hard Disk tab
    2. Hard Disk → Storage: drbd
    3. Finish creation
  4. DONE! From now on we can start the VM, install an operating system and play with online migration
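
The same can be done from the command line with the qm tool; a rough sketch (the VM ID 100, the 32 GB disk size and the bridge name vmbr0 are examples, adjust them to your setup):

    qm create 100 --name testvm --memory 1024 --net0 e1000,bridge=vmbr0 --virtio0 drbd:32
    qm start 100
    qm migrate 100 virt2 --online    # live-migrate the running VM to the second node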
