Proxmox VE

By jbayer - Last updated: Tuesday, June 7, 2011 - 4 Comments

This article will cover the Proxmox VE product in some depth.

1.  Intro
2.  Initial install
3.  Setting up a cluster
4.  Convert server to RAID-1
5.  Template Naming Conventions
6.  Installing a Lucid container
7.  Installing a Red Hat, CentOS or Scientific Linux container
8.  Add additional storage to Proxmox VE
9.  Additional templates
10. Additional reference

 

1.  Intro

Proxmox VE is a great product. It lets you take full advantage of your hardware by providing both a fully-virtualized environment (KVM) and a container-based environment (OpenVZ) for those servers which don't need a complete virtual machine. Proxmox VE lets you take an existing system and repurpose it without any significant expense.

However, IMHO it has one major flaw: it doesn't support any sort of software RAID. RAID is absolutely critical in a production environment, and given the low cost of hardware these days, there is no excuse not to protect your systems with at least a minimal RAID-1 setup.

The Proxmox team does not support any type of software RAID, and following these instructions will essentially make your system ineligible for support. One way around this is to go with a true hardware RAID solution; unfortunately, most good hardware RAID controllers add significantly to the cost of a system.

To address this, I've written the attached script, which will take a basic Proxmox VE installation and convert it to a RAID-1 setup.

 

2.  Initial Install

When doing the initial install, you can specify the sizes of the root partition and the swap space at the boot prompt by typing:

linux maxroot=15gb swapspace=4gb

This would create a 15 GB root partition and 4 GB of swap space.

The Proxmox VE install is very simple and needs no additional explanation. I would suggest that if you have multiple drives in the system, you do the initial install with only the boot drive connected. I've seen cases where the installer seems to get confused about the boot sectors.

When the install is completed, you are ready to go. However, if you want to be able to migrate containers, you will need to downgrade the kernel to version 2.6.18 using the following instructions:

http://pve.proxmox.com/wiki/Proxmox_VE_Kernel

aptitude update                      # refresh the package lists
aptitude safe-upgrade                # bring the system up to date
aptitude install proxmox-ve-2.6.18   # install the 2.6.18 kernel branch

Then modify /boot/grub/menu.lst so that the new kernel is the default. Check the menu entries first; on a typical install the 2.6.18 kernel ends up as entry 1, in which case:

sed -i 's/^default[[:space:]]*0/default 1/' /boot/grub/menu.lst

3. Setting up a cluster

These instructions are taken from the Proxmox site at:

http://pve.proxmox.com/wiki/Proxmox_VE_Cluster

Install Proxmox on all of the systems that will be part of the cluster. Make sure that each Proxmox server has a unique hostname. Once installed, first create the cluster on the master by using:

pveca -c

To check the state of the cluster:

pveca -l

On the slave nodes, use the following command to add them to the cluster:

pveca -a -h IP-ADDRESS-MASTER

These instructions are summarized in this download:

  Basic installation & configuration of a Proxmox cluster (1.2 KiB)

 

4.  Convert server to RAID-1

I relied extensively on the following two webpages:

http://www.petercarrero.com/content/2010/07/31/adding-software-raid-proxmox-ve-install

http://layer0.de/~kai/howto/proxmox/howto_proxmox_raid.html

However, those pages only give the basic commands and don't take any oddball situations into account, such as the boot drives not being sda and sdb, or other drives being present in the system.

My script also checks to see if you want to be able to migrate containers. Container migration doesn't work with any kernel later than 2.6.18, so the script will install that kernel for you if you like.

  Convert Proxmox to RAID-1 (5.0 KiB)

 

5. Template Naming Conventions

In addition to the templates available from the Proxmox website, there are quite a number of usable templates available from the OpenVZ website: http://wiki.openvz.org/Download/template/precreated. However, the naming conventions used by Proxmox differ from those used by OpenVZ. To solve this, I've written a simple script which takes the name of an OpenVZ template as input and renames it to the Proxmox naming convention.

  Convert OpenVZ template name to Proxmox VE (1.7 KiB)
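For orientation, the conversion can be sketched roughly as follows. This is not the attached script, just a minimal illustration; the "-1" release suffix and the x86→i386 / x86_64→amd64 architecture mapping are my assumptions about the Proxmox 1.x naming pattern:

```shell
#!/bin/sh
# Sketch (assumptions noted above): map an OpenVZ template name of the
# form os-version-arch.tar.gz to the Proxmox VE 1.x style pattern
# os-version-standard_version-1_arch.tar.gz.
convert_name() {
    name=${1%.tar.gz}       # strip the extension
    arch=${name##*-}        # last dash-separated field, e.g. x86_64
    rest=${name%-*}         # os-version, e.g. centos-5
    version=${rest##*-}     # e.g. 5
    case $arch in
        x86)    parch=i386 ;;   # assumed mapping
        x86_64) parch=amd64 ;;  # assumed mapping
        *)      parch=$arch ;;
    esac
    echo "${rest}-standard_${version}-1_${parch}.tar.gz"
}

convert_name "centos-5-x86_64.tar.gz"   # → centos-5-standard_5-1_amd64.tar.gz
```

The attached script remains the authoritative version; in particular the release suffix on real Proxmox templates varies.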

 

6. Installing a Lucid container

Ubuntu is currently the most popular desktop distribution, and its server version is quite popular as well. For production environments, the LTS release of Ubuntu is usually used for stability. The most recent LTS release is 10.04, also known as Lucid.

This script will take a basic Lucid container and set it up with the most popular components: Apache, MySQL, PostgreSQL 8.3 and PostgreSQL 8.4. Each is optional, and if desired you can have both versions of PostgreSQL installed at the same time. To use it, simply copy the script to the container, log in as root and run it.

  Script to set up a Lucid container, with optional components (3.6 KiB)
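The component selection can be sketched roughly as follows. This is a dry-run illustration, not the attached script; the package names are my assumptions about the Lucid repositories, and the actual install line is left commented out:

```shell
#!/bin/sh
# Dry-run sketch of optional-component selection for a Lucid container.
# Toggle the WANT_* variables, then uncomment the apt-get line to install.
WANT_APACHE=yes
WANT_MYSQL=yes
WANT_PG83=no
WANT_PG84=yes

pkgs=""
[ "$WANT_APACHE" = yes ] && pkgs="$pkgs apache2"
[ "$WANT_MYSQL" = yes ]  && pkgs="$pkgs mysql-server"
[ "$WANT_PG83" = yes ]   && pkgs="$pkgs postgresql-8.3"
[ "$WANT_PG84" = yes ]   && pkgs="$pkgs postgresql-8.4"

echo "would install:$pkgs"
# apt-get update && apt-get -y install $pkgs
```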

 

7. Installing a Red Hat, CentOS or Scientific Linux container

Red Hat and its derivatives are among the most popular distributions in large production environments. This script will take a basic RH container and set it up with the most popular components: Apache, MySQL, PostgreSQL 8.3 and PostgreSQL 8.4. Each is optional, but only one version of PostgreSQL can be installed on the system. To use it, simply copy the script to the container, log in as root and run it.

  Set up a RH-based container, with optional components (3.9 KiB)

 

8. Add additional storage to Proxmox VE

Proxmox doesn't have any special tools for adding storage to the system. However, it uses LVM, which makes adding storage fairly easy.

This script walks you through adding storage. It is meant to be run after the physical storage has been added or made available. It assumes that any additional drives are not in use, and it will use the entire drive, so don't use it to add storage from a drive that is already in use.

It can also set up two new drives in a RAID-1 setup.  In this case, both drives must be identical in size.

Note that additional volume groups are only usable for virtual disks, i.e. KVM instances. If you wish to make the additional storage available to OpenVZ containers, do not add it as a new volume group; the script will instead add it to the standard volume group pve and resize the filesystem. If the filesystem is formatted as ext3 or reiserfs, you will have to shut down all instances before running the script so that the filesystem can be resized. If, however, you are using xfs or jfs, you don't have to shut down the instances.
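For orientation, the underlying LVM steps for the add-to-pve case look roughly like this. The sketch below only prints the commands rather than running them; /dev/sdc is an example device name, and resize2fs assumes the default ext3 filesystem:

```shell
#!/bin/sh
# Dry-run sketch: print the LVM commands that would fold a new, empty
# disk into the standard pve volume group and grow the data filesystem.
# Nothing here touches a disk; /dev/sdc is an example device name.
extend_pve() {
    disk=$1
    echo "pvcreate $disk"                        # tag the disk as an LVM physical volume
    echo "vgextend pve $disk"                    # add it to the pve volume group
    echo "lvextend -l +100%FREE /dev/pve/data"   # grow the data logical volume
    echo "resize2fs /dev/pve/data"               # grow the (ext3) filesystem to match
}

extend_pve /dev/sdc
```

The attached script handles the details (and the optional RAID-1 case); this is only the shape of the operation.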

Important: if you add storage to the system and put it in the standard volume group pve, be aware that if any of the underlying drives goes bad or is missing, the system will refuse to boot. RAID will protect you to some extent: if one RAID drive goes bad, the other should keep things running. But RAID is not an alternative to backups; I've seen many cases where multiple RAID drives go bad at the same time. If you aren't monitoring the RAID system and one drive fails, you will be running in degraded mode until you replace the bad drive. If you aren't aware of the bad drive and the other then fails, you lose your entire system.
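That monitoring is easy to script. The following is a minimal sketch of a degraded-array check that could run from cron; it relies on the convention that /proc/mdstat marks a failed mirror member with an underscore in the [UU] status field, and demonstrates against a sample file rather than a live system:

```shell
#!/bin/sh
# Minimal sketch: report whether an mdstat-format file shows a degraded
# array. On a real system, pass /proc/mdstat; a sample file is used here.
check_mdstat() {
    # A degraded member shows up as "_" in the status field, e.g. [U_].
    if grep -q '\[U*_[U_]*\]' "$1"; then
        echo DEGRADED
    else
        echo OK
    fi
}

# Demonstrate against a sample healthy RAID-1 entry.
printf 'md1 : active raid1 sdb1[1] sda1[0]\n      1024 blocks [2/2] [UU]\n' > /tmp/mdstat.sample
check_mdstat /tmp/mdstat.sample   # → OK
```

In practice mdadm's own monitor mode (mdadm --monitor) can mail you on failure; the point is simply to have something watching.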

  Add additional storage to a Proxmox server (5.2 KiB, 778 hits)

 

9. Additional templates

In addition to the templates available from the Proxmox site via the Download tab, there are many container templates available on the OpenVZ site at:

http://wiki.openvz.org/Download/template/precreated

As stated before, they all work but will need to be renamed. Use this script to rename any downloaded templates to the Proxmox naming standard:

  Convert OpenVZ template name to Proxmox VE (1.7 KiB)

 

10.  Additional reference

For additional reference for using Proxmox with DRBD, see:

https://188.165.145.220/mediawiki/index.php?title=DRBD&redirect=no

Posted in Virtualization

4 Responses to “Proxmox VE”

Comment from Ben J
Time May 28, 2012 at 8:43 pm

Great writeup. In your script in Step 4, you need to add after line 186:
“vgreduce pve $initlvm”

Comment from jbayer
Time May 28, 2012 at 8:45 pm

Thank you. I’ve changed jobs since I wrote this, and am not currently using Proxmox, but appreciate improvements like this.

Comment from Ben J
Time August 30, 2012 at 10:07 pm

Thanks for amending the script, it’s been very handy and I’m sure it will help many others going forward.

Here’s another workaround for a bug in pvmove, which sometimes deadlocks and ends badly for the machine (tested on the latest Proxmox 2.1-f9b0f63a-26):

#######
# pvmove can deadlock and freeze.
# use lvconvert to setup a lvm mirror
#######
#pvmove -v $initlvm
date;
lvconvert -m 1 /dev/pve/data /dev/md1;
lvconvert -m 1 /dev/pve/root /dev/md1;
lvconvert -m 1 /dev/pve/swap /dev/md1;
date;
### Remove original disk from the LVM mirror.
lvconvert -m 0 /dev/pve/data $initlvm
lvconvert -m 0 /dev/pve/root $initlvm
lvconvert -m 0 /dev/pve/swap $initlvm

vgreduce pve $initlvm

Comment from jbayer
Time August 31, 2012 at 9:32 am

Thanks.

I’m not using proxmox these days, so I appreciate this.
