I normally use CentOS 5.0 as the operating system for the servers that I build for myself and my clients. This page details how I set up CentOS using LVM, the Linux Logical Volume Manager, to manage the disks. The advantage to me is that I can later create as many partitions as I need for other things, such as xenU clients.
When I build a "multi-headed" server running Xen, I usually set things up so that the xen0 session will ONLY be used to manage the other xen sessions. The idea is that the actual "worker" sessions, which provide services to other machines, should be running in other child sessions, so that they can't accidentally change anything on the hardware or on other client sessions.
The examples shown below are from the server I'm building for myself in 2007-12, which may be serving the page you're reading right now. The machine is a Compaq DL360 with dual 1GHz processors, 3.5GB RAM, and dual 147GB 10Krpm disks configured as a RAID-1 container using the hardware RAID controller. It has two ethernet interfaces, one connected to the Internet and the other with nothing plugged into it.
The Xen configuration looks like this:
There are two network bridges- one attached to eth0 so that the clients can access the Internet, and one attached to eth1 (which has no wire plugged into it) to serve as a private network segment within the machine itself. The Xen documentation mentions that this is possible, but doesn't explain how to do it; a Google search turned up this web page, which explained the setup.
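For what it's worth, the trick comes down to a small wrapper around the network-bridge script that ships in /etc/xen/scripts. The sketch below is my reconstruction of the idea- the script name "network-dual-bridge" is just an example, and the file needs to be made executable (chmod +x):

#!/bin/sh
# /etc/xen/scripts/network-dual-bridge (example name)
# Run the stock network-bridge script once per interface:
# xenbr0 on eth0 (public), xenbr1 on eth1 (private).
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1

Then point xend at the wrapper by changing the network-script line in /etc/xen/xend-config.sxp to read "(network-script network-dual-bridge)".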
The xen0 session is very small- 15GB for the root filesystem and 512MB of swap space. It runs only three things: an ssh service on a non-standard port number, a cron job which downloads a mirror of the CentOS "base" and "updates" repositories, and an FTP server bound to the IP address on the internal segment. This lets the child sessions do updates without the machine downloading multiple copies of each package, and without making my machine into a global CentOS mirror. (Not that I would mind doing it, but I don't have the bandwidth available to support it.)
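To give an idea of how a child session uses that mirror, its yum configuration just points at the FTP server on the internal segment. The IP address and directory layout in this sketch are made up for the example- substitute your own:

# /etc/yum.repos.d/CentOS-Base.repo on a xenU client
# (10.1.1.1 and the /centos/... path are hypothetical)
[base]
name=CentOS-$releasever - Base
baseurl=ftp://10.1.1.1/centos/$releasever/os/$basearch/
gpgcheck=1

[updates]
name=CentOS-$releasever - Updates
baseurl=ftp://10.1.1.1/centos/$releasever/updates/$basearch/
gpgcheck=1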
The first xenU session will be my own server, where I will be serving my web pages and handling my email. If you're reading this page after mid-December 2007, chances are this is the session which served you the web page.
Other xenU sessions are used by my friends and clients.
LVM works using three layers of containers:
A PV (physical volume) is a physical device, usually a disk partition, which is available to hold data.
A VG (volume group) is a "virtual disk" which is made up of one or more PVs, and is then split between LVs. Think of it as a "pool" of disk space from which you can create partitions. If you add physical disks to a system, you can create PVs on those disks, and add those PVs to the existing VG, giving you more space to build or expand your LVs.
A LV (logical volume) is a "virtual partition" which usually contains a filesystem. Think of it the same way you would think of a normal partition on a disk, except that you aren't limited to only four of them, and they don't exist within a specific disk- they exist within a VG and can therefore span multiple physical disks.
The only real problem I have encountered with LVM is that the grub bootloader does not know how to use LVM, and therefore cannot boot from an LVM logical volume. For this reason we will be creating one physical partition for "/boot", and using the remainder of the drive as an LVM physical volume.
There is a lot more to say about LVM, but this page isn't the place for it. Here is a list of links where you can learn more about LVM:
We will be building a very simple LVM system- one PV, one VG, and (for now) two LVs. Additional xenU sessions will involve creating more LVs, for which we will leave room. If you physically run out of space, you can add a second physical disk to the machine and then add its space to be used for any new LVs you may create.
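To give a rough idea of what that looks like, the commands below add a hypothetical second disk (shown here as /dev/sdb1, already partitioned as type 8e) to the VG we'll be creating below, and carve a new LV out of the added space:

pvcreate /dev/sdb1               # turn the new partition into an LVM physical volume
vgextend Disks /dev/sdb1         # add it to the existing "Disks" volume group
lvcreate -L 10G -n 1root Disks   # create a new LV for another xenU session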
The CentOS installer sets the system up to use LVM automatically. However, it uses container names like "VolGroup00", which I don't like. In order to control the names of the various containers, I normally create the LVM structures by hand.
I prefer using the CentOS text-mode installer- that's just a personal preference. I've been doing it since the days of RedHat 5.1 and it's what I'm used to. These steps can be followed using the graphical installer as well, but the example below walks through the text-mode installer.
The first step, of course, is to boot the first CD and tell it to use the text-mode installer. I normally add "vga=791" to the mix, because I like how a 128x48 console looks on the screen.
boot: linux text vga=791
The installer starts by asking whether I want to test my installation CDs. It shows a window with two buttons, "Check" and "Skip". I always test the CDs the first time I use them, but once I've used them to do a successful install, I don't bother testing them again.
Using the text-mode windows: To use the text-mode windows (which are generated by the "ncurses" library) you can use the TAB key to move from one interface item to the next, the directional arrows to select between different items in a list, and the SPACE BAR to "push" a button. Which means in this case, to skip the test I pressed TAB (to move to the "Skip" button) and then SPACE BAR (to "press" the button.)
When the installer starts, the first question it asks is what language to use. At this point, the necessary tools are present and we can manually set up the LVM containers the way we want them.
In order to do this, we need to access a command line. The installer runs one on TTY 2. You can access this by pressing ALT-F2 (or if you're using the GUI installer, CTRL-ALT-F2.)
The first step is to physically partition the disk. This sets up the two physical partitions- a 150MB partition for "/boot", and the balance of the disk for what will be an LVM PV (physical volume).
My server's SCSI controller is a Compaq hardware RAID controller, using the "cpqarray" driver. It presents the RAID containers to the kernel as entries within the "/dev/ida/" tree. Your system may use something like "/dev/hda" or "/dev/sda" for this.
sh-3.1# fdisk /dev/ida/c0d0

Command (m for help): o
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 35211.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-35211, default 1): ENTER
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-35211, default 35211): +150m

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (38-35211, default 38): ENTER
Using default value 38
Last cylinder or +size or +sizeM or +sizeK (38-35211, default 35211): ENTER
Using default value 35211

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/ida/c0d0: 147.1 GB, 147108741120 bytes
255 heads, 32 sectors/track, 35211 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

          Device Boot    Start       End      Blocks   Id  System
/dev/ida/c0d0p1   *           1        37      150944   83  Linux
/dev/ida/c0d0p2              38     35211   143509920   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
sh-3.1#
The next step is to create the LVM structures within the new partition. I will be using the name "Disks" as the VG name (which becomes part of the device name within the running system.) I will also be creating two LVs, called "0swap" and "0root", within the VG. These LVs will be the swap space and root filesystem for the "xen0" client.
sh-3.1# lvm
lvm> pvcreate /dev/ida/c0d0p2
  Physical volume "/dev/ida/c0d0p2" successfully created
lvm> vgcreate Disks /dev/ida/c0d0p2
  Volume group "Disks" successfully created
lvm> lvcreate -L 512M -n 0swap Disks
  Logical volume "0swap" created
lvm> lvcreate -L 15G -n 0root Disks
  Logical volume "0root" created
lvm> exit
sh-3.1#
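If you want to double-check the results before moving on, the standard LVM reporting commands work from the same lvm prompt (output omitted here):

sh-3.1# lvm
lvm> pvs
lvm> vgs
lvm> lvs
lvm> exit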
At this point you can press ALT-F1 (or ALT-F7, if you were using the GUI installer) to pick up where we left off in the installer.
When the installer asks about which disks to use and how to partition them, make sure you choose the "Custom" option. This will bring up the installer's "Disk Druid" interface. Because we have created the LVM structures, you will see these items on Disk Druid's screen, in addition to the physical partitions you are used to seeing.
  Device          Start    End     Size      Type        Mount Point
  VG Disks                         140146M   VolGroup
    LV 0swap                       512M      foreign
    LV 0root                       15360M    foreign
  /dev/ida/c0d0
    c0d0p1        1        37      103M      ext3
    c0d0p2        38       35211   140146M   physical v

     New        Edit       Delete       RAID        OK        Back
The first step is to move down to "/dev/ida/c0d0p1" and press F3 (or "Edit".) It will pop up another window like this:
  Mount Point:        <Not Applicable>_____
  File System Type:   Linux native
  Size (MB):          101
  File System Option: Leave unchanged

       OK        File System Options        Cancel
Click the "File System Options" button.
  Please choose how you would like to prepare the file
  system on this partition.

  ( ) Leave unchanged    (preserve data)

  (*) Format as:         ext2
                         ext3   #

       OK        Cancel
Choose "Format as:" (i.e. press DOWN so the cursor is on the space next to "Format as:", then press SPACE) and then make sure the filesystem type is set to "ext3" (i.e. press TAB to move into the list of filesystem types and use the up and down arrows until "ext3" is selected.) Press OK to return to the first dialog, and enter "/boot" as the mount point.
  Mount Point:        /boot_______________
  File System Type:   ext3
  Size (MB):          101
  File System Option: Format as ext3

       OK        File System Options        Cancel
Next we move up to the "LV 0swap" line and press F3 (or "Edit".) It will pop up the same window as before- select "File System Options", and on that window choose "Format as:" and set the type to "swap". (Use the up and down arrows; "swap" is on the list, but the text interface isn't very good at showing a scroll bar to tell you that there are more than two items on the list.) Select OK, then OK on the first window.
Now move down to the "LV 0root" line and press F3 (or "Edit".) It will pop up the same window, again select "File System Options" and choose "Format as:" and "ext3". Press OK. Change the mount point to "/" and then press OK.
When it's done, you should see the following:
  Device          Start    End     Size      Type        Mount Point
  VG Disks                         140146M   VolGroup
    LV 0swap                       512M      swap
    LV 0root                       15360M    ext3        /
  /dev/ida/c0d0
    c0d0p1        1        37      103M      ext3        /boot
    c0d0p2        38       35211   140146M   physical v

     New        Edit       Delete       RAID        OK        Back
Now press F12 (or the "OK" button) to finalize the use of each partition. The installer will show you a summary of what you have chosen, which will look something like this:
  The following pre-existing partitions have been selected to be
  formatted, destroying all data.

  Select 'Yes' to continue and format these partitions, or 'No' to go
  back and change these settings.

  /dev/ida/c0d0p1     ext3    /boot
  /dev/Disks/0root    ext3    /
  /dev/Disks/0swap    swap

       Yes        No
When you click the YES button, the installer will continue like any other CentOS install.
When you reach the package selection portion of the installer, no matter what other packages you do or don't install, make sure that you include the "Virtualization" category. This is what installs the Xen software, and makes the installer set the machine up to boot directly into Xen.
My recommendation is that you UN-check every other option. The idea is to install as little as possible. Remember that the install you're doing right now will become the xen0 session, and will probably not be used for much more than maintenance on xenU sessions.
After the installation is done the machine will reboot. If you watch the messages scroll by on the screen, you will see a bunch of Xen-related messages, followed by the normal Linux kernel stuff, and the init processing.
Log in as root, and run the command "xm list" to make sure Xen is running correctly. You should see a list of the sessions which are running, and there will only be one entry on the list, like so:
# xm list
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0     3552     2 r-----   115.3
Note that the "xm" command requires the "xend" process to be running in the session, and xend will only be able to start if it's running in a xen0 session, above a Xen hypervisor. If it isn't started, you may want to try starting it by hand (i.e. "service xend start".) If it won't start, try rebooting and make sure that you choose an option from the grub menu which specifies Xen- the installer may have created both Xen and non-Xen boot options for you. If that doesn't help, something may have gone wrong with the install. You may want to walk through the install again, or ask for help on a CentOS support web forum or mailing list.
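One quick sanity check is the running kernel version- if you booted the Xen kernel, "uname -r" should report a version ending in "xen", matching the kernel installed by the "Virtualization" category:

# uname -r
2.6.18-8.1.15.el5xen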
I personally have no use for IPv6 at the current time. However, the CentOS installer configures the machine to speak IPv6 even if you tell it not to. I always disable IPv6 on my machines.
If you want/need to disable IPv6, do the following:
Edit the /etc/sysconfig/network file. If a "NETWORKING_IPV6=yes" line exists, change the "yes" to "no". Otherwise, add a "NETWORKING_IPV6=no" line to the end of the file.
Edit each of the /etc/sysconfig/network-scripts/ifcfg-* files. If they contain an "IPV6INIT=yes" line, change the "yes" to "no". Otherwise, add an "IPV6INIT=no" line to the end of the file. (A shell loop for this step is sketched after this list.)
Edit the /etc/modprobe.conf file. Add these two lines:
alias net-pf-10 off
alias ipv6 off
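If there are more than a couple of ifcfg-* files to edit, a small shell loop saves some typing. This is just a sketch- it blindly appends to any file that doesn't already have an IPV6INIT line (including ifcfg-lo, which is harmless):

cd /etc/sysconfig/network-scripts
for f in ifcfg-*; do
    if grep -q '^IPV6INIT=' "$f"; then
        sed -i 's/^IPV6INIT=yes$/IPV6INIT=no/' "$f"   # flip yes to no
    else
        echo 'IPV6INIT=no' >> "$f"                    # add the line if missing
    fi
done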
Remember that each session has its own networking settings. Disabling IPv6 on the xen0 session will not automatically disable it for the child sessions; you may need to do this on every child. And of course, reboot the session after making the change (or reboot the machine, if the session was the xen0 session.)
This section was added after the machine had been "live", with child sessions on it, for about a week.
One issue I ran into is that when the machine boots up and the xendomains init script starts the child sessions, by default it only waits five seconds between starting one child and starting the next. This means the CPU becomes overloaded by the children all trying to run their own startup scripts at the same time. (Remember that when a machine starts up, virtual or physical, the CPU usage is almost 100% while it's running the startup scripts.)
I found that changing this delay from 5 seconds to 15 seconds made the machine start up more smoothly, and made each child start up more smoothly as well, because the children weren't competing with each other for the "100% CPU" that they needed in order to run their startup scripts.
To change the delay, edit "/etc/sysconfig/xendomains".
Find this line (line 32 on my system)...
XENDOMAINS_CREATE_USLEEP=5000000
Change the delay...
XENDOMAINS_CREATE_USLEEP=15000000
You may or may not need to change this on your own system, or you may need a delay other than fifteen seconds. I basically guessed, and fifteen seconds seems to work for my hardware and five child sessions.
Obviously you'll need to reboot the machine after making this change.
This section was added after the machine had been "live", with child sessions on it, for about a week.
Another issue that I ran into is that the CentOS installer creates your "xen0" session without explicitly setting a memory limit, which means the xen0 session starts up using the entire memory space of the machine, and then allows the hypervisor to take chunks away as new child sessions are started.
This can be a good thing on a test machine, but for a real production server, you don't really want to have the xen0 session being resized that often. You will run into issues when the machine reboots, especially if you have more than one or two child sessions which automatically start with the machine.
On my own server, I was seeing this message, over and over again:
xen_net: Memory squeeze in netback driver.
I solved this by pre-setting the size of the xen0 session's memory when the machine boots. This is done by editing the /boot/grub/grub.conf file, finding the kernel line which loads the hypervisor, and adding one directive to it, like so:
title CentOS (2.6.18-8.1.15.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-8.1.15.el5 dom0_mem=512M
        module /vmlinuz-2.6.18-8.1.15.el5xen ro root=/dev/Disks/0root
        module /initrd-2.6.18-8.1.15.el5xen.img
After making these changes, reboot the machine... and after any kernel upgrade on the xen0 session, make sure this change gets copied into the new grub entry before rebooting into the updated kernel.
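After the reboot, "xm list" should show the lower memory figure for Domain-0 (the time value below is just illustrative):

# xm list
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      512     2 r-----    21.7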