Once you have a working machine which boots the Xen hypervisor and a CentOS 5.0 xen0 session, the next step is to create the first xenU client session.
All of the instructions on this page need to be done within the xen0 session.
The first step is to create the LVs (logical volumes) which the client will use as their root and swap filesystems. This is done using the "lvcreate" command. You will need to know the name of the VG (volume group) in which you want to create the LVs. On my server (and on this web site) we will use the name "Disks". Note that the name IS case-sensitive; if you create it using an all-lowercase name, you will always need to access it using the all-lowercase name.
For this example, the swap LV will be 512MB and the root LV will be 5GB.
# lvcreate -L 512M -n client_swap Disks
# lvcreate -L 5G -n client_root Disks
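If you build a lot of clients, the two commands above can be wrapped in a small helper. This is a minimal sketch of my own- the "make_lv_commands" function, the client name, and the sizes are all assumptions, and it only *prints* the commands so you can review them before actually running anything against your "Disks" VG.

```shell
#!/bin/sh
# Hypothetical helper: print the lvcreate commands for a new client.
# The function name, arguments, and the VG name "Disks" are assumptions;
# review the output, then pipe it to "sh" if it looks right.
make_lv_commands() {
    name="$1" ; swap_size="$2" ; root_size="$3"
    echo "lvcreate -L $swap_size -n ${name}_swap Disks"
    echo "lvcreate -L $root_size -n ${name}_root Disks"
}

make_lv_commands client 512M 5G
```

Printing instead of executing is deliberate- lvcreate changes are not easily undone, so a dry run costs nothing.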
Next we need to initialize the swap LV (so that Linux can use it as swap space) and create an ext3 filesystem within the root LV.
# mkswap /dev/Disks/client_swap
# mke2fs -j /dev/Disks/client_root
At this point the swap LV is ready to be used, but we need to install an operating system in the root LV.
We are going to be doing most of what the CentOS installer does when it sets up a new system- creating a few basic directories and files, installing a set of packages which make up the core system, and doing a certain amount of post-install configuration on what we've just installed.
In order to do this, we need to first mount the new root LV so that we can work with it. And in order to mount a filesystem, we need to first create the directory where it will appear within the existing filesystem.
The example here uses /mnt/work as the mount point. The rest of this page is written to use the same location. If you choose to use a different mount point, you will need to make the same adjustment several times as you follow along with this page.
# mkdir -m 755 /mnt/work
# mount /dev/Disks/client_root /mnt/work
There are a few very basic items which need to exist in a filesystem before it can serve as the root of a Linux system, and before most Linux commands will work correctly within it. We will be using a form of the "yum" command to install the bulk of the packages, and that yum command will be running in a "chroot" environment, so these basic items must exist within that chroot environment.
Note that the filenames on the mknod commands start with "dev/", NOT "/dev/". There is no "/" at the beginning of the names. This is because we are creating entries in the new directories within the client root LV, not in the directories of the running xen0 system.
# cd /mnt/work
# mkdir -m 755 dev etc proc var var/log
# mknod -m 600 dev/console c 5 1
# mknod -m 666 dev/null c 1 3
# mknod -m 666 dev/zero c 1 5
We also need to create an /etc/fstab file...
# cd /mnt/work
# cat > etc/fstab <<EOF
/dev/xvda1    /           ext3      defaults          1 1
/dev/xvda2    swap        swap      defaults          0 0
none          /dev/pts    devpts    gid=6,mode=620    0 0
none          /dev/shm    tmpfs     defaults          0 0
none          /proc       proc      defaults          0 0
none          /sys        sysfs     defaults          0 0
EOF
# chmod 644 etc/fstab
And we need a working /proc as well. Luckily we can mount a second instance of the one we're already using.
# mount -t proc none /mnt/work/proc
Next, the client's RPM database (the underlying database used by "yum") needs to know about the GPG key which is used to sign the official CentOS packages. This allows the client to verify that a package hasn't been changed since CentOS released it.
Even though you probably have a copy of this file on your local FTP server, I recommend you download it directly from the CentOS server, just to be 100% sure.
# rpm --root /mnt/work --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
Once everything is set up, the actual installation consists of a single yum command which installs the packages. This involves downloading the 140-150 "core" packages from whatever yum repository the xen0 session is configured to use. (If you are running a local repository, please make sure the xen0 session itself is configured to use it before continuing.)
Once you have verified where the packages will come from, install them using this command:
# yum --installroot=/mnt/work -y groupinstall Core
At this point, the bulk of the system is present, and just needs to be configured.
There is an issue with some of the system libraries which causes statically linked programs to spew thousands of "4gb seg fixup" warnings. This is fixed with the following commands:
# install -m 444 /etc/ld.so.conf.d/* /mnt/work/etc/ld.so.conf.d/
# chroot /mnt/work ldconfig
Note that if you upgrade the kernel on the xen0 session, the new kernel package will add or update a file in the /etc/ld.so.conf.d directory. This file will need to be copied to the same location within the child sessions.
Probably the most important thing is the kernel modules. The yum command above didn't install any modules in the new system; installing them is normally the job of the installer. In this situation, WE are the installer, so we need to do it ourselves.
# cp -a /lib/modules/`uname -r` /mnt/work/lib/modules/
Those are backticks around "uname -r", not single quotes. Backticks are on the key to the left of the number "1" on the keyboard.
# chroot /mnt/work depmod -a
Note that if you upgrade the kernel used by the child sessions, you will need to copy the appropriate "/lib/modules/" directory into the same location within each child session before rebooting the child.
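The copy-and-depmod step can be sketched as a small function. This is my own sketch, not part of any standard tool- the function name and its arguments are assumptions, and it takes the source root and the mounted client root as parameters so it can be exercised against any directory tree.

```shell
# Hypothetical helper: copy one kernel's module tree into a mounted
# client root. "src_root" would normally be "" (i.e. the running xen0
# system) and "dst_root" would be /mnt/work.
copy_modules() {
    src_root="$1" ; dst_root="$2" ; kver="$3"
    mkdir -p "$dst_root/lib/modules"
    cp -a "$src_root/lib/modules/$kver" "$dst_root/lib/modules/"
}

# Example (xen0 -> client), followed by the depmod from above:
#   copy_modules "" /mnt/work "`uname -r`"
#   chroot /mnt/work depmod -a
```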
In addition, we need to generate an "initrd" (Initial RAM Disk) for use by the child sessions. This file is loaded as a "ramdisk" during the boot process, and contains the kernel modules needed to load the drivers for the hard drives and filesystems- in other words, it makes those modules available to the kernel before the drivers for the hard drive and root filesystem have been loaded.
I normally store these "initrd" files within the /etc/xen directory on the xen0 system. Note that if you have other child sessions using the same kernel, you will have this file already.
These commands (the last one is all one line, or you can type it as shown here, as multiple lines with backslashes) will generate the initrd file.
First find the full kernel version number...
# uname -r
Then plug it into the "mkinitrd" command...
# mkinitrd -v --preload xenblk --with=xennet \
    --omit-scsi-modules --omit-raid-modules --omit-lvm-modules \
    /etc/xen/initrd-2.6.18-8.1.15.el5xenU.img 2.6.18-8.1.15.el5xen
Before installing any additional packages, we first need to configure the new session to use the yum repository on the local machine. Assuming the xen0 session was configured using the /etc/yum.repos.d/jms1.repo file (as detailed on the previous page) we can do this:
# install -m 644 /etc/yum.repos.d/jms1.repo /mnt/work/etc/yum.repos.d/
# cd /mnt/work/etc/yum.repos.d
# mv CentOS-Base.repo CentOS-Base.repo.not
# mv CentOS-Media.repo CentOS-Media.repo.not
There are a few other basic packages I like to have installed on a new system, which are not part of the CentOS "Core" group. In particular, I use tcsh as my login shell on the xen0 system, which means it needs to be present within the chroot environment in order for the "chroot" command to work properly. Feel free to add other package names to this list if you like.
# yum --installroot=/mnt/work -y install tcsh nano crontabs anacron vixie-cron lynx man
There is a security framework developed by the NSA called SELinux. It adds an entire system of rights, roles, and permissions above the Linux kernel which, if properly configured, can greatly enhance the security of a system. However, it is fairly new technology and most people, myself included, simply aren't comfortable with it yet.
SELinux is part of CentOS 5. It can be configured in one of three basic modes: "enforcing", where operations which violate SELinux's rules are blocked; "permissive", where operations which violate SELinux's rules are logged but still allowed to happen; and "disabled", which disables all of the security checks entirely.
When the packages above were installed, the new system was configured to use "enforcing" mode. In most cases you will want to change this, so that the new client won't be confused by SELinux blocking things.
Edit the file /mnt/work/etc/selinux/config and change the SELINUX=enforcing line to say either SELINUX=permissive or SELINUX=disabled.
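If you'd rather not open an editor, the same edit can be done non-interactively. This is a sketch of my own, assuming GNU sed's "-i" (in-place) option, which is present in CentOS 5; the function name is mine.

```shell
# Hypothetical helper: switch the SELINUX= line in a selinux config
# file to the given mode ("permissive" or "disabled").
set_selinux_mode() {
    file="$1" ; mode="$2"
    sed -i "s/^SELINUX=.*/SELINUX=$mode/" "$file"
}

# Example:
#   set_selinux_mode /mnt/work/etc/selinux/config permissive
```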
If we were to start the new client as it stands, we wouldn't be able to access it at all, because it wouldn't know how to find the virtual network interfaces. We need to create an /etc/modprobe.conf file to tell it which driver to use.
While we're in there, we can also disable IPv6 support within the new client. You may or may not wish to do this; personally I turn off IPv6 support on my machines because I have no need for IPv6 on any of them.
Edit the /mnt/work/etc/modprobe.conf file, which will probably not exist yet. It should look like this:
alias eth0 xennet Set driver for eth0 interface
alias eth1 xennet Set driver for eth1 interface
alias net-pf-10 off Disable IPv6
alias ipv6 off Disable IPv6
We also need to configure the interfaces themselves.
To configure the eth0 interface, we need to create the file /mnt/work/etc/sysconfig/network-scripts/ifcfg-eth0 with the following contents:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.x            Use the real IP address for this interface.
NETMASK=255.255.255.0     Use the correct netmask for your network.
GATEWAY=x.x.x.x           Use the real default gateway address.
IPV6INIT=no               Use "yes" if you plan to use IPv6.
The same information is needed for the eth1 interface, except that we don't need a GATEWAY variable (since the default gateway is not accessed through the eth1 interface.) We need to create the file /mnt/work/etc/sysconfig/network-scripts/ifcfg-eth1 with the following contents:
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=x.x.x.x            Use the real IP address for this interface.
NETMASK=255.255.255.0     Use the correct netmask for your network.
IPV6INIT=no               Use "yes" if you plan to use IPv6.
There are a few networking settings which affect the entire system. These are set in the /mnt/work/etc/sysconfig/network file:
NETWORKING=yes
NETWORKING_IPV6=no        Use "yes" if you plan to use IPv6
HOSTNAME=hostname.domain.xyz Use the real hostname for this virtual machine.
We also need to configure the DNS resolver, so the session will know how to convert DNS names into IP addresses. Create or edit the /mnt/work/etc/resolv.conf file with the following contents:
search domain.xyz OPTIONAL: domain name suffix to try when resolving
names with no domain.
nameserver x.x.x.x Use one "nameserver" line for each DNS server.
We should also create a simple /mnt/work/etc/hosts file which can be used to resolve a few local names, in case DNS isn't working. The file needs to look like the following, where "x.x.x.x" is the normal IP address assigned to the session, and "hostname.domain" is the DNS name assigned to the session.
127.0.0.1    localhost
x.x.x.x      hostname.domain.xyz hostname
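These last two files can be generated with a short script. A minimal sketch, again my own invention- the "write_client_dns" function and its arguments are assumptions, the client root directory is a parameter so it works against /mnt/work or any test directory, and deriving the "search" domain by stripping the hostname's first label is my choice, not a requirement.

```shell
# Hypothetical helper: create etc/resolv.conf and etc/hosts under a
# client root directory. All names here are assumptions.
write_client_dns() {
    root="$1" ; host="$2" ; ip="$3" ; ns="$4"
    mkdir -p "$root/etc"
    # resolv.conf: search domain (hostname minus first label) + nameserver
    printf 'search %s\nnameserver %s\n' "${host#*.}" "$ns" \
        > "$root/etc/resolv.conf"
    # hosts: localhost plus the session's own name, long and short forms
    printf '127.0.0.1\tlocalhost\n%s\t%s %s\n' "$ip" "$host" "${host%%.*}" \
        > "$root/etc/hosts"
}

# Example:
#   write_client_dns /mnt/work hostname.domain.xyz x.x.x.x x.x.x.x
```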
Because this will be a virtual machine, it won't have a physical keyboard and display to use. The normal "/dev/ttyn" devices depend on having this hardware, which means they won't work for a virtual machine.
We can edit the /mnt/work/etc/inittab file and comment out the lines which would normally start "getty" processes to handle logins on the console devices. We also need to add a line to run a "getty" on the device which corresponds to the "console" channel. This channel allows the xen0 session to access what would normally be the physical console of each xenU child session, so that if the child's networking stops working, somebody can still get into the child and fix it.
Edit the "/mnt/work/etc/inittab" file. At the end of the file you will see a series of lines like this:
# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
Comment these lines out by putting a "#" at the beginning of each one, like so:
# Run gettys in standard runlevels
#1:2345:respawn:/sbin/mingetty tty1
#2:2345:respawn:/sbin/mingetty tty2
#3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6
Below these lines, add this line, which will cause a login prompt to appear on the "console" channel accessible from the xen0 session.
xen:12345:respawn:/sbin/mingetty --noclear console
You may or may not wish to do this. CentOS (well, RedHat) has a program called "kudzu" which scans for hardware changes whenever a system boots, and makes changes to the system's configuration so that new devices work and "missing" devices aren't searched for. I don't think it makes much sense to run this for a virtual machine whose "hardware" should never change, so I always disable it.
# chroot /mnt/work chkconfig --level 12345 kudzu off
We need to set a root password before anybody will be able to log into the new child session. Before we can do this, we need to run the "pwconv" program to create the /etc/shadow file.
# chroot /mnt/work pwconv
# chroot /mnt/work passwd root
Changing password for user root.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
You may see warnings about the password you choose. Because you are changing the password while running as root, these are only warnings- they will not prevent you from using the password you have chosen. However, it's probably a good idea to take their warnings seriously, unless you know that the password will be changed by the client as soon as they log into the virtual machine.
There are a few other minor things I always do on the servers I build. One is to change the port number which sshd listens on (by changing the Port line in /etc/ssh/sshd_config); another is to install my ssh public key so that I am able to log into "root" on the child session (1) without using a password, and (2) even if the client changes their root password and forgets what it is.
If there are other things you'd like to "pre-set" for the child session, they can be done at this point.
When you are finished with the new child session's image, you need to un-mount it. Xen will not be able to start the child if you have its disk device mounted, and the error message it gives you doesn't really explain what the problem is.
Also remember that you have a "/proc" mounted within the child as well, so that chroot operations will work properly. This needs to be un-mounted first.
# umount /mnt/work/proc
# umount /mnt/work
If you see an error about the filesystem being "busy", that means that there is at least one open handle within the filesystem. Usually this means there's a terminal session whose "current directory" is within that filesystem, and you need to run "cd" in all of your terminal windows to make sure you're out of it.
DO NOT go any further until you have successfully un-mounted the filesystem. You can type "mount" by itself to see a list of all mounted filesystems- you should not see "/mnt/work" listed at all.
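That check can be scripted as well. A sketch of my own, assuming /proc/mounts is available (it is on any running Linux system); the "assert_unmounted" function name is my choice.

```shell
# Hypothetical helper: complain if the given mount point is still
# listed in /proc/mounts. Returns non-zero if it is still mounted.
assert_unmounted() {
    mp="$1"
    if grep -q " $mp " /proc/mounts ; then
        echo "ERROR: $mp is still mounted" >&2
        return 1
    fi
    return 0
}

# Example - check the inner /proc first, then the root itself:
#   assert_unmounted /mnt/work/proc && assert_unmounted /mnt/work
```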
The last part of the setup is to create a file within the /etc/xen directory which tells xm how to build and configure the actual child session. This is a sample of such a file:
name = "test"
vcpus = 1
memory = 256
disk = [ 'phy:/dev/Disks/test_root,xvda1,w' , 'phy:/dev/Disks/test_swap,xvda2,w' ]
vif = [ 'bridge=xenbr0' , 'bridge=xenbr1' ]
uuid = "00coffee-5d2d-421d-8407-facedeadbeef"
kernel = "/etc/xen/vmlinuz-2.6.18-8.1.15.el5xen"
ramdisk = "/etc/xen/initrd-2.6.18-8.1.15.el5xenU.img"
root = "/dev/xvda1 ro"
extra = "3"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
The lines you see here can technically be in any order you like. Empty lines are ignored. The grouping you see here makes sense to me- it first describes the virtual hardware (number of CPUs, amount of memory, disks, ethernet interfaces, etc.) and a UUID "serial number".
Technically, the UUID can be any 128-bit value you like, so long as the UUIDs of your sessions are all unique within the server- or, if you plan to set up multiple servers and may need to migrate VMs between servers, unique across ALL of your servers.
The second group are the options you would normally set in a boot loader config, such as grub.conf or lilo.conf. This sets the kernel and initrd that the child will use, along with any extra parameters to be passed on the kernel command line.
The last group tells xend what to do if the session stops. The settings here will allow you to shut a session down with "xm shutdown", allow a client to "reboot" their session using a normal "shutdown -r" command, and will make xend automatically start the session back up if it happens to crash for some reason.
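A file like the sample above can also be generated from a template. This is a sketch, not part of the Xen tools- the "make_xen_config" function and its arguments are my own, and the hard-coded kernel/ramdisk paths are assumptions which should match whatever versions you actually installed under /etc/xen.

```shell
# Hypothetical helper: write a minimal xenU config file. The kernel
# and ramdisk paths are assumptions - use the versions you installed.
make_xen_config() {
    file="$1" ; name="$2" ; mem="$3" ; uuid="$4"
    cat > "$file" <<EOF
name = "$name"
vcpus = 1
memory = $mem
disk = [ 'phy:/dev/Disks/${name}_root,xvda1,w' , 'phy:/dev/Disks/${name}_swap,xvda2,w' ]
vif = [ 'bridge=xenbr0' , 'bridge=xenbr1' ]
uuid = "$uuid"
kernel = "/etc/xen/vmlinuz-2.6.18-8.1.15.el5xen"
ramdisk = "/etc/xen/initrd-2.6.18-8.1.15.el5xenU.img"
root = "/dev/xvda1 ro"
extra = "3"
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
EOF
}

# Example (uuidgen generates a random UUID on most Linux systems):
#   make_xen_config /etc/xen/test test 256 "$(uuidgen)"
```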
Once the config file is created, you can start it running with the "xm create name" command. If there are no errors in the config file and the necessary resources are available (i.e. the disks, enough RAM, etc.) then the session will start up. Note that the "name" you specify here should be the filename of the config file, either specified as a path to the file, or if the file exists in the /etc/xen directory, just the filename.
If you wish to "watch" the session boot up, use the "-c" flag. The command will look like "xm create -c name". You can also "attach" to the console of a running session using the "xm console name" command. (Running "xm create -c" actually does the "create" and then immediately does the "console" command.)
When you are finished with the child's console and wish to return to the xen0 session, press "CONTROL-]" (i.e. hold down "CONTROL" and press the "]" key.) Note that this simply detaches from the console- if you were logged into the child and did this without logging out, the next time you "xm console" to that child, you will still be logged in, right where you left off.
On the xen0 session, there is a SysVinit service called "xendomains" whose job it is to start any child sessions which may need to be started as soon as the machine finishes booting. It does this by looking in the /etc/xen/auto directory, and running "xm create" on any sessions it finds there.
I don't honestly remember if the CentOS installer sets this service up to run automatically when the machine boots, but you can ensure that it will by running this command:
# chkconfig --level 345 xendomains on
While you could put the childrens' config files in the /etc/xen/auto directory, the usual method is to put the config files in the /etc/xen directory (which allows you to run "xm create" commands without having to specify the full path on the command line) and then create symbolic links in the /etc/xen/auto directory which point to the actual config files you wish to have automatically started.
# cd /etc/xen
# nano client1
Create the file using whatever options you need.
# xm create -c client1
Make sure the client session works as expected.
# cd /etc/xen/auto
# ln -s ../client1 .
Note that the "xendomains" init script will start the clients in alphabetical order, by the names of the entries in the /etc/xen/auto directory. This means that if you care about the order in which the clients start, you can control it by giving the symbolic links names which sort in the order you want.
For example, if you have children called "moe", "larry", and "curly", and you want them to start up in that order, you might do something like this:
# cd /etc/xen/auto
# ln -s ../moe 1
# ln -s ../larry 2
# ln -s ../curly 3
The "xendomains" script will run them in the order "1", "2", "3", and the sessions will start up with whatever names are specified in the "name=" line within the config file.
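The ordering trick above can be scripted too. A sketch under my own assumptions- the "order_autostart" function name, and taking the auto directory as a parameter so it can run anywhere, are both my choices.

```shell
# Hypothetical helper: create numbered symlinks in an "auto" directory
# so the clients start in the order given on the command line.
order_autostart() {
    auto_dir="$1" ; shift
    n=0
    for client in "$@" ; do
        n=$((n + 1))
        ln -s "../$client" "$auto_dir/$n"
    done
}

# Example:
#   order_autostart /etc/xen/auto moe larry curly
```

If you have more than nine clients, use zero-padded names ("01", "02", ...) instead, since the links are started in alphabetical order.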