Recent Adventures in oVirt and Gluster

At the end of last week, I spied an exciting tweet about oVirt:

[image: libgfapi-ready]

Not long after I started using oVirt and Gluster together, the projects started talking about a way to improve Gluster performance by enabling virtualization hosts to access Gluster volumes directly, using Gluster’s libgfapi, rather than through a FUSE-mounted location on the virtualization host. There was a little bit of fit and finish work to be done, and then we’d all be basking in the glow of ~30% better Gluster storage performance.

That was about four years ago. There ended up being kind of a lot of different little things that needed fixing to make this feature work in oVirt. You can follow many of the twists and turns in bugzilla.

All along, I was eagerly awaiting the feature both as a cool new oVirt+Gluster development and as a welcome option for speeding up my own lab. Disk has always been the weakest part of my hardware setup. My servers each have a single pair of 1TB drives in mirrored RAID, shared between Gluster and the OS, and my VMs’ virtual drives had been stored in triplicate in replica 3 Gluster volumes. More recently, with the advent of Gluster arbiter bricks, I’ve been able to get the split-brain protection of replica 3 volumes with only two copies of the data, and that sped things up a bit, but did nothing to dampen my appetite for libgfapi.

Since I need my oVirt setup to get things done, I usually don’t test RC versions of new oVirt components there, but I couldn’t wait any longer and took the plunge. I installed the RC2 updates on each of my virt hosts, and on my engine, I installed a slightly newer version of the code, from the experimental repo, which contained a few last bits that hadn’t made RC2. Then, on my engine, I ran:

# engine-config -s LibgfApiSupported=true
# systemctl restart ovirt-engine

Any VMs that were already running before the upgrade continued running without libgfapi, and if I migrated them to another host, they’d turn up on that host still using the old access method. When I restarted my VMs, they returned using libgfapi. I could tell which was which by grepping through the qemu processes on a particular VM host.

# ps ax | grep qemu | grep 'file=gluster\|file=/rhev'

-drive file=/rhev/data-center/00000001-0001-0001-0001-00000000025e/616be2b6-71db-4f54-befd-be6a444775d7/images/3f7877e7-e532-44a0-8735-c7b2ca06de3b/48ee34fc-ae12-494c-892f-4229fe1fef9d

-drive file=gluster://10.0.20.1/data/616be2b6-71db-4f54-befd-be6a444775d7/images/6597f45a-51cd-4da5-b078-a2652baf78e4/cc3a575e-27b8-4176-b922-9466273153be

The qemu command lines are super long, so I cut them down to include just the line specifying the virtual drives. In the first example, the drive is being accessed through a FUSE mount; in the second, there’s a direct connection to the Gluster volume.

So, how was performance?

I tried a few different tests, starting with running dd on one of my VMs:

# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync && rm test

I ran this a bunch of times on a VM in both storage configurations and the libgfapi configuration came out about 44% faster on average.

For a more “real world” test, I figured I’d measure the time it takes to complete a common task of mine: configuring a test Kubernetes cluster from three Fedora Atomic Host VMs using the upstream ansible scripts. I recorded and averaged the time it took to complete this task across multiple runs on VMs running in each storage configuration, and found that libgfapi was 11% faster.

zram madness

Not too bad, but like I said earlier, my oVirt setup can use all the storage speed help it can get. My servers don’t have a lot of disk but they do have quite a bit of RAM, 256GB apiece, so I’ve long wondered how I could use that RAM to wring more speed out of my setup. For a few months I’ve been experimenting with using Gluster volumes backed by RAM-disks, using zram devices.

This actually works pretty well, and I was seeing speeds similar to what I get running on the SSD in my laptop. Of course, RAM-disks mean losing everything on the disk in the event of a reboot (expected or otherwise), but using replica 3 Gluster volumes, I could reboot one host at a time without losing everything else. Upon bringing back the rebooted host, I’d run a little script to recreate the zram device and the mount points, and then follow the Gluster instructions for replacing a failed brick.

# cat fast.sh
ZRAMSIZE=$((1024 * 1024 * 1024 * 50))   # 50GB RAM-disk for the "fast" brick
modprobe zram
echo ${ZRAMSIZE} > /sys/class/block/zram0/disksize
mkfs -t xfs /dev/zram0                  # format the zram device
mkdir -p /gluster-bricks/fast
mount /dev/zram0 /gluster-bricks/fast   # mount it where the brick lives
mkdir /gluster-bricks/fast/brick        # recreate the brick directory
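After that, getting the fresh brick back into its replica set means following the brick-replacement steps from the Gluster docs. One way to do it (a sketch, with a hypothetical host name and assuming the volume is named fast) is Gluster’s reset-brick command, followed by a full heal:

# gluster volume reset-brick fast host1:/gluster-bricks/fast/brick start
# gluster volume reset-brick fast host1:/gluster-bricks/fast/brick host1:/gluster-bricks/fast/brick commit force
# gluster volume heal fast full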

However, if all of my machines went down at once, due to a power failure in the lab or something like that, replication wouldn’t help me. I wondered if I could still get a significant boost out of a mixture of zram and regular disk backed volumes, with each of my servers hosting one zram-backed brick, one regular disk-backed brick, and one regular disk-backed arbiter brick, all combined into one distributed-replicated Gluster volume.
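From the command line, creating that sort of mixed volume looks something like the sketch below. The host names, brick paths, and brick count here are hypothetical, not my exact layout; the point is the brick ordering, since Gluster groups each consecutive set of three bricks into a replica set and treats the third brick in each set as the arbiter:

# gluster volume create fast replica 3 arbiter 1 \
    server1:/gluster-bricks/fast/brick server2:/gluster-bricks/data/brick1 server3:/gluster-bricks/arbiter/brick1 \
    server2:/gluster-bricks/fast/brick server3:/gluster-bricks/data/brick2 server1:/gluster-bricks/arbiter/brick2
# gluster volume start fast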

[image: brick-house]

I ran my same ansible-kubernetes setup tests with the VM drives hosted from my “fast” Gluster domain, and the tests ran 32% faster than with my regular disk-backed (and now libgfapi-enabled) “data” storage domain. Pretty nice, and, in this sort of setup, a power loss would mean that each of four replica groups would be missing one brick, with a remaining data brick and an arbiter brick still around to maintain the data and allow me to repair things.

I want to experiment a bit further with automated tiering in Gluster, where I’d connect a RAM-disk boosted volume like this to the volume for my main data domain, and frequently-accessed files would automatically migrate to the faster storage. As it is now, my fast domain has to be relatively small, so I have to budget my use of it.
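If and when I get to it, attaching a hot tier boils down to a single command. The sketch below uses hypothetical brick paths and my main volume’s name (data), and the syntax is from memory, so check the tiering docs for your Gluster version before trying it:

# gluster volume tier data attach replica 3 \
    server1:/gluster-bricks/fast/brick server2:/gluster-bricks/fast/brick server3:/gluster-bricks/fast/brick
# gluster volume tier data status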

oVirt 3.1, Glusterized

One of the cooler new features in oVirt 3.1 is the platform’s support for creating and managing Gluster volumes. oVirt’s web admin console now includes a graphical tool for configuring these volumes, and vdsm, the service responsible for controlling oVirt’s virtualization nodes, has a new sibling, vdsm-gluster, for handling the back end work.

Gluster and oVirt make a good team — the scale out, open source storage project provides a nice way of weaving the local storage on individual compute nodes into shared storage resources.

To demonstrate the basics of using oVirt’s new Gluster functionality, I’m going to take the all-in-one engine/node oVirt rig that I stepped through recently and convert it from an all-in-one node with local storage to a multi-node-ready configuration with shared storage provided by Gluster volumes that tap the local storage available on each of the nodes. (Thanks to Robert Middleswarth, whose blog posts on oVirt and Gluster I relied on while learning about the combo.)

The all-in-one installer leaves you with a single machine that hosts both the oVirt management server, aka ovirt-engine, and a virtualization node. For storage, the all-in-one setup uses a local directory for the data domain, and an NFS share on the single machine to host an iso domain, where OS install images are stored.

We’ll start the all-in-one to multi-node conversion by putting our local virtualization host, local_host, into maintenance mode by clicking the Hosts tab in the web admin console, clicking the local_host entry, and choosing “Maintenance” from the Hosts navigation bar.

Once local_host is in maintenance mode, we click edit, switch the host’s data center and cluster to Default using the drop down menus in the dialog box, and then hit OK to save the change.

This is assuming that you stuck with NFS as the default storage type while running through the engine-setup script. If not, head over to the Data Centers tab and edit the Default data center to set “NFS” as its type. Next, head to the Clusters tab, edit your Default cluster, fill the check box next to “Enable Gluster Service,” and hit OK to save your changes. Then, go back to the Hosts tab, highlight your host, and click Activate to bring it back from maintenance mode.

Now head to a terminal window on your engine machine. Fedora 17, the OS I’m using for this walkthrough, includes version 3.2 of Gluster. The oVirt/Gluster integration requires Gluster 3.3, so we need to configure a separate repository to get the newer packages:

# cd /etc/yum.repos.d/
# wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/fedora-glusterfs.repo

Next, install the vdsm-gluster package, restart the vdsm service, and start up the gluster service:

# yum install vdsm-gluster
# service vdsmd restart
# service glusterd start

The all-in-one installer configures an NFS share to host oVirt’s iso domain. We’re going to be exposing our Gluster volume via NFS, and since the kernel NFS server and Gluster’s NFS server don’t play nicely together, we have to disable the former.

# systemctl stop nfs-server.service && systemctl disable nfs-server.service

Through much trial and error, I found that it was also necessary to restart the wdmd service:

# systemctl restart wdmd.service

In the move from v3.0 to v3.1, oVirt dropped its NFSv3-only limitation, but that requirement remains for Gluster, so we have to edit /etc/nfsmount.conf and ensure that Defaultvers=3, Nfsvers=3, and Defaultproto=tcp.
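Concretely, that means making sure these lines are present and uncommented in /etc/nfsmount.conf (in the stock file, they live under the [ NFSMount_Global_Options ] section):

Defaultvers=3
Nfsvers=3
Defaultproto=tcp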

Next, edit /etc/sysconfig/iptables to add the firewall rules that Gluster requires. You can paste the rules in just before the reject lines in your config.

# glusterfs
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT

Then restart iptables:

# service iptables restart

Next, decide where you want to store your gluster volumes — I store mine under /data — and create this directory if need be:

# mkdir /data

Now, head back to the oVirt web admin console, visit the Volumes tab, and click Create Volume. Give your new volume a name, and choose a volume type from the drop down menu. For our first volume, let’s choose Distribute, and then click the Add Bricks button. Add a single brick to the new volume by typing the path you desire into the Brick Directory field, clicking Add, and then OK to save the changes.

Make sure that the box next to NFS is checked under Access Protocols, and then click OK. You should see your new volume listed — highlight it and click Start to start it up. Follow the same steps to create a second volume, which we’ll use for a new ISO domain.

For now, the Gluster volume manager neglects to set brick directory permissions correctly, so after adding bricks on a machine, you have to return to the terminal and run chown -R 36.36 /data (assuming /data is where you are storing your volume bricks) to enable oVirt to write to the volumes.

Once you’ve set your permissions, return to the Storage tab of the web admin console to add data and iso domains backed by the volumes we’ve created. Click New Domain, choose Default data center from the data center drop down, and Data / NFS from the storage type drop down. Fill the export path field with your engine’s host name and the volume name from the Gluster volume you created for the data domain. For instance: “demo1.localdomain:/data”

Wait for the data domain to become active, and repeat the above process for the iso domain. For more information on setting up storage domains in oVirt 3.1, see the quick start guide.

Once the iso domain comes up, BAM, you’re Glusterized. Now, compared to the default all-in-one install, things aren’t too different yet — you have one machine with everything packed into it. The difference is that your oVirt rig is ready to take on new nodes, which will be able to access the NFS-exposed data and iso domains, as well as contribute some of their own local storage into the pool.

To check this out, you’ll need a second test machine, with Fedora 17 installed (though you can recreate all of this on CentOS or another Enterprise Linux starting with the packages here). Take your F17 host (I start with a minimal install), install the oVirt release package, download the same fedora-glusterfs.repo we used above, and make sure your new host is accessible on the network from your engine machine, and vice versa. Also, the bug preventing F17 machines running a 3.5 or higher kernel from attaching to NFS domains isn’t fixed yet, so make sure you’re running a 3.3 or 3.4 version of the kernel.

Head over to the Hosts tab on your web admin console, click New, supply the requested information, and click OK. Your engine will reach out to your new F17 machine, and whip it into a new virtualization host. (For more info on adding hosts, again, see the quick start guide.)

Your new host will require most of the same Glusterizing setup steps that you applied to your engine server: make sure that vdsm-gluster is installed, edit /etc/nfsmount.conf, add the gluster-specific iptables rules and restart iptables, create and chown 36.36 your data directory.

The new host should see your Gluster-backed storage domains, and you should be able to run VMs on both hosts and migrate them back and forth. To take the next step and press local storage on your new node into service, the steps are pretty similar to those we used to create our first Gluster volumes.

First, though, we have to run the command “gluster peer probe NEW_HOST_HOSTNAME” from the engine server to get the engine and its new buddy hooked up Glusterwise (this is another of the wrinkles I hope to see ironed out soon, taken care of automatically in the background).
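In other words, from the engine server (with a quick check afterward that the new peer shows up as connected):

# gluster peer probe NEW_HOST_HOSTNAME
# gluster peer status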

We can create a new Gluster volume, data1, of the type Replicate. This volume type requires at least two bricks, and we’ll create one in the /data directory of our engine, and one in the /data directory of our node. This works just the same as with the first Gluster volume we set up, just make sure that when adding bricks, you select the correct server in the drop down menu:

Just as before, we have to return to the command line to chown -R 36.36 /data on both of our machines to set the permissions correctly, and start the volumes we’ve created.

On my test setup, I created a second data domain, named data1, stored on the replicated Gluster domain, with the storage path set to localhost:/data1, on the rationale that VM images stored on the data1 domain would stay in sync across the pair of hosts, enabling either of my hosts to tap local storage for running a particular VM image. But I’m a newcomer to Gluster, so consult the documentation for more clueful Gluster guidance.

Up and Running with oVirt, 3.1 Edition

Update: I’ve written an updated version of this guide for oVirt 3.2.

Last February or so, I wrote a post about getting up and running with oVirt, the open source virtualization management project, on a single test machine. Various things have changed since then, such as a shiny new oVirt 3.1 release, so I’m going to update the process in this post.

What you need:

A test machine, ideally an x86_64 system with multiple cores, hardware virtualization extensions and plenty of RAM (like 4GB or more). The default OS for oVirt 3.1 is Fedora 17, and that’s what I’ll be writing about here. Your test machine must have a host name that resolves properly on your network, whether you’re setting that up in a local dns server, or in the /etc/hosts file of any machine you expect to access your test machine from.
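If you go the /etc/hosts route, a single line like this one (the address and names are just placeholders) on the test machine and on any client machines is enough:

192.168.1.100   ovirt.example.com ovirt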

UPDATE: For my Fedora oVirt installs, I’ve been using a minimal install of Fedora, which is an option if you start from the DVD or network install images. I interact with my minimal installs via ssh. If you’re using a minimal install with ssh, my instructions work just fine. However, if you start from the default Fedora LiveCD media, you’ll need to take a couple of extra steps. You must disable NetworkManager: (sudo systemctl stop NetworkManager.service && sudo systemctl disable NetworkManager.service), you must enable sshd: (sudo systemctl start sshd && sudo systemctl enable sshd), and then reboot for good measure before proceeding with the rest of the steps.

 (BUG NOTE: With the latest Fedora 17 kernel, I’m hitting https://bugzilla.redhat.com/show_bug.cgi?id=845660, preventing nfs domains from attaching, so for now, you’ll want to run a previous fedora kernel. (BUG NOTE NOTE: This bug, at long last, is just about squashed. Stay tuned.))

 

The package vdsm-4.10.0-10 squashed the above bug dead. Make sure you’re up to date with it to avoid issues with post-3.5 Fedora kernels.

(A NEW BUG NOTE: There’s a new, 3.2 version of ovirt-engine-sdk in the Fedora 17 update repo. The oVirt 3.1 packages that depend on the sdk don’t call specifically for version 3.1, but they appear not to work with 3.2. For now, you must downgrade to the 3.1 version of the sdk in order for the all-in-one installer and other features to work properly: “yum downgrade ovirt-engine-sdk” I’ve filed a bug, here: https://bugzilla.redhat.com/show_bug.cgi?id=869457 — you can cc yourself on the bug for progress updates.)

 

All-in-One Install:

oVirt 3.1 now includes an installer plugin for setting up the sort of single machine installation I wrote about previously. It’s good for testing out oVirt, and if you want to expand from your single machine install to cover additional nodes and storage, you can do that. Read on for the steps involved, and/or watch this handy screencast I made of the process:

[youtube: http://www.youtube.com/watch?v=Aq3ctFhBIhk]

1. Install the ovirt-release package on your Fedora 17 machine: “yum install http://www.ovirt.org/releases/ovirt-release-fedora.noarch.rpm”

2. Install the ovirt-engine all-in-one package: “yum install ovirt-engine-setup-plugin-allinone”

2a. As pointed out by oVirt community member Adrián, in the comments below, you can ensure that the install script allows enough time for the host to add itself by editing “/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py” to make the “waitForHostUp” timeout larger, like so:

def waitForHostUp():
    utils.retry(isHostUp, tries=40, timeout=300, sleep=5)

3. Run engine-setup: “engine-setup” and answer all the questions.

I’ve found that the all-in-one installer sometimes times out during the install process. If the script times out during the final “AIO: Adding Local host (This may take several minutes)” step, you can proceed to the web admin console to complete the process. If it times out at an earlier point, like waiting for the jboss-as server to start, you should run “engine-cleanup” and then re-run “engine-setup”.

4. When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, take a look at the “Storage” and “Hosts” tabs. If the all-in-one process completed, you should see a host named “local_host” with a status of “Up” under Hosts, and you should see a storage domain named “local_host-Local” under “Storage.”

If your local_host is still installing, you’ll need to wait for it to finish before proceeding. You should be able to view its progress from the events panel at the bottom of the console interface. Once the host is finished installing, click on your “local_host” and hit the “Maintenance” link to put it into maintenance mode. Once your host is in maintenance mode, you’ll be able to click on the “Configure Local Storage” link, where you enter the same local storage path you entered into the engine-setup script, and then hit “OK.”

5. Once the configure local storage process is complete (whether this was taken care of during engine-setup, or if you had to do it manually in step 4) click on the storage tab and highlight the iso domain you created during the setup-script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to, uh, activate it.

6. Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VMs’ virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically, which looks like: “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
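Roughly, the manual route looks like the sketch below; the engine host name, export path, and mount point are placeholders, the UUID bits are the same placeholders as above, and I chown the image to 36:36 (vdsm:kvm) so the engine can read it:

# mount -t nfs YOUR_ENGINE_HOSTNAME:/path/to/iso/share /mnt/tmp-iso
# cp NAME_OF_YOUR_ISO.iso /mnt/tmp-iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/
# chown 36:36 /mnt/tmp-iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso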

Once you’re up and running, you can begin installing VMs. For your viewing pleasure, here’s another screencast, about creating VMs on oVirt:

[youtube: http://www.youtube.com/watch?v=C4gayV6dYK4]

Beyond All in One (or skipping it altogether):

Installing: A “regular” multi-machine install of oVirt works in pretty much the same way, except that in step two, you simply run “yum install ovirt-engine”, and during the “engine-setup” process, you won’t be asked about installing VDSM or a local data domain on your engine. I typically skip creating an iso domain on my engine, as I use a separate NAS device for my iso domain needs.

The local data center, cluster and storage domain created as part of the all-in-one installation option are only accessible to the virtualization host installed locally on the engine. Shifting to a multi-machine setup involves moving that local host to the Default datacenter and cluster, which starts with putting the host into maintenance mode, clicking edit, and switching the Data Center and Cluster values to “Default” (or to another, non-local set of data center and cluster values).

Hosts: Once the setup script is finished, you can head over to the web admin console to add hosts and storage domains. oVirt hosts can be either regular Fedora 17 boxes or machines installed with oVirt Node. In either case, you add one of these machines as an oVirt host by clicking “New” under the “Hosts” tab in the web admin console, and providing a name, IP address (or host name) and root password for your host-to-be, and clicking OK. A dialog will complain about configuring power management, but it’s not strictly required.

When adding an oVirt Node-based system as a host, you can also provide the ovirt-engine address and admin password in the admin interface of the node, which will add the node to your ovirt-engine server, pending approval through the web admin console.

Storage: A multi-machine setup requires a shared storage domain, such as one backed by NFS or iSCSI. Setting up an NFS storage domain involves clicking “New Domain” on the “Storage” tab, giving the new data domain a name and configuring its export path. Setting up an iSCSI domain is similar, but involves entering the IP address of your iSCSI target, discovering available LUNs, and selecting one to use.

When Things Go Wrong:

A few things to do/check when things go wrong.

1. Put selinux into permissive mode: “setenforce 0” I run my systems with selinux enabled, but there are sometimes selinux-related bugs. Putting your test system into permissive mode will get you past the errors.

2. Check the logs:

  • ovirt-engine install log lives at /var/log/ovirt-engine/engine-setup*.log
  • jboss app server logs live at /var/log/ovirt-engine/boot.log and /var/log/ovirt-engine/server.log
  • ovirt-engine logs live at /var/log/ovirt-engine/engine.log — you can tail -f /var/log/ovirt-engine/engine.log to watch what the engine is doing
  • vdsm logs live (on each virt host) at /var/log/vdsm/vdsm.log — you can watch these to see what’s going on with individual virt hosts

3. Visit us at #ovirt on OFTC. My handle there is jbrooks. If you don’t get an answer there, send a message to users@ovirt.org.

Faking It:

I mentioned right at the top that if you want to test oVirt virtualization, you need a machine with hardware virtualization extensions. The oVirt management engine can live happily within a VM, but for hosting VMs, you need those extensions.

While most physical machines these days come with those extensions, virtual machines don’t have them. There’s such a thing as nested KVM virtualization, but it’s tricky to set up and pretty unstable even when you do get it set up.

There is a way to test out oVirt without hardware virtualization extensions, but the catch is that you can’t actually run any VMs on one of these “fake” installs. Why bother? Well, there’s a lot to test and see in oVirt that falls short of running VMs–I made my whole installing-oVirt howto video on a VM running inside of my real oVirt rig, for instance. You can get a feel for installing hosts, configuring storage, and managing Gluster volumes (a topic I haven’t covered here, but will, soon, in another post; till then, see here for more info on oVirt with Gluster).

For the all-in-one setup instructions above, right after step 2:

  • install the “fake qemu” package (yum install vdsm-hook-faqemu)
  • edit /etc/vdsm/vdsm.conf, changing line # fake_kvm_support = false to fake_kvm_support = true
  • replace the contents of the of /usr/share/vdsm-bootstrap/vds_bootstrap.py (it’ll be there post step 2) with the file at http://gerrit.ovirt.org/cat/5611%2C3%2Cvds_bootstrap/vds_bootstrap.py%5E0
  • continue to step 3

That vds_bootstrap.py step shouldn’t be required, and I’m going to file a bug about it as soon as I finish this post. For more information on this topic, see: http://wiki.ovirt.org/wiki/Vdsm_Developers#Fake_KVM_Support.

If you’re trying to configure a separate fake host, for now, you’ll need to do it on a regular vdsm (not oVirt Node) host, though this should soon change. But, for your regular host, before trying to add the host through the oVirt web admin console:

  • run “yum install vdsm-hook-faqemu vdsm”
  • edit /etc/vdsm/vdsm.conf, changing line # fake_kvm_support = false to fake_kvm_support = true
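If you’d rather script that vdsm.conf change (here, or back in step 2 of the all-in-one instructions), a sed one-liner along these lines should do it; double-check that the commented-out line in your vdsm.conf matches before running it:

# sed -i 's/^# fake_kvm_support = false/fake_kvm_support = true/' /etc/vdsm/vdsm.conf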

Either way, you’ll need that modded /usr/share/vdsm-bootstrap/vds_bootstrap.py file on your engine, and you only have to change this file once, until/unless a future package update restores the faqemu-ignorant file.

How to Get Up and Running with oVirt

NOTE: The most recent version of this howto, for oVirt 4.1, lives HERE.

As a fan both of x86 virtualization and of open source software, I long wondered when the “Linux of virtualization” would emerge. Maybe I should say instead, the GNU/Linux of virtualization, because I’m talking about more than just a kernel for virtualization — we’ve had those for a while now, in the forms of Xen and of KVM. Rather, I’ve been looking for the virtualization project that’ll do to VMware’s vSphere what Linux-based operating systems have done to proprietary OS incumbents: shake up the market, stoke innovation, and place the technology in many more people’s hands.

Now, I may just be biased — I work for one of the companies trying to give this technology away to those who want it (and sell it to those who’re looking for support) — but the time for that Linux of virtualization has finally come. Last week, the oVirt Project shipped its first release since the source code for the project’s Java-based management server went public last November. After having toiled through building and configuring oVirt back in November, I’m happy to report that the process has gotten much, much simpler. Plenty of work remains to be done, particularly around supporting multiple Linux distributions. However, if you have a reasonably beefy machine to test with, you can be up and running in no time. Here’s a step-by-step guide to installing a single server oVirt test rig:

Step one, get a machine with Intel VT or AMD-V hardware extensions, and at least 4GB of RAM. As with all virtualization, the more RAM you have, the better, but 4GB will do for a test rig.

Step two, grab a Fedora 16 x86_64 install disc and install Fedora. Also, you’ll want to have a client system capable of accessing the spice-based console of your VM–for now, Fedora’s your best bet there as well. (update: I did some spice-xpi packaging for Ubuntu 11.10 and openSUSE 12.1) You can access oVirt systems via VNC, as well, though that path is rougher around the edges right now.

(An aside: right now, oVirt is most closely aligned with Fedora, as the only current downstream distribution is Red Hat’s RHEV. However, getting oVirt into as many distros as possible is a priority for the project, so let’s hurry up and install this so we can get to work on Ubuntufication and openSUSEification and whatnot!)

For my Fedora 16 test machine, I went with the minimal install option, and got rid of the separate /home partition that the Fedora installer creates by default, leaving that space instead to the root partition. For networking, I stuck to dhcp.

After installing F16, start the network, set it to start in the future by default, and see what your IP address is:

service network start && chkconfig network on && ifconfig -a

From there, ssh into your machine, where it’s easier to cut and paste directions from the Web. Since I installed my system from the Fedora DVD, I run yum update to install the ~79 updates that have been released since. (And remind myself for the 1000th time to look into creating a local Fedora repository.)

Next, install wget (the minimal install doesn’t come with it) and grab the repository file for oVirt Stable:

yum install -y wget && wget http://www.ovirt.org/releases/stable/fedora/16/ovirt-engine.repo -P /etc/yum.repos.d/

(I’ve been setting up my test installs using the root user, if you’re logged in as a regular user, use sudo as needed)

Then, install ovirt-engine, the management server for oVirt:

yum install -y ovirt-engine

(on my minimal install, this step pulled in 100 packages)

Next, run the setup script for oVirt Engine, cleverly tucked away under the name:

engine-setup

The script asks a series of questions, and it’s safe to stick with the defaults. The script will ask for your machine’s fully-qualified domain name, and suggest its host name by default. If the name doesn’t resolve properly, the script will ask if you’re sure you want to proceed. It’s OK to proceed anyway — if you run into trouble you can work around it by modifying /etc/hosts, and for this single server config, well, your server knows where to find itself. Choose NFS as the default storage type, and let the script create an NFS iso share for you. I chose the path /mnt/iso and the name ‘iso’.

[screenshot: ovirt-setup]

Type yes to proceed. When the script finishes, it’ll tell you where to reach the ovirt web interface, at port 8080 or 8443 of your management server. Before we head over there, though, let’s do a bit more storage configuration.

In keeping with the all-in-one theme of this walkthrough, we’re going to create three nfs shares on our management server: one for hosting the iso images from which we’ll install VMs; one for hosting our VMs’ hard disk images; and one for hosting a location to which we can export VMs images we may want to move between data domains. If you let the engine-setup script create an nfs share for you, you’ll see this line in /etc/exports:

/mnt/iso 0.0.0.0/0.0.0.0(rw) #rhev installer

Create two more like it:

/mnt/data 0.0.0.0/0.0.0.0(rw)
/mnt/export 0.0.0.0/0.0.0.0(rw)

Then head over to /mnt to create the data and export directories:

cd /mnt && mkdir data export

All three of our storage folders need to be owned by user vdsm and group kvm. The iso folder that the engine-setup script created is already owned by vdsm:kvm — the data and export directories we created need to match that:

chown vdsm:kvm data export

Another NFS configuration bit here. oVirt wants to mount its NFS shares in v3, not v4. You can ensure this either by disabling nfs v4 on the server side or on the client side, as described in the oVirt wiki, here. I’ve been disabling NFSv4 on my ovirt-engine boxes by adding this line to /etc/sysconfig/nfs, and then restarting the service:

NFS4_SUPPORT="no"

systemctl restart nfs-server.service

Now we have a oVirt Engine management server and three NFS shares, and we’re ready to add a host to handle the compute. Since this is a single-box install, we’re going to configure our management server as a virtualization host. This step is based on the wiki page at http://www.ovirt.org/wiki/Installing_VDSM_from_rpm. First, we install bridge-utils and create a network bridge:

yum install -y bridge-utils

vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt

This is the contents of my bridge config file:

DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=dhcp
NM_CONTROLLED="no"

Then, to the config file for my network adapter, I add the line:

BRIDGE=ovirtmgmt
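For example, the adapter’s config file might end up looking something like this (the device name em1 is just a placeholder for whatever your NIC is called; the IP configuration lives on the bridge now, not the adapter):

DEVICE=em1
ONBOOT=yes
BRIDGE=ovirtmgmt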

And then, restart the network:

service network restart

(Again, this is on my minimal install. If you’re using a Fedora machine with NetworkManager enabled, you should also disable NM. Check the wiki for more info.)

Now let’s hit the oVirt Engine administrator portal, at port 8080 or 8443 of your engine machine. If your server’s host name doesn’t resolve properly, you can add an entry in the /etc/hosts file of your client to route you in the right direction. Here’s what you should see:

[screenshot: hello_engine]

Click one of the administrator portal links, log in with the user name “admin” and the password you gave the engine-setup script. Once you’re logged in, click the “Hosts” tab, then click “New” to add a host. Give your host a name, and enter its address or host name, and the machine’s root password:

[screenshot: add_host]

Once you click OK, a dialog box will inform you that you’ve not configured power management–that’s all right, just click through that.

Now, at the bottom of the screen, you can click on the up/down arrows next to “Events” to expand the events dialog and watch your management server configure itself as a host. You’ll see your server ssh in to itself, and run a bootstrap script that will install everything it needs. The last step in the script reboots the host, so if you click back to check on your progress and see “Error: A Request to the Server failed with the following Status Code: 0” that’s probably a good sign. :) (If something goes wrong during the process, you’ll see it in this events area. If the event tells you to look at the log, start with the engine log at /var/log/ovirt-engine/engine.log. Hopefully, the process will just crank to completion without incident.)

Once your server comes back from its reboot, you ought to be able to log in to the admin portal, click on “Hosts,” and see a happy green arrow indicating that your host is Up. Once your host is up, it’s time to hook up our storage. Click the “Storage” tab, then “New Domain.”

[screenshot: storage1]

Give your new data domain a name (I go with “data”), choose your host from the drop-down box, and then enter the address and mount point of your NFS data share in the “Export Path” field, and click OK. Your server should mount the share and, shortly, you should see another happy green Up arrow next to your data domain.

An oVirt data center needs an active data domain before you can attach or add iso or export domains, which is why the iso domain that the engine-setup script creates starts out unattached. With your data domain in the green, you can click on that iso share, and then, in the pane that appears below the domains list, click the Data Center tab, then “Attach,” and choose your default data center to attach the iso domain to. Next, click the “Default” entry in that same pane, and “Activate.”

Adding the export domain works in just the same way as adding the data domain, just make sure that you choose the export option from the “Domain Function / Storage Type” drop down menu.

Now, let’s add an iso image from which to install a VM. We do this from the command line, using the tool, engine-iso-uploader. On my test systems, I’ve used wget to fetch an iso image (in this example, the Fedora net install image) from the Internet to my oVirt Engine machine. From the directory where I’ve downloaded the image, I issue the command:

engine-iso-uploader -i iso upload Fedora-16-x86_64-netinst.iso

The tool asks me for my admin password, the same one I use to log in to the web console, and starts uploading the image to my iso domain, which I’ve named “iso.”  (For more engine-iso-uploader guidance, see “man engine-iso-uploader”)

Once the upload is finished, I’m ready to create my VM. Click the virtual machines tab in the web admin, click new desktop (or server), give the machine a name, set the memory size, and adjust the cores, if you want. The OS list is limited right now to the RHEL and Windows options officially supported by RHEV, but I’ve installed Fedora, Ubuntu and Windows 8 without any trouble. For my F16 install, I chose RHEL 6.x x86_64 from the list:

[screenshot: newvm]

After clicking OK in the new VM dialog, click on your new machine in the VM list, and in the secondary pane that appears below, give the VM a network interface by clicking on Network Interfaces, then New, then OK:

[screenshot: net1]

In the same way, give your VM a disk by clicking on Virtual Disks, New, enter a size, then OK:

[screenshot: disk1]

We’re ready to install our VM. With your VM selected, click the “Run Once” button, attach your install CD, bump up cd-rom in the boot sequence, and click OK:

[screenshot: run_once]

In order to access the console of our new VM, we’re going to need to install the Firefox extension for spice. From a Fedora 16 machine with Firefox installed, you can install the spice package with:

yum install -y spice-xpi

You may need to restart Firefox after installing the spice plugin, but once you’re up and running with it, you’ll be able to right-click on your VM and click “Console,” which will bring up the spice console for your machine. From here, install your OS normally. In the spice console, you can hit Shift-F11 to enter/exit full screen mode, and Shift-F12 to release your pointer if the console has captured it.

The configuration changes you make in the “Run Once” dialog are supposed to last just once, but I’ve found that they persist until you actually shut down the machine–rebooting it once your install is complete isn’t enough.

I think that’s it — we have an all-in-one oVirt test box, complete with NFS storage and a guest machine. From here, you can add additional hosts, based on other Fedora hosts or on the project’s stripped-down oVirt Node image. You can point your additional hosts at the NFS shares we created in this runthrough, or you can add new storage domains. Consult the oVirt Installation guide for more information on installing and configuring your oVirt environments. That’s enough for this blog post, check back soon for more material on oVirt, and if you’re interested in getting involved with the project, you can find all the mailing list, issue tracker, source repository, and wiki information you need here. On IRC, I’m jbrooks, ping me in the #ovirt room on OFTC or write a comment below and I’ll be happy to help you get up and running or get pointed in the right direction.