Up and Running with oVirt, 3.2 Edition

I’ve written an updated version of this howto for oVirt 3.3 at the Red Hat Community blog.

The latest version of the open source virtualization platform, oVirt, has arrived, which means it’s time for the third edition of my “running oVirt on a single machine” blog post. I’m delighted to report that this ought to be the shortest (and least-updated, I hope) post of the three so far.

When I wrote my first “Up and Running” post last year, getting oVirt running on a single machine was more of a hack than a supported configuration. Wrangling large groups of virtualization hosts is oVirt’s reason for being. oVirt is designed to run with its manager component, its virtualization hosts, and its shared storage all running on separate pieces of hardware. That’s how you’d want it set up for production, but a project that requires a bunch of hardware just for kicking the tires is going to find its tires un-kicked.

Fortunately, this changed in August’s oVirt 3.1 release, which shipped with an All-in-One installer plugin, but, as a glance at the volume of strikethrough text and UPDATE notices in my post for that release will attest, there were more than a few bumps in the 3.1 road.

In oVirt 3.2, the process has gotten much smoother, and should be as simple as setting up the oVirt repo, installing the right package, and running the install script. Also, there’s now a LiveCD image available that you can burn onto a USB stick, boot a suitable system from, and give oVirt a try without installing anything. The downsides of the LiveCD are its size (2.1GB) and the fact that it doesn’t persist. But, that second bit is one of its virtues, as well. The All in One setup I describe below is one that you can keep around for a while, if that’s what you’re after.

Without further ado, here’s how to get up and running with oVirt on a single machine:

HARDWARE REQUIREMENTS: You need a machine with x86-64 processors with hardware virtualization extensions. This bit is non-negotiable–the KVM hypervisor won’t work without them. Your machine should have at least 4GB of RAM. Virtualization is a RAM-hungry affair, so the more memory, the better. Keep in mind that any VMs you run will need RAM of their own.

It’s possible to run oVirt in a virtual machine–I’ve taken to testing oVirt on oVirt itself most of the time–but your virtualization host has to be set up for nested KVM for this to work. I’ve written a bit about running oVirt in a VM here.

SOFTWARE REQUIREMENTS: oVirt is developed on Fedora, and any given oVirt release tends to track the most recent Fedora release. For oVirt 3.2, this means Fedora 18. I run oVirt on minimal Fedora configurations, installed from the DVD or the netboot images. With oVirt 3.1, a lot of people ran into trouble installing oVirt on the default LiveCD Fedora media, largely due to conflicts with NetworkManager. When I tested 3.2, the installer script disabled NetworkManager on its own, but I had to manually enable sshd (sudo service sshd start && sudo chkconfig sshd on).

A lot of oVirt community members run the project on CentOS or Scientific Linux using packages built by Andrey Gordeev, and official packages for these “el6” distributions are in the works from the oVirt project proper, and should be available soon for oVirt 3.2. I’ve run oVirt on CentOS in the past, but right now I’m using Fedora 18 for all of my oVirt machines, in order to get access to new features like the nested KVM I mentioned earlier.

NETWORK REQUIREMENTS: Your test machine must have a host name that resolves properly on your network, whether you’re setting that up in a local dns server, or in the /etc/hosts file of any machine you expect to access your test machine from. If you take the hosts file editing route, the installer script will complain about the hostname–you can safely forge ahead.
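If you take the hosts file route, the entry is just a standard IP-to-name mapping; the address and names below are made-up placeholders, so swap in whatever fits your network:

192.168.1.50    ovirt32.example.com    ovirt32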

CONFIGURE THE REPO: Somewhat confusingly, oVirt 3.1 is already in the Fedora 18 repositories, but due to some packaging issues I’m not fully up-to-speed on, that version of oVirt is missing its web admin console. In any case, we’re installing the latest, 3.2 version of oVirt, and for that we must configure our Fedora 18 system to use the oVirt project’s yum repository.

sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm

SILENCING SELINUX (OPTIONAL): I typically run my systems with SELinux in enforcing mode, but it’s a common source of oVirt issues. Right now, there’s definitely one (now fixed), and maybe two SELinux-related bugs affecting oVirt 3.2. So…

sudo setenforce 0

To make this setting persist across reboots, edit the ‘SELINUX=’ line in /etc/selinux/config to read ‘SELINUX=permissive’.
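If you’d rather script that change than open an editor, a one-liner along these lines should do it (assuming your config file still has the stock ‘SELINUX=enforcing’ line):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config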

INSTALL THE ALL IN ONE PLUGIN: The package below will pull in everything we need to run oVirt Engine (the management server) as well as turn this management server into a virtualization host.

sudo yum install ovirt-engine-setup-plugin-allinone

RUN THE SETUP SCRIPT: Run the script below and answer all the questions. In almost every case, you can stick to the default answers. Since we’re doing an All in One install, I’ve tacked the relevant argument onto the command below. You can run “engine-setup -h” to check out all available arguments.

One of the questions the installer will ask deals with whether and which system firewall to configure. Fedora 18 now defaults to Firewalld rather than the more familiar iptables. In the handful of tests I’ve done with the 3.2 release code, I’ve had both success and failure configuring Firewalld through the installer. On one machine, throwing SELinux into permissive mode allowed the Firewalld config process to complete, and on another, that workaround didn’t work.

If you choose the iptables route, make sure to disable Firewalld and enable iptables before you run the install script (sudo service firewalld stop && sudo chkconfig firewalld off && sudo service iptables start && sudo chkconfig iptables on).

sudo engine-setup --config-allinone=yes

TO THE ADMIN CONSOLE: When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, click the “Storage” tab and highlight the iso domain you created during the setup-script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to activate it.

Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run, “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically that looks like: “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
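For the record, the mount-and-copy route looks roughly like this on my systems; the engine host name, export path and UUID directory below are placeholders for whatever your own setup uses:

sudo mkdir -p /mnt/iso
sudo mount -t nfs YOUR_ENGINE_HOSTNAME:/path/to/iso/export /mnt/iso
sudo cp NAME_OF_YOUR_ISO.iso /mnt/iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/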

Once you’re up and running, you can begin installing VMs. I made the “creating VMs” screencast below for oVirt 3.1, but the process hasn’t changed significantly for 3.2:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4&HTML5=1]

oVirt on oVirt: Nested KVM Fu

I’m a big fan of virtualization — the ability to take a server and slice it up into a bunch of virtual machines makes trying out and writing about software much, much easier than it’d be in a one-instance-per-server world.

Things get tricky, however, when the software you want to try out is itself intended for hosting virtual machines. These days, all of the virtualization work I do centers around the KVM hypervisor, which relies on hardware extensions to do its thing.

Over the past year or so, I’ve dabbled in Nested Virtualization with KVM, in which the KVM hypervisor passes its hardware-assisted prowess on to guest instances to enable those guests to host VMs of their own. When I first dabbled in this, ten or so months ago, my nested virtualization only sort-of worked — my VMs proved unstable, and I shelved further investigation for a while.

Recently, though, nested KVM has been working pretty well for me, both on my notebook and on some of the much larger machines in our lab. In fact, with the help of a new feature slated for oVirt 3.2, I’ve taken to testing whole oVirt installs, complete with live migration between hosts, all within a single physical oVirt host. Pretty sweet, since oVirt forms both my main testing platform and one of the primary projects I look to test.

All my tests with nested KVM have been with Intel hardware, because that’s what I have in my labs, but it’s my understanding that nested KVM works with AMD processors as well, and that the feature is actually more mature on that gear.

To join in on the nested fun, you must first check to see if nested KVM support is enabled on your machine by running:

cat /sys/module/kvm_intel/parameters/nested

If the answer is “N,” you can enable it by running:

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf

After adding that kvm-intel.conf file, reboot your machine, after which “cat /sys/module/kvm_intel/parameters/nested” should return “Y.”

I’ve used nested KVM with virt-manager, the libvirt front-end that ships with most Linux distributions, including my own distro of choice, Fedora. With virt-manager, I configure the VM I want to use as a hypervisor-within-a-hypervisor by clicking on the “Processor” item in the VM details view, and clicking the “Copy host configuration” button to ensure that my guest instance boots with the same set of CPU features offered by my host processor. For good measure, I expand the “CPU Features” menu list and ensure that the feature “vmx” is set to “require.”

[screenshot: virt-manager-nested]
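If you’re tweaking a guest by hand with virsh edit rather than through virt-manager, the same setting boils down to a snippet like this in the domain XML. Treat it as a sketch; the surrounding <cpu> details will vary with your hardware and libvirt version:

<cpu mode='host-model'>
  <feature policy='require' name='vmx'/>
</cpu>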

Not too taxing, but it turns out that with oVirt, enabling nested virtualization in guests is even easier, thanks to VDSM hooks. VDSM hooks are scripts executed on the host when key events occur. The version of VDSM that will accompany oVirt 3.2 includes a nestedvt hook that does exactly what I described above — it runs a check for nested KVM support, and if that support is found, it adds the require vmx element to your VM’s definition.

I’ve tested this both with oVirt 3.2 alpha, and with the current, oVirt 3.1 version. In the latter case, I simply installed the vdsm-hook-nestedvt package from oVirt’s nightly repository, and it worked fine with the current stable version of vdsm.

[screenshot: ovirtonovirt]

I mentioned above that I’ve been able to test oVirt on oVirt in this way, and performance hasn’t been remarkably bad, but I wanted to get a better handle on the performance hit of nesting. I settled, unscientifically, on running mock builds of the ovirt-engine source package, a real life task that involves CPU and I/O work.

I ran the build operation four times on a VM running under oVirt, and four times on a VM running under an oVirt instance which was itself running under oVirt. I outfitted both the nested and the non-nested VM with 4GB of RAM and two virtual cores. I was using the same physical machine for both VMs, but I ran the tests one at a time, rather than in parallel.

The four builds on the “real” VM averaged out to 14 minutes, 15 seconds, and the build quartet on the nested VM averaged 28 minutes, 18 seconds. So, I recorded a definite performance hit with the nested virtualization, but not a big enough hit to dissuade me from further nested KVM exploration.

Speaking of further exploration, I’m very much looking forward to attending the next oVirt Workshop later this month, which will take place at NetApp’s Sunnyvale campus from Jan 22-24.

If you’re in the Bay Area and you’d like to learn more about oVirt, I’d love to see you there. The event is free of charge (much like oVirt itself) and all the agenda and registration details are available on the oVirt project site at http://www.ovirt.org/NetApp_Workshop_January_2013. Registration closes on Jan 15th, so get on it!

engine-iso-uploader wrinkles

I’ve been installing oVirt 3.1 on some shiny new lab equipment, and I came across a pair of interesting snags with engine-iso-uploader, a tool you can use to upload iso images to your oVirt installation.

I installed the tool on an F17 client machine and festooned the command with the many arguments required to send an iso image off through the network to the iso domain of my oVirt rig. The command failed with the message, “ERROR:root:mount.nfs: Connection timed out.”

I had an idea what might be wrong. The iso domain I set up is hosted by Gluster, and exposed via Gluster’s built-in NFS server, which only supports NFSv3. Fedora 17 is set by default to require NFSv4, and when I changed /etc/nfsmount.conf to make Nfsvers=3, I got around that NFS error — only to hit another, weirder error: “ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on iso1 as Read/Write.”

Vdsm is the daemon that runs oVirt virtualization hosts, so vdsm needs to be able to read and write to the storage domains. I was surprised, though, that the client machine I was using to upload an iso had to have its own vdsm user to do the job. Anyway, I created the vdsm user with the 36.36 IDs, and the command worked.
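For reference, creating that user on a plain client box amounts to something like the commands below; on Fedora, GID 36 normally belongs to the kvm group already, so skip the groupadd if a group with that GID exists:

sudo groupadd -g 36 kvm
sudo useradd -u 36 -g 36 -s /sbin/nologin vdsm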

Engine-iso-uploader does its business with NFS by default, but there’s another option to upload via ssh, which, I imagine, would avoid the need for that vdsm user. I gave it a quick try, hit a new error, ERROR: Error message is “unable to test the available space on /iso1”, and shelved further messing around w/ the tool, for now.

My favored method for getting iso images into my iso domain remains mounting the NFS share and dropping them in there. What I’d really like to see is a way to do this straight from the oVirt web admin console.


oVirt 3.1, Glusterized

One of the cooler new features in oVirt 3.1 is the platform’s support for creating and managing Gluster volumes. oVirt’s web admin console now includes a graphical tool for configuring these volumes, and vdsm, the service responsible for controlling oVirt’s virtualization nodes, has a new sibling, vdsm-gluster, for handling the back end work.

Gluster and oVirt make a good team — the scale out, open source storage project provides a nice way of weaving the local storage on individual compute nodes into shared storage resources.

To demonstrate the basics of using oVirt’s new Gluster functionality, I’m going to take the all-in-one engine/node oVirt rig that I stepped through recently and convert it from an all-in-one node with local storage to a multi-node-ready configuration with shared storage provided by Gluster volumes that tap the local storage available on each of the nodes. (Thanks to Robert Middleswarth, whose blog posts on oVirt and Gluster I relied on while learning about the combo.)

The all-in-one installer leaves you with a single machine that hosts both the oVirt management server, aka ovirt-engine, and a virtualization node. For storage, the all-in-one setup uses a local directory for the data domain, and an NFS share on the single machine to host an iso domain, where OS install images are stored.

We’ll start the all-in-one to multi-node conversion by putting our local virtualization host, local_host, into maintenance mode by clicking the Hosts tab in the web admin console, clicking the local_host entry, and choosing “Maintenance” from the Hosts navigation bar.

Once local_host is in maintenance mode, we click edit, switch its Data Center and Cluster values to Default using the drop down menus in the dialog box, and then hit OK to save the change.

This is assuming that you stuck with NFS as the default storage type while running through the engine-setup script. If not, head over to the Data Centers tab and edit the Default data center to set “NFS” as its type. Next, head to the Clusters tab, edit your Default cluster, fill the check box next to “Enable Gluster Service,” and hit OK to save your changes. Then, go back to the Hosts tab, highlight your host, and click Activate to bring it back from maintenance mode.

Now head to a terminal window on your engine machine. Fedora 17, the OS I’m using for this walkthrough, includes version 3.2 of Gluster. The oVirt/Gluster integration requires Gluster 3.3, so we need to configure a separate repository to get the newer packages:

# cd /etc/yum.repos.d/
# wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/fedora-glusterfs.repo

Next, install the vdsm-gluster package, restart the vdsm service, and start up the gluster service:

# yum install vdsm-gluster
# service vdsmd restart
# service glusterd start

The all-in-one installer configures an NFS share to host oVirt’s iso domain. We’re going to be exposing our Gluster volume via NFS, and since the kernel NFS server and Gluster’s NFS server don’t play nicely together, we have to disable the former.

# systemctl stop nfs-server.service && systemctl disable nfs-server.service

Through much trial and error, I found that it was also necessary to restart the wdmd service:

# systemctl restart wdmd.service

In the move from v3.0 to v3.1, oVirt dropped its NFSv3-only limitation, but that requirement remains for Gluster, so we have to edit /etc/nfsmount.conf and ensure that Defaultvers=3, Nfsvers=3, and Defaultproto=tcp.
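When you’re done, the relevant lines in /etc/nfsmount.conf should read:

Defaultvers=3
Nfsvers=3
Defaultproto=tcp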

Next, edit /etc/sysconfig/iptables to add the firewall rules that Gluster requires. You can paste the rules in just before the reject lines in your config.

# glusterfs
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT

Then restart iptables:

# service iptables restart

Next, decide where you want to store your gluster volumes — I store mine under /data — and create this directory if need be:

# mkdir /data

Now, head back to the oVirt web admin console, visit the Volumes tab, and click Create Volume. Give your new volume a name, and choose a volume type from the drop down menu. For our first volume, let’s choose Distribute, and then click the Add Bricks button. Add a single brick to the new volume by typing the path you desire into the Brick Directory field, clicking Add, and then OK to save the changes.

Make sure that the box next to NFS is checked under Access Protocols, and then click OK. You should see your new volume listed — highlight it and click Start to start it up. Follow the same steps to create a second volume, which we’ll use for a new ISO domain.

For now, the Gluster volume manager neglects to set brick directory permissions correctly, so after adding bricks on a machine, you have to return to the terminal and run chown -R 36.36 /data (assuming /data is where you are storing your volume bricks) to enable oVirt to write to the volumes.

Once you’ve set your permissions, return to the Storage tab of the web admin console to add data and iso domains backed by the volumes we’ve created. Click New Domain, choose Default data center from the data center drop down, and Data / NFS from the storage type drop down. Fill the export path field with your engine’s host name and the name of the Gluster volume you created for the data domain. For instance: “demo1.localdomain:/data”

Wait for the data domain to become active, and repeat the above process for the iso domain. For more information on setting up storage domains in oVirt 3.1, see the quick start guide.

Once the iso domain comes up, BAM, you’re Glusterized. Now, compared to the default all-in-one install, things aren’t too different yet — you have one machine with everything packed into it. The difference is that your oVirt rig is ready to take on new nodes, which will be able to access the NFS-exposed data and iso domains, as well as contribute some of their own local storage into the pool.

To check this out, you’ll need a second test machine, with Fedora 17 installed (though you can recreate all of this on CentOS or another Enterprise Linux starting with the packages here). Take your F17 host (I start with a minimal install), install the oVirt release package, download the same fedora-glusterfs.repo we used above, and make sure your new host is accessible on the network from your engine machine, and vice versa. Also, the bug preventing F17 machines running a 3.5 or higher kernel from attaching to NFS domains isn’t fixed yet, so make sure you’re running a 3.3 or 3.4 version of the kernel.

Head over to the Hosts tab on your web admin console, click New, supply the requested information, and click OK. Your engine will reach out to your new F17 machine, and whip it into a new virtualization host. (For more info on adding hosts, again, see the quick start guide.)

Your new host will require most of the same Glusterizing setup steps that you applied to your engine server: make sure that vdsm-gluster is installed, edit /etc/nfsmount.conf, add the gluster-specific iptables rules and restart iptables, create and chown 36.36 your data directory.
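Condensed into commands, that prep work on the new host looks roughly like this (the nfsmount.conf edits and iptables rules are the same ones shown earlier in this post):

# yum install vdsm-gluster
(edit /etc/nfsmount.conf and /etc/sysconfig/iptables as described above)
# service iptables restart
# mkdir /data
# chown -R 36.36 /data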

The new host should see your Gluster-backed storage domains, and you should be able to run VMs on both hosts and migrate them back and forth. To take the next step and press local storage on your new node into service, the steps are pretty similar to those we used to create our first Gluster volumes.

First, though, we have to run the command “gluster peer probe NEW_HOST_HOSTNAME” from the engine server to get the engine and its new buddy hooked up Glusterwise (this is another of the wrinkles I hope to see ironed out soon, taken care of automatically in the background).

We can create a new Gluster volume, data1, of the type Replicate. This volume type requires at least two bricks, and we’ll create one in the /data directory of our engine, and one in the /data directory of our node. This works just the same as with the first Gluster volume we set up; just make sure that when adding bricks, you select the correct server in the drop down menu.

Just as before, we have to return to the command line to chown -R 36.36 /data on both of our machines to set the permissions correctly, and start the volumes we’ve created.

On my test setup, I created a second data domain, named data1, stored on the replicated Gluster domain, with the storage path set to localhost:/data1, on the rationale that VM images stored on the data1 domain would stay in sync across the pair of hosts, enabling either of my hosts to tap local storage for running a particular VM image. But I’m a newcomer to Gluster, so consult the documentation for more clueful Gluster guidance.

Too Fast, Too Slow

Yesterday I removed Fedora 17 from the server I use for oVirt testing, mainly because I’ve been experiencing random reboots on the server, and I haven’t been able to figure out why. I’m pretty sure I wasn’t having these issues on Fedora 16, but I can’t go back to that release because the official packages for oVirt are built only for F17. There are, however, oVirt packages built for Enterprise Linux (aka RHEL and its children), and I know that some in the oVirt community have been running these packages with success.

So, I figured I’d install CentOS 6 on my machine and either escape my random reboots or, if the reboots continued, learn that there’s probably something wrong with my hardware. Plus, I’d escape a second bug I’ve been experiencing with Fedora 17, the one in which a recent rebase to the Linux 3.5 kernel (F17 shipped originally with a 3.3 kernel) seems to have broken oVirt’s ability to access NFS shares, thereby breaking oVirt.

Installing oVirt 3.1 on CentOS 6 went very smoothly — the steps involved were pretty much the same as those for Fedora. Before I knew it, I was back up and running with a CentOS-based oVirt 3.1 rig just like my F17 one, complete with my F17 test server template and my F17 VMs for my in-progress gluster/ovirt integration writeup, all repatriated from my oVirt export domain.

However… all is not well.

My Fedora 17 VMs aren’t running normally on my new CentOS 6 host, and what I’m seeing reminds me of a bug I encountered several weeks ago when I first upgraded from oVirt 3.0 to the oVirt 3.1 beta. The solution came in the form of a bugfix from the qemu project upstream — that’s a real benefit of running a leading edge distro like Fedora — when issues are fixed upstream, you don’t have to wait forever for them to float along to you.

Also, the closer you are to upstream, the faster you get access to new features. Not long after updating my qemu to address the F16/F17 VM booting issue, I took to running qemu packages even closer to upstream, from the Fedora Virtualization Preview repository. The oVirt 3.1 management engine supports live snapshots, but requires at least qemu 1.1, which is slated for Fedora 18.

Of course, the downside of tracking the leading edge is that with frequent changes come frequent opportunities for breakage. The changes that don’t directly address your pain points are pure downside, like the NFS-disabling kernel rebase I mentioned earlier. Too fast versus too slow.

So what now?

I wasn’t experiencing these random reboots on my other F17 system — my Thinkpad X220, which I’ve pressed into service as a second oVirt node. I have this F17-based node hooked up to my el6 oVirt engine, and if I set my Fedora VMs to launch only on this node, they run just fine. This machine has only 8GB of RAM, though, and that limits how many VMs I can run on it. Also, since my F17 and el6 nodes are running different versions of qemu, live migration between them doesn’t work.

  • I could shift my in-progress ovirt/gluster testing to el6, VMs of which run just fine with the older qemu, but I’d prefer to keep testing with Fedora, and the newest code.
  • I could, instead of hitting the brakes and running el6 on my test server, hit the gas and throw F18 on there. Maybe that’d solve my random reboot issue, though I’m not sure if my disabled NFS travails would follow me forward.
  • I could figure out how to rebuild the new qemu packages on el6. I’ve started down this path already, but rpmbuild is voicing some complaints that seem related to systemd, which F17 uses and el6 does not.
  • I could find out that my random reboot problems weren’t the fault of F17 after all, which would send me poring over my hardware and possibly returning to F17.

For now, I’m going to play some more with updating my qemu on el6, while squeezing my F17 VMs into my smaller F17-based node to get this ovirt/gluster howto finished.

Then maybe I’ll take a long walk on the beach and meditate on the merits of too slow versus too fast in Linux distros, and ponder whether the Giants will sweep the Astros tonight.

Update: I was able to rebuild Fedora qemu 1.1 packages for CentOS. I commented out some systemd-dependent stuff from the spec file. I had to rebuild a couple of other packages, too, which I found in Fedora’s buildsystem. Now, my Fedora 17 VMs run well on my CentOS 6 oVirt host (which hasn’t randomly rebooted yet), and I can migrate VMs between it and my F17-based node.
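For the curious, the rebuild went roughly like the sketch below. Treat it as an outline rather than a recipe; the source package came from Fedora’s koji build system, and the spec edits amounted to commenting out the systemd-dependent sections that el6 can’t handle:

# yum install rpm-build yum-utils
# rpm -ivh qemu-1.1*.src.rpm
(comment out the systemd-dependent bits in ~/rpmbuild/SPECS/qemu.spec)
# yum-builddep ~/rpmbuild/SPECS/qemu.spec
# rpmbuild -ba ~/rpmbuild/SPECS/qemu.spec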

And the Giants won, too.


Up and Running with oVirt, 3.1 Edition

Update: I’ve written an updated version of this guide for oVirt 3.2.

Last February or so, I wrote a post about getting up and running with oVirt, the open source virtualization management project, on a single test machine. Various things have changed since then, such as a shiny new oVirt 3.1 release, so I’m going to update the process in this post.

What you need:

A test machine, ideally an x86_64 system with multiple cores, hardware virtualization extensions and plenty of RAM (like 4GB or more). The default OS for oVirt 3.1 is Fedora 17, and that’s what I’ll be writing about here. Your test machine must have a host name that resolves properly on your network, whether you’re setting that up in a local dns server, or in the /etc/hosts file of any machine you expect to access your test machine from.

UPDATE: For my Fedora oVirt installs, I’ve been using a minimal install of Fedora, which is an option if you start from the DVD or network install images. I interact with my minimal installs via ssh. If you’re using a minimal install with ssh, my instructions work just fine. However, if you start from the default Fedora LiveCD media, you’ll need to take a couple of extra steps. You must disable NetworkManager: (sudo systemctl stop NetworkManager.service && sudo systemctl disable NetworkManager.service), you must enable sshd: (sudo systemctl start sshd && sudo systemctl enable sshd), and then reboot for good measure before proceeding with the rest of the steps.

 (BUG NOTE: With the latest Fedora 17 kernel, I’m hitting https://bugzilla.redhat.com/show_bug.cgi?id=845660, preventing nfs domains from attaching, so for now, you’ll want to run a previous fedora kernel. (BUG NOTE NOTE: This bug, at long last, is just about squashed. Stay tuned.))


The package vdsm-4.10.0-10 squashed the above bug dead. Make sure you’re up to date with it to avoid issues with post-3.5 Fedora kernels.

(A NEW BUG NOTE: There’s a new, 3.2 version of ovirt-engine-sdk in the Fedora 17 update repo. The oVirt 3.1 packages that depend on the sdk don’t call specifically for version 3.1, but they appear not to work with 3.2. For now, you must downgrade to the 3.1 version of the sdk in order for the all-in-one installer and other features to work properly: “yum downgrade ovirt-engine-sdk” I’ve filed a bug, here: https://bugzilla.redhat.com/show_bug.cgi?id=869457 — you can cc yourself on the bug for progress updates.)


All-in-One Install:

oVirt 3.1 now includes an installer plugin for setting up the sort of single machine installation I wrote about previously. It’s good for testing out oVirt, and if you want to expand from your single machine install to cover additional nodes and storage, you can do that. Read on for the steps involved, and/or watch this handy screencast I made of the process:

[youtube:http://www.youtube.com/watch?v=Aq3ctFhBIhk]

1. Install the ovirt-release package on your Fedora 17 machine: “yum install http://www.ovirt.org/releases/ovirt-release-fedora.noarch.rpm”

2. Install the ovirt-engine all-in-one package: “yum install ovirt-engine-setup-plugin-allinone”

2a. As pointed out by oVirt community member Adrián, in the comments below, you can ensure that the install script allows enough time for the host to add itself by editing “/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py” to make the “waitForHostUp” timeout larger, like so:

def waitForHostUp():
    utils.retry(isHostUp, tries=40, timeout=300, sleep=5)

3. Run engine-setup: “engine-setup” and answer all the questions.

I’ve found that the all-in-one installer sometimes times out during the install process. If the script times out during the final “AIO: Adding Local host (This may take several minutes)” step, you can proceed to the web admin console to complete the process. If it times out at an earlier point, like waiting for the jboss-as server to start, you should run “engine-cleanup” and then re-run “engine-setup”.

4. When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, take a look at the “Storage” and “Hosts” tabs. If the all-in-one process completed, you should see a host named “local_host” with a status of “Up” under Hosts, and you should see a storage domain named “local_host-Local” under “Storage.”

If your local_host is still installing, you’ll need to wait for it to finish before proceeding. You should be able to view its progress from the events panel at the bottom of the console interface. Once the host is finished installing, click on your “local_host” and hit the “Maintenance” link to put it into maintenance mode. Once your host is in maintenance mode, you’ll be able to click on the “Configure Local Storage” link, where you enter the same local storage path you entered into the engine-setup script, and then hit “OK.”

5. Once the configure local storage process is complete (whether this was taken care of during engine-setup, or if you had to do it manually in step 4) click on the storage tab and highlight the iso domain you created during the setup-script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to, uh, activate it.

6. Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run, “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically that looks like: “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.

Once you’re up and running, you can begin installing VMs. For your viewing pleasure, here’s another screencast, about creating VMs on oVirt:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4]

Beyond All in One (or skipping it altogether):

Installing: A “regular” multi-machine install of oVirt works in pretty much the same way, except that in step two, you simply run “yum install ovirt-engine”, and during the “engine-setup” process, you won’t be asked about installing VDSM or a local data domain on your engine. I typically skip creating an iso domain on my engine, as I use a separate NAS device for my iso domain needs.

The local data center, cluster and storage domain created as part of the all-in-one installation option are only accessible to the virtualization host installed locally on the engine. Shifting to a multi-machine setup involves moving that local host to the Default datacenter and cluster, which starts with putting the host into maintenance mode, clicking edit, and switching the Data Center and Cluster values to “Default” (or to another, non-local set of data center and cluster values).

Hosts: Once the setup script is finished, you can head over to the web admin console to add hosts and storage domains. oVirt hosts can be either regular Fedora 17 boxes or machines installed with oVirt Node. In either case, you add one of these machines as an oVirt host by clicking “New” under the “Hosts” tab in the web admin console, and providing a name, IP address (or host name) and root password for your host-to-be, and clicking OK. A dialog will complain about configuring power management, but it’s not strictly required.

When adding an oVirt Node-based system as a host, you can also provide the ovirt-engine address and admin password in the admin interface of the node, which will add the node to your ovirt-engine server, pending approval through the web admin console.

Storage: A multi-machine setup requires a shared storage domain, such as one backed by NFS or iSCSI. Setting up an NFS storage domain involves clicking “New Domain” on the “Storage” tab, giving the new data domain a name and configuring its export path. Setting up an iSCSI domain is similar, but involves entering the IP address of your iSCSI target, discovering available LUNs, and selecting one to use.

When Things Go Wrong:

A few things to do/check when things go wrong.

1. Put selinux into permissive mode: “setenforce 0” I run my systems with selinux enabled, but there are sometimes selinux-related bugs. Putting your test system into permissive mode will get you past the errors.

2. Check the logs:

  • ovirt-engine install log lives at /var/log/ovirt-engine/engine-setup*.log
  • jboss app server logs live at /var/log/ovirt-engine/boot.log and /var/log/ovirt-engine/server.log
  • ovirt-engine logs live at /var/log/ovirt-engine/engine.log — you can tail -f /var/log/ovirt-engine/engine.log to watch what the engine is doing
  • vdsm logs live (on each virt host) at /var/log/vdsm/vdsm.log — you can watch these to see what’s going on with individual virt hosts

3. Visit us at #ovirt on OFTC. My handle there is jbrooks. If you don’t get an answer there, send a message to users@ovirt.org.

Faking It:

I mentioned right at the top that if you want to test oVirt virtualization, you need a machine with hardware virtualization extensions. The oVirt management engine can live happily within a VM, but for hosting VMs, you need those extensions.

While most physical machines these days come with those extensions, virtual machines don’t have them. There’s such a thing as nested KVM virtualization, but it’s tricky to set up and pretty unstable when you can set it up.

There is a way to test out oVirt without hardware virtualization extensions, but the catch is that you can’t actually run any VMs on one of these “fake” installs. Why bother? Well, there’s a lot to test and see in oVirt that falls short of running VMs–I made my whole installing oVirt howto video on a VM running inside of my real oVirt rig, for instance. You can get a feel for installing hosts, configuring storage, and managing Gluster volumes (a topic I haven’t covered here, but will, soon, in another post; for more on oVirt with Gluster, see here).

For the all-in-one setup instructions above, right after step 2:

  • install the “fake qemu” package (yum install vdsm-hook-faqemu)
  • edit /etc/vdsm/vdsm.conf, changing line # fake_kvm_support = false to fake_kvm_support = true
  • replace the contents of /usr/share/vdsm-bootstrap/vds_bootstrap.py (it’ll be there post step 2) with the file at http://gerrit.ovirt.org/cat/5611%2C3%2Cvds_bootstrap/vds_bootstrap.py%5E0
  • continue to step 3

That vds_bootstrap.py step shouldn’t be required, and I’m going to file a bug about it as soon as I finish this post. For more information on this topic, see: http://wiki.ovirt.org/wiki/Vdsm_Developers#Fake_KVM_Support.

If you’re trying to configure a separate fake host, for now, you’ll need to do it on a regular vdsm (not oVirt Node) host, though this should soon change. But, for your regular host, before trying to add the host through the oVirt web admin console:

  • run “yum install vdsm-hook-faqemu vdsm”
  • edit /etc/vdsm/vdsm.conf, changing line # fake_kvm_support = false to fake_kvm_support = true

Either way, you’ll need that modded /usr/share/vdsm-bootstrap/vds_bootstrap.py file on your engine, and you only have to change this file once, until/unless a future package update restores the faqemu-ignorant file.

Screencasting oVirt

There’s work underway over at the oVirt Project to produce some screencasts of the open source virtualization management platform in action. Since you can find oVirt in action each day in my home office, I set out to chip in and create an oVirt screencast, using tools available on my Fedora 17 desktop.

Here’s the five minute screencast, which focuses on creating VMs on oVirt, with a bit of live migration thrown in:

The first step was getting my oVirt test rig into shape. I’m running oVirt 3.1 on a pair of machines: a quad core Xeon with 16GB of RAM and a couple of SATA disks, and my Thinkpad X220, with its dual core processor and 8GB of RAM. I’ve taken to running much of my desktop-type tasks on a virtual machine running under oVirt, thereby liberating my Thinkpad to serve as a second node, for live migration and other multi-node-needin’ tests. Both machines run the 64-bit flavor of Fedora 17.

For storage, I’ve taken to using a pair of Gluster volumes, with bricks that reside on both of my oVirt nodes, which consume the storage via NFS. I also use a little desktop NAS device, an Iomega StorCenter ix2-200, for hosting install images and iSCSI disks.

For the screencasting, I started out with the desktop record feature that’s built into GNOME Shell. It’s really easy to use: hit Ctrl-Shift-Alt-R to start recording, and the same combo to stop. After a couple of test recordings, however, I found that when I loaded the WebM-formatted video files that the GNOME feature produces into a video editor (I tried with PiTiVi and with OpenShot) only the first second of the video would load.

Rather than delve any deeper into that mystery, I swapped screencasting tools, opting for gtk-recordMyDesktop (yum install gtk-recordmydesktop), which produces screencasts, in OGV format, that my editing tools were happy to import properly.

I started out editing with PiTiVi–I didn’t intend to do too much editing, but I did want to speed up the video during parts of the recording that didn’t directly involve oVirt, such as the installation process for the TurnKeyLinux WordPress appliance I used in the video. I was aiming for no more than five minutes with this, and I hate it when screencasts include a bunch of semi-dead space. I found, however, that PiTiVi doesn’t offer this feature, so I switched over to OpenShot, which is available for Fedora in the RPM Fusion repositories.

I played back my recording in the OpenShot preview window, and when I came to a spot where I wanted to speed things up, I made a cut, played on to the end of the to-be-sped section, and made a second cut, before right clicking on the clip, choosing how much to accelerate it, and then dragging the following bit of video back to fill the gap.

However, I found that my cuts were getting out of sync–I’d zoom in to frame-by-frame resolution, make my cut exactly where I wanted it, and then when I watched it back, the cut wasn’t where I’d made it. I don’t know if it was an issue with the cut, or a problem with the preview function, but again, I didn’t want to delve too deeply here, so I asked the Great Oracle of Google what the best video format was for use with OpenShot. MPEG4, it answered, in the ragged voice of some forum post or something.

Fine. Back to the command line to install another tool: Transmageddon Video Converter. I know that you can do anything with ffmpeg on the command line, but I find the GUI-osity of Transmageddon, which I’ve used at some point in the past, easier than searching around for the correct ffmpeg arguments. So, bam, from OGV to MP4, and, indeed, OpenShot appeared to prefer the format swap. My cuts worked as expected.
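If you’d rather skip the GUI round trip, a plain ffmpeg invocation can handle the same OGV-to-MP4 conversion. Something along these lines should do it; I let ffmpeg pick its default MP4 video codec and drop the audio track (-an), since the narration gets recorded separately anyway:

ffmpeg -i screencast.ogv -an screencast.mp4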

I ended the video with a screen shot from the oVirt web site, stretched over a handful of seconds, and I exported the video, sans audio, for the narration step in the process. I played the video back in GNOME Mplayer (for some reason, my usual video player, Totem, kept crashing on me) and used Audacity (an absolutely killer piece of open source software, with support for Linux, Win and OS X) to record my audio. I used the microphone on my webcam–not exactly high end stuff–which picked up some annoying background noise.

Fortunately, Audacity comes with a pretty sweet noise removal feature–you highlight a chunk of audio with no other sound but the background noise, and tell Audacity to excise that noise from the whole recording. I thought it worked pretty well, considering.

With my audio exported (I chose FLAC) I brought it into OpenShot, did a bit of dragging around to sync things right, extended the video chunks at beginning and the end of the piece to make way for my opening and closing remarks, and exported the thing, opting for what OpenShot identified as a “Web” profile. I uploaded the finished screencast to YouTube, and there it is.

Looking Ahead to oVirt 3.1

We’re about one week away from the release of oVirt 3.1, and I’m getting geared up by sifting through the current Release Notes Draft, in search of what’s working, what still needs work, and why one might get excited about installing or updating to the new version.

Web Admin

In version 3.1, oVirt’s web admin console has picked up a few handy refinements, starting with new “guide me” buttons and dialogs sprinkled through the interface. For example, when you create a new VM through the web console, oVirt doesn’t automatically add a virtual disk or network adapter to your VM. You add these elements through a secondary settings pane, which can be easy to overlook, particularly when you’re getting started with oVirt. In 3.1, there’s now a “guide me” window that suggests adding the nic and disk, with buttons to press to direct you to the right places. These “guide me” elements work similarly elsewhere in the web admin console, for instance, directing users to the next right actions after creating a new cluster or adding a new host.

Storage

Several of the enhancements in oVirt 3.1 involve the project’s handling of storage. This version adds support for NFSv4 (oVirt 3.0 only supported NFSv3), and the option of connecting external iSCSI or FibreChannel LUNs directly to your VMs (as opposed to connecting only to disks in your data or iso domains).

oVirt 3.1 also introduces a slick new admin console for creating and managing Gluster volumes, and support for hot-pluggable disks (as well as hot pluggable nics). With the Gluster and hotplug features, I’ve had mixed success during my tests so far–there appear to be wrinkles left to iron out among the component stacks that power these features.

Installer

One of the 3.1 features that most caught my eye is proof-of-concept support for setting up a whole oVirt 3.1 install on a single server. The feature, which is packaged up as “ovirt-engine-setup-plugin-allinone” adds the option to configure your oVirt engine machine as a virtualization host during the engine-setup process. In my tests, I’ve had mixed success with this option during the engine-setup process–sometimes, the local host configuration part of the setup fails out on me.

Even when the engine-setup step hasn’t worked for me, I’ve had no trouble adding my ovirt-engine machine as a host by clicking the “Hosts” tab in the web admin console, choosing the menu option “New,” and filling out information in the dialog box that appears. All the Ethernet bridge fiddling required from 3.0 (see my previous howto) is now handled automatically, and it’s easy to tap the local storage on your engine/host machine through the “Configure Local Storage” menu item under “Hosts.”

Another new installer enhancement offers users the option of tapping a remote postgres database server for storing oVirt configuration data, in addition to the locally-hosted postgres default.

oVirt 3.1 now installs with an HTTP/HTTPS proxy that makes oVirt engine (the project’s management server) accessible on ports 80/443, versus the 8080/8443 arrangement that was the default in 3.0. This indeed works, though I found that oVirt’s proxy prevented me from running FreeIPA on the same server that hosts the engine. Not the end of the world, but engine+identity provider on the same machine seemed like a good combo to me.

Along similar lines, oVirt 3.1 adds support for Red Hat Directory Server and IBM Tivoli Directory Server as identity providers, neither of which I’ve tested so far. I’m interested to see if the 389 directory server (the upstream for RHDS) will be supported as well.

preupgrade gone wrong

Having reached a good break point in my Gluster/Openstack/Fedora tests, I thought I’d preupgrade the F16 VM I’ve been using for ovirt engine to F17, en route to the oVirt 3.1 beta.

That didn’t go so well. During the post-preupgrade part (uh, the upgrade), the installer balked at upgrading the jboss-as package that shipped with oVirt 3.0. Afterward, the VM wouldn’t boot correctly.

Fortunately, I was prepared for failure, detaching my iso domain in advance, and shuttling the templates and VMs I wanted to keep to the export domain, which I also detached.

Community Metrics Wrangling with MLstats and OpenShift

As loyal readers of this blog (if any such creatures did exist) might have noticed, I’ve been working with the oVirt project, which got a reboot last year when Red Hat finished open sourcing and porting to Java the previously .Net-based management application for its enterprise virtualization product.

Given the new start for oVirt, the project has been keen to get a handle on its community metrics, such as mailing list activity: is it growing, what’s the mix of people coming from companies on the oVirt board compared to other organizations and individuals, and so on.

While searching for suitable tools for mailing list analysis, I came across mlstats, a nifty application for slurping mailing list archives into a database, where they can be queried and made to cough up all sorts of interesting information. The application is really easy to use, and I had it up and running on my local machine, offering up pearls of oVirt mailing list wisdom, in no time at all.

I wanted to set the app up on a remote server somewhere, complete with a cron job for running the daily stats update, and with some means of displaying certain information from the db, such as daily traffic on each of the oVirt lists. I opted to deploy this mlstats-o-matic on OpenShift, the Red Hat Platform as a Service, uh, service, on which I run this blog. OpenShift instances support mysql and cron, and the service’s port forwarding feature is great for enabling database access via one’s favored desktop query tools. More on that later.

The cron job I set up on OpenShift referenced a text file containing a list of the mailing list archive page URLs for oVirt, for mlstats to chug through and deposit into the database. A second cron job ran a list of sql queries, outputting the results in csv form, which I then charted on static web pages, using the javascript library dygraphs. To make new charts, I could build additional queries and web pages, and reference them from cron jobs.
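To give a clearer picture of the moving parts, the daily cron work boils down to pointing mlstats at each archive’s URL (it slurps any new messages into its MySQL database) and then running the queries that feed the charts. The basic mlstats invocation, using the oVirt users list archive as an example, looks something like this:

mlstats http://lists.ovirt.org/pipermail/users/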

That worked fairly well for our initial oVirt needs, but since the team I work on at Red Hat is focused on working with a broad range of open source projects, I wanted to figure out how to make it easy to use this same framework with other projects. The queries and the web pages were hard coded to the particular lists I’d chosen, so my initial take wasn’t a very cleanly repeatable solution.

More hackery ensued, and I changed the setup around to use bash scripts to assemble the queries and html pages based on whatever list of mailing lists a user might select. I’m sure I could have done it all much more elegantly, but it’s all up at github where anyone is free to set me straight. 🙂

I set the whole thing up as an OpenShift quickstart, so if you’re interested in engaging in some mailing list analysis action of your own, head over to Github and check it out. I made a short screencast of the process to give you an idea of how easy it is to get up and running:

The cron job will go off and update the data each day, and there really isn’t any maintenance to do. If you want to get at the data with your desktop mysql query tools, the command “rhc-port-forward -a appname” will make the database at OpenShift accessible to you locally.

Please give it a shot, and let me know how it works for you!