
Community Metrics Wrangling with MLstats and OpenShift

As loyal readers of this blog (if any such creatures did exist) might have noticed, I’ve been working with the oVirt project, which got a reboot last year when Red Hat finished open sourcing, and porting to Java, the previously .Net-based management application for its enterprise virtualization product.

Given the new start for oVirt, the project has been keen to get a handle on its community metrics, such as mailing list activity: is it growing, what’s the mix of people coming from companies on the oVirt board compared to other organizations and individuals, and so on.

While searching for suitable tools for mailing list analysis, I came across mlstats, a nifty application for slurping mailing list archives into a database, where they can be queried and made to cough up all sorts of interesting information. The application is really easy to use, and I had it up and running on my local machine, offering up pearls of oVirt mailing list wisdom, in no time at all.
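Getting the data in is essentially a one-liner. Here’s the flavor of it, as a sketch: I’m assuming a local MySQL instance, and I’m citing the database flags from memory, so check mlstats --help for the exact options:

mlstats --db-user=root --db-password=secret http://lists.ovirt.org/pipermail/users/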

I wanted to set the app up on a remote server somewhere, complete with a cron job for running the daily stats update, and with some means of displaying certain information from the db, such as daily traffic on each of the oVirt lists. I opted to deploy this mlstats-o-matic on OpenShift, the Red Hat Platform as a Service, uh, service, on which I run this blog. OpenShift instances support mysql and cron, and the service’s port forwarding feature is great for enabling database access via one’s favored desktop query tools. More on that later.

The cron job I set up on OpenShift referenced a text file containing a list of the mailing list archive page URLs for oVirt, for mlstats to chug through and deposit into the database. A second cron job ran a list of SQL queries, outputting the results in CSV form, which I then charted on static web pages using the JavaScript library dygraphs. To make new charts, I could build additional queries and web pages, and reference them from cron jobs.
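The basic shape of those two jobs is simple. Here’s a sketch, not the exact scripts I used: the file locations are made up, and the query assumes mlstats’ messages table and its first_date column, so adjust to taste:

# daily job one: feed each archive URL in lists.txt to mlstats
mlstats --db-user=admin --db-password="$DBPASS" $(cat "$HOME/lists.txt")

# daily job two: dump per-day message counts as CSV for dygraphs to chart
mysql -u admin -p"$DBPASS" mlstats -e \
  "SELECT DATE(first_date) AS day, COUNT(*) AS messages FROM messages GROUP BY day ORDER BY day" \
  | tr '\t' ',' > "$HOME/app-root/repo/csv/traffic.csv"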

That worked fairly well for our initial oVirt needs, but since the team I work on at Red Hat is focused on working with a broad range of open source projects, I wanted to figure out how to make it easy to use this same framework with other projects. The queries and the web pages were hard coded to the particular lists I’d chosen, so my initial take wasn’t a very cleanly repeatable solution.

More hackery ensued, and I changed the setup around to use bash scripts to assemble the queries and html pages based on whatever list of mailing lists a user might select. I’m sure I could have done it all much more elegantly, but it’s all up at github where anyone is free to set me straight. :)
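To give you the flavor of it, the page-generation part boils down to a loop like the one below. This is a bare-bones sketch with made-up file names (the real scripts on GitHub do more), and it assumes dygraph-combined.js sits next to the generated pages:

# generate one dygraphs chart page per mailing list in lists.txt
while read -r url; do
  name=$(basename "$url")
  cat > "${name}.html" <<EOF
<html>
  <head><script src="dygraph-combined.js"></script></head>
  <body>
    <div id="chart" style="width: 800px; height: 300px;"></div>
    <script>new Dygraph(document.getElementById("chart"), "csv/${name}.csv");</script>
  </body>
</html>
EOF
done < lists.txt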

I set the whole thing up as an OpenShift quickstart, so if you’re interested in engaging in some mailing list analysis action of your own, head over to Github and check it out. I made a short screencast of the process to give you an idea of how easy it is to get up and running:

The cron job will go off and update the data each day, and there really isn’t any maintenance to do. If you want to get at the data with your desktop mysql query tools, the command “rhc-port-forward -a appname” will make the database at OpenShift accessible to you locally.
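In practice, that looks something like this (a sketch: the app, user, and database names here are placeholders, and the local port may differ from what the command prints for you):

# in one terminal, open the tunnel (it stays in the foreground):
rhc-port-forward -a mlstats
# in another, connect through the forwarded port it reported:
mysql -h 127.0.0.1 -P 3306 -u admin -p mlstats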

Please give it a shot, and let me know how it works for you!


oVirt or No Virt: Notebook Edition

oVirt is definitely not intended to be run on your notebook, and running something oriented toward powering whole data centers on a single, portable machine seems like overkill, anyway. For a Linux-powered notebook machine like mine, virt-manager is a great tool for spinning up all manner of VMs, and, while I’ve yet to get it running properly, GNOME Boxes offers another promising option for taking advantage of the KVM hypervisor that’s built into the Linux kernel.

However, since immersing myself in oVirt is part of my job now, and since I work with a lot of VMs on my work notebook, I wanted to see if I could come up with a notebook-friendly oVirt setup. The trouble with the single-machine rig that I described in my recent oVirt howto is that setting up the bridged networking that oVirt requires means disabling NetworkManager, the handy service that makes it easy to connect to VPNs and switch between WiFi connections. That was a bigger sacrifice than I was willing to make.

I spent a bit of time fiddling with nested KVM — running an oVirt rig from within a VM on my machine. I was able to get this guest-within-a-guest setup working, but it was slow and unstable. Again, more sacrifice than I was willing to countenance for this.

In the end, I got my notebook-based, NetworkManager-friendly oVirt setup working by adapting the default guest networking configuration for virt-manager, in which libvirt provides a virtual network that acts like a NAT router for guest machines.

This was a little tricky, because when you configure a machine to be used as an oVirt virtualization host, libvirt gets commandeered by a higher-level component, vdsm, such that the default virtual bridge configuration is deactivated and you’re blocked from accessing libvirt directly.

In order to restore the virtual bridge setup, you have to provide libvirt with the user name vdsm@rhevh and the password you’ll find at /etc/pki/vdsm/keys/libvirt_password. I used the command line tool virsh to redefine the active “vdsm-ovirtmgmt” network to match my previous “default” network, and serve as a NAT router for the guest VMs on my notebook.
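For reference, here’s roughly how that goes with virsh. Treat it as a sketch: the XML below just mirrors libvirt’s stock “default” network, so adjust the addresses to suit, and expect to be prompted for the vdsm@rhevh credentials:

# connect as vdsm@rhevh, with the password from /etc/pki/vdsm/keys/libvirt_password
virsh -c qemu:///system net-dumpxml vdsm-ovirtmgmt > net.xml

# edit net.xml so the network NATs like the stock "default" one, roughly:
#   <forward mode='nat'/>
#   <ip address='192.168.122.1' netmask='255.255.255.0'>
#     <dhcp><range start='192.168.122.2' end='192.168.122.254'/></dhcp>
#   </ip>

virsh -c qemu:///system net-destroy vdsm-ovirtmgmt   # stop the old definition
virsh -c qemu:///system net-define net.xml           # load the edited one
virsh -c qemu:///system net-start vdsm-ovirtmgmt
virsh -c qemu:///system net-autostart vdsm-ovirtmgmt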

After making these changes, I rolled back the bridge-building changes to my ifcfg-em1 file, and deleted the ifcfg-ovirtmgmt file I’d created in step 12 of the howto. I also added the option "net.ipv4.ip_forward = 1" to /etc/sysctl.conf; without this tweak, my VMs were able to access the network as expected, but I wasn’t able to ssh into or otherwise reach my guests from my host machine.
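If you want the forwarding change to take hold without a reboot, standard sysctl usage does it:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p   # reload /etc/sysctl.conf; sysctl -w net.ipv4.ip_forward=1 also works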

I’ve had oVirt set up this way on my notebook for about a week now. So far, it’s been working really well. Before I shut off or suspend my notebook, I use the oVirt web admin in Firefox to put my host into maintenance mode. I also use the web admin to access the consoles of my guest machines through SPICE.

I’m using an Iomega ix2-200 desktop NAS box for NFS and iSCSI storage. Rich tools for setting up shared storage are one of the nice things about using oVirt instead of a desktop-oriented virtualization tool.

If the whole thing explodes, or just begins to droop into general suckage, I’ll update this post to reflect that. :)


Spice Spice Baby

Last week, when I was getting to the “here’s where you access your shiny new oVirt-hosted VM” portion of my super duper Up and Running with oVirt howto, I was a bit embarrassed to say that you needed Fedora to access oVirt’s console-launching automagic.

oVirt uses the spice protocol for delivering virtual desktop sessions, and while spice client packages are available for Ubuntu and for openSUSE, I wasn’t able to find any up-to-date packages to provide the Firefox plugin, spice-xpi, that handles the handoff between oVirt’s web admin console and the spice client application.

So, I grabbed the source RPM for the Fedora spice-xpi package and headed off to the openSUSE Build Service and to the Ubuntu Personal Package Archives to build some packages. Now, I’m close to being a packaging n00b — historically, checkinstall has been the extent of my packaging efforts — but I got the packages built and they work, at least on the 64-bit versions of Ubuntu 11.10 and openSUSE 12.1 with which I tested. I built x86 versions, too, but haven’t tested them yet.

Here’s a shot of the spice-xpi package doing its thing on Ubuntu 11.10:

And here’s a shot of the spice-xpi package in action on openSUSE 12.1:

So, if you were all set to test drive oVirt, but drew the line at fleeing your client distro of choice to access your VMs (and if your distro of choice happens to be Ubuntu or openSUSE), go forth and get to your oVirt tire-kicking. And if you check out the packages and have feedback that’ll help me be a better Ubuntu and openSUSE packager, I’d appreciate it if you’d share.

Also, I have at least one more soon-to-come VM access blog post in me, dealing with VM access from Windows (and maybe not dealing, but wondering aloud, at least, about access from OS X).

Finally, I want to put in a plug for an oVirt Workshop that the project is holding in Beijing, China, on March 21. If you’re Beijing-based, and you’re interested in learning more about oVirt, check out the oVirt Project site for more information, and please spread the word.


How to Get Up and Running with oVirt

NOTE: The most recent version of this howto, for oVirt 4.1, lives HERE.

As a fan both of x86 virtualization and of open source software, I long wondered when the “Linux of virtualization” would emerge. Maybe I should say instead, the GNU/Linux of virtualization, because I’m talking about more than just a kernel for virtualization — we’ve had those for a while now, in the forms of Xen and of KVM. Rather, I’ve been looking for the virtualization project that’ll do to VMware’s vSphere what Linux-based operating systems have done to proprietary OS incumbents: shake up the market, stoke innovation, and place the technology in many more people’s hands.

Now, I may just be biased — I work for one of the companies trying to give this technology away to those who want it (and sell it to those who’re looking for support) — but the time for that Linux of virtualization has finally come. Last week, the oVirt Project shipped its first release since the source code for the project’s Java-based management server went public last November. After having toiled through building and configuring oVirt back in November, I’m happy to report that the process has gotten much, much simpler. Plenty of work remains to be done, particularly around supporting multiple Linux distributions. However, if you have a reasonably beefy machine to test with, you can be up and running in no time. Here’s a step-by-step guide to installing a single server oVirt test rig:

Step one, get a machine with Intel VT or AMD-V hardware extensions, and at least 4GB of RAM. As with all virtualization, the more RAM you have, the better, but 4GB will do for a test rig.

Step two, grab a Fedora 16 x86_64 install disc and install Fedora. Also, you’ll want to have a client system capable of accessing the spice-based console of your VM; for now, Fedora’s your best bet there as well. (update: I did some spice-xpi packaging for Ubuntu 11.10 and openSUSE 12.1) You can access oVirt systems via VNC, as well, though that path is rougher around the edges right now.

(An aside: right now, oVirt is most closely aligned with Fedora, as the only current downstream distribution is Red Hat’s RHEV. However, getting oVirt into as many distros as possible is a priority for the project, so let’s hurry up and install this so we can get to work on Ubuntufication and openSUSEification and whatnot!)

For my Fedora 16 test machine, I went with the minimal install option, and got rid of the separate /home partition that the Fedora installer creates by default, leaving that space instead to the root partition. For networking, I stuck to dhcp.

After installing F16, start the network, set it to start at boot, and see what your IP address is:

service network start && chkconfig network on && ifconfig -a

From there, ssh into your machine, where it’s easier to cut and paste directions from the Web. Since I installed my system from the Fedora DVD, I ran yum update to install the ~79 updates that have been released since. (And reminded myself for the 1000th time to look into creating a local Fedora repository.)

Next, install wget (the minimal install doesn’t come with it) and grab the repository file for oVirt Stable:

yum install -y wget && wget http://www.ovirt.org/releases/stable/fedora/16/ovirt-engine.repo -P /etc/yum.repos.d/

(I’ve been setting up my test installs using the root user; if you’re logged in as a regular user, use sudo as needed.)

Then, install ovirt-engine, the management server for oVirt:

yum install -y ovirt-engine

(on my minimal install, this step pulled in 100 packages)

Next, run the setup script for oVirt Engine, cleverly tucked away under the name:

engine-setup

The script asks a series of questions, and it’s safe to stick with the defaults. The script will ask for your machine’s fully-qualified domain name, and suggest its host name by default. If the name doesn’t resolve properly, the script will ask if you’re sure you want to proceed. It’s OK to proceed anyway — if you run into trouble you can work around it by modifying /etc/hosts, and for this single server config, well, your server knows where to find itself. Choose NFS as the default storage type, and let the script create an NFS iso share for you. I chose the path /mnt/iso and the name ‘iso’.

ovirt-setup

Type yes to proceed. When the script finishes, it’ll tell you where to reach the ovirt web interface, at port 8080 or 8443 of your management server. Before we head over there, though, let’s do a bit more storage configuration.

In keeping with the all-in-one theme of this walkthrough, we’re going to create three nfs shares on our management server: one for hosting the iso images from which we’ll install VMs; one for hosting our VMs’ hard disk images; and one for hosting a location to which we can export VM images we may want to move between data domains. If you let the engine-setup script create an nfs share for you, you’ll see this line in /etc/exports:

/mnt/iso 0.0.0.0/0.0.0.0(rw) #rhev installer

Create two more like it:

/mnt/data 0.0.0.0/0.0.0.0(rw)
/mnt/export 0.0.0.0/0.0.0.0(rw)
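If your NFS server is already running, you can put the new shares into effect right away (restarting the NFS service, as we do below for the v3 tweak, accomplishes the same thing):

exportfs -ra   # re-read /etc/exports and re-export everything in it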

Then head over to /mnt to create the data and export directories:

cd /mnt && mkdir data export

All three of our storage folders need to be owned by user vdsm and group kvm. The iso folder that the engine-setup script created is already owned by vdsm:kvm — the data and export directories we created need to match that:

chown vdsm:kvm data export

Another NFS configuration bit here. oVirt wants to mount its NFS shares in v3, not v4. You can ensure this either by disabling nfs v4 on the server side or on the client side, as described in the oVirt wiki, here. I’ve been disabling NFSv4 on my ovirt-engine boxes by adding this line to /etc/sysconfig/nfs, and then restarting the service:

NFS4_SUPPORT="no"

systemctl restart nfs-server.service

Now we have an oVirt Engine management server and three NFS shares, and we’re ready to add a host to handle the compute. Since this is a single-box install, we’re going to configure our management server as a virtualization host. This step is based on the wiki page at http://www.ovirt.org/wiki/Installing_VDSM_from_rpm. First, we install bridge-utils and create a network bridge:

yum install -y bridge-utils

vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt

This is the contents of my bridge config file:

DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=dhcp
NM_CONTROLLED="no"

Then, to the config file for my network adapter, I add the line:

BRIDGE=ovirtmgmt
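For reference, here’s roughly what the adapter’s config file ends up looking like; a sketch that assumes the adapter is em1 (so the file is ifcfg-em1), with no IP configuration of its own, since the bridge now handles dhcp:

DEVICE=em1
ONBOOT=yes
NM_CONTROLLED="no"
BRIDGE=ovirtmgmt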

And then, restart the network:

service network restart

(Again, this is on my minimal install. If you’re using a Fedora machine with NetworkManager enabled, you should also disable NM. Check the wiki for more info.)

Now let’s hit the oVirt Engine administrator portal, at port 8080 or 8443 of your management server. If your server’s host name doesn’t resolve properly, you can add an entry in the /etc/hosts file of your client to route you in the right direction.
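For instance, an entry like this one, where the address and name are hypothetical; use your server’s actual IP and the fully-qualified name you gave engine-setup:

192.168.1.50 engine.example.com

Here’s what you should see: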

hello_engine

Click one of the administrator portal links, log in with the user name “admin” and the password you gave the engine-setup script. Once you’re logged in, click the “Hosts” tab, then click “New” to add a host. Give your host a name, enter its address or host name, and the machine’s root password:

add_host

Once you click OK, a dialog box will inform you that you’ve not configured power management–that’s all right, just click through that.

Now, at the bottom of the screen, you can click on the up/down arrows next to “Events” to expand the events dialog and watch your management server configure itself as a host. You’ll see your server ssh in to itself, and run a bootstrap script that will install everything it needs. The last step in the script reboots the machine, so if you click back to check on your progress and see “Error: A Request to the Server failed with the following Status Code: 0” that’s probably a good sign. :) (If something goes wrong during the process, you’ll see it in this events area. If the event tells you to look at the log, start with the engine log at /var/log/ovirt-engine/engine.log. Hopefully, the process will just crank to completion without incident.)

Once your server comes back from its reboot, you ought to be able to log in to the admin portal, click on “Hosts,” and see a happy green arrow indicating that your host is Up. Once your host is up, it’s time to hook up our storage. Click the “Storage” tab, then “New Domain.”

storage1

Give your new data domain a name (I go with “data”), choose your host from the drop-down box, and then enter the address and mount point of your NFS data share in the “Export Path” field, and click OK. Your server should mount the share and, shortly, you should see another happy green Up arrow next to your data domain.

An oVirt data center needs an active data domain before you can attach or add iso or export domains, which is why the iso domain that the engine-setup script creates starts out unattached. With your data domain in the green, you can click on that iso share, and then, in the pane that appears below the domains list, click the Data Center tab, then “Attach,” and choose your default data center to attach the iso domain to. Next, click the “Default” entry in that same pane, and “Activate.”

Adding the export domain works in just the same way as adding the data domain, just make sure that you choose the export option from the “Domain Function / Storage Type” drop down menu.

Now, let’s add an iso image from which to install a VM. We do this from the command line, using the tool, engine-iso-uploader. On my test systems, I’ve used wget to fetch an iso image (in this example, the Fedora net install image) from the Internet to my oVirt Engine machine. From the directory where I’ve downloaded the image, I issue the command:

engine-iso-uploader -i iso upload Fedora-16-x86_64-netinst.iso

The tool asks me for my admin password, the same one I use to log in to the web console, and starts uploading the image to my iso domain, which I’ve named “iso.”  (For more engine-iso-uploader guidance, see “man engine-iso-uploader”)

Once the upload is finished, I’m ready to create my VM. Click the virtual machines tab in the web admin, click new desktop (or server), give the machine a name, set the memory size, and adjust the cores, if you want. The OS list is limited right now to the RHEL and Windows options officially supported by RHEV, but I’ve installed Fedora, Ubuntu and Windows 8 without any trouble. For my F16 install, I chose RHEL 6.x x86_64 from the list:

newvm

After clicking OK in the new VM dialog, click on your new machine in the VM list, and in the secondary pane that appears below, give the VM a network interface by clicking on Network Interfaces, then New, then OK:

net1

In the same way, give your VM a disk by clicking on Virtual Disks, New, enter a size, then OK:

disk1

We’re ready to install our VM. With your VM selected, click the “Run Once” button, attach your install CD, bump up cd-rom in the boot sequence, and click OK:

run_once

In order to access the console of our new VM, we’re going to need to install the Firefox extension for spice. From a Fedora 16 machine with Firefox installed, you can install the spice package with:

yum install -y spice-xpi

You may need to restart Firefox after installing the spice plugin, but once you’re up and running with it, you’ll be able to right-click on your VM and click “Console,” which will bring up the spice console for your machine. From here, install your OS normally. In the spice console, you can hit Shift-F11 to enter/exit full screen mode, and Shift-F12 to release your pointer if the console has captured it.

The configuration changes you make in the “Run Once” dialog are supposed to last just once, but I’ve found that they persist until you actually shut down the machine–rebooting it once your install is complete isn’t enough.

I think that’s it — we have an all-in-one oVirt test box, complete with NFS storage and a guest machine. From here, you can add additional hosts, based on other Fedora hosts or on the project’s stripped-down oVirt Node image. You can point your additional hosts at the NFS shares we created in this runthrough, or you can add new storage domains. Consult the oVirt Installation guide for more information on installing and configuring your oVirt environments. That’s enough for this blog post, check back soon for more material on oVirt, and if you’re interested in getting involved with the project, you can find all the mailing list, issue tracker, source repository, and wiki information you need here. On IRC, I’m jbrooks, ping me in the #ovirt room on OFTC or write a comment below and I’ll be happy to help you get up and running or get pointed in the right direction.