Up and Running with oVirt, 3.1 Edition

Update: I’ve written an updated version of this guide for oVirt 3.2.

Last February or so, I wrote a post about getting up and running with oVirt, the open source virtualization management project, on a single test machine. Various things have changed since then, such as a shiny new oVirt 3.1 release, so I’m going to update the process in this post.

What you need:

A test machine, ideally an x86_64 system with multiple cores, hardware virtualization extensions and plenty of RAM (like 4GB or more). The default OS for oVirt 3.1 is Fedora 17, and that’s what I’ll be writing about here. Your test machine must have a host name that resolves properly on your network, whether you set that up in a local DNS server or in the /etc/hosts file of any machine you expect to access your test machine from.

UPDATE: For my Fedora oVirt installs, I’ve been using a minimal install of Fedora, which is an option if you start from the DVD or network install images, and I interact with my minimal installs via ssh. If you’re using a minimal install with ssh, my instructions work just fine. However, if you start from the default Fedora LiveCD media, you’ll need to take a couple of extra steps: disable NetworkManager (sudo systemctl stop NetworkManager.service && sudo systemctl disable NetworkManager.service), enable sshd (sudo systemctl start sshd && sudo systemctl enable sshd), and then reboot for good measure before proceeding with the rest of the steps.

(BUG NOTE: With the latest Fedora 17 kernel, I’m hitting https://bugzilla.redhat.com/show_bug.cgi?id=845660, which prevents NFS domains from attaching, so for now you’ll want to run a previous Fedora kernel. (BUG NOTE NOTE: This bug, at long last, is just about squashed. Stay tuned.))

 

The package vdsm-4.10.0-10 squashed the above bug dead. Make sure you’re up to date with it to avoid issues with post-3.5 Fedora kernels.

(A NEW BUG NOTE: There’s a new, 3.2 version of ovirt-engine-sdk in the Fedora 17 update repo. The oVirt 3.1 packages that depend on the sdk don’t call specifically for version 3.1, but they appear not to work with 3.2. For now, you must downgrade to the 3.1 version of the sdk in order for the all-in-one installer and other features to work properly: “yum downgrade ovirt-engine-sdk”. I’ve filed a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=869457 — you can cc yourself on the bug for progress updates.)

 

All-in-One Install:

oVirt 3.1 now includes an installer plugin for setting up the sort of single machine installation I wrote about previously. It’s good for testing out oVirt, and if you want to expand from your single machine install to cover additional nodes and storage, you can do that. Read on for the steps involved, and/or watch this handy screencast I made of the process:

[youtube:http://www.youtube.com/watch?v=Aq3ctFhBIhk]

1. Install the ovirt-release package on your Fedora 17 machine: “yum install http://www.ovirt.org/releases/ovirt-release-fedora.noarch.rpm”

2. Install the ovirt-engine all-in-one package: “yum install ovirt-engine-setup-plugin-allinone”

2a. As pointed out by oVirt community member Adrián, in the comments below, you can ensure that the install script allows enough time for the host to add itself by editing “/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py” to make the “waitForHostUp” timeout larger, like so:

def waitForHostUp():
    utils.retry(isHostUp, tries=40, timeout=300, sleep=5)

3. Run engine-setup: “engine-setup” and answer all the questions.

I’ve found that the all-in-one installer sometimes times out during the install process. If the script times out during the final “AIO: Adding Local host (This may take several minutes)” step, you can proceed to the web admin console to complete the process. If it times out at an earlier point, like waiting for the jboss-as server to start, you should run “engine-cleanup” and then re-run “engine-setup”.

4. When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, take a look at the “Storage” and “Hosts” tabs. If the all-in-one process completed, you should see a host named “local_host” with a status of “Up” under Hosts, and you should see a storage domain named “local_host-Local” under “Storage.”

If your local_host is still installing, you’ll need to wait for it to finish before proceeding. You should be able to view its progress from the events panel at the bottom of the console interface. Once the host is finished installing, click on your “local_host” and hit the “Maintenance” link to put it into maintenance mode. Once your host is in maintenance mode, you’ll be able to click on the “Configure Local Storage” link, where you enter the same local storage path you entered into the engine-setup script, and then hit “OK.”

5. Once the configure local storage process is complete (whether this was taken care of during engine-setup, or if you had to do it manually in step 4), click on the storage tab and highlight the iso domain you created during the setup script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to, uh, activate it.

6. Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine and, from the command line, run “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically, which looks like: “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
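If you go the NFS route, the whole thing can be done from a shell on any machine with access to the share. Here’s a minimal sketch, assuming an engine host name of ovirt-engine.example.com and an export path of /var/lib/exports/iso — substitute the ISO domain path you gave engine-setup, and the UUID directory that actually appears on your share:

sudo mkdir -p /mnt/iso
sudo mount -t nfs ovirt-engine.example.com:/var/lib/exports/iso /mnt/iso
# the BIG_OLE_UUID directory is whatever oVirt created on your share
sudo cp NAME_OF_YOUR_ISO.iso /mnt/iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/
sudo umount /mnt/iso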

Once you’re up and running, you can begin installing VMs. For your viewing pleasure, here’s another screencast, about creating VMs on oVirt:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4]

Beyond All in One (or skipping it altogether):

Installing: A “regular” multi-machine install of oVirt works in pretty much the same way, except that in step two you simply run “yum install ovirt-engine,” and during the “engine-setup” process, you won’t be asked about installing VDSM or a local data domain on your engine. I typically skip creating an iso domain on my engine, as I use a separate NAS device for my iso domain needs.

The local data center, cluster and storage domain created as part of the all-in-one installation option are only accessible to the virtualization host installed locally on the engine. Shifting to a multi-machine setup involves moving that local host to the Default datacenter and cluster, which starts with putting the host into maintenance mode, clicking edit, and switching the Data Center and Cluster values to “Default” (or to another, non-local set of data center and cluster values).

Hosts: Once the setup script is finished, you can head over to the web admin console to add hosts and storage domains. oVirt hosts can be either regular Fedora 17 boxes or machines installed with oVirt Node. In either case, you add one of these machines as an oVirt host by clicking “New” under the “Hosts” tab in the web admin console, and providing a name, IP address (or host name) and root password for your host-to-be, and clicking OK. A dialog will complain about configuring power management, but it’s not strictly required.

When adding an oVirt Node-based system as a host, you can also provide the ovirt-engine address and admin password in the admin interface of the node, which will add the node to your ovirt-engine server, pending approval through the web admin console.

Storage: A multi-machine setup requires a shared storage domain, such as one backed by NFS or iSCSI. Setting up an NFS storage domain involves clicking “New Domain” on the “Storage” tab, giving the new data domain a name and configuring its export path. Setting up an iSCSI domain is similar, but involves entering the IP address of your iSCSI target, discovering available LUNs, and selecting one to use.
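If you’re rolling your own NFS export for a data domain, note that vdsm expects the export to be owned by UID and GID 36 (the vdsm user and kvm group). A minimal sketch, run on the NFS server, with an illustrative path:

mkdir -p /srv/ovirt/data
chown 36:36 /srv/ovirt/data
echo '/srv/ovirt/data *(rw)' >> /etc/exports
exportfs -r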

When Things Go Wrong:

A few things to do/check when things go wrong.

1. Put SELinux into permissive mode: “setenforce 0”. I run my systems with SELinux enabled, but there are sometimes SELinux-related bugs. Putting your test system into permissive mode will get you past the errors.

2. Check the logs:

  • ovirt-engine install log lives at /var/log/ovirt-engine/engine-setup*.log
  • jboss app server logs live at /var/log/ovirt-engine/boot.log and /var/log/ovirt-engine/server.log
  • ovirt-engine logs live at /var/log/ovirt-engine/engine.log — you can tail -f /var/log/ovirt-engine/engine.log to watch what the engine is doing
  • vdsm logs live (on each virt host) at /var/log/vdsm/vdsm.log — you can watch these to see what’s going on with individual virt hosts

3. Visit us at #ovirt on OFTC. My handle there is jbrooks. If you don’t get an answer there, send a message to users@ovirt.org.

Faking It:

I mentioned right at the top that if you want to test oVirt virtualization, you need a machine with hardware virtualization extensions. The oVirt management engine can live happily within a VM, but for hosting VMs, you need those extensions.

While most physical machines these days come with those extensions, virtual machines don’t have them. There’s such a thing as nested KVM virtualization, but it’s tricky to set up and pretty unstable even when you can set it up.

There is a way to test out oVirt without hardware virtualization extensions, but the catch is that you can’t actually run any VMs on one of these “fake” installs. Why bother? Well, there’s a lot to test and see in oVirt that falls short of running VMs–I made my whole installing oVirt howto video on a VM running inside of my real oVirt rig, for instance. You can get a feel for installing hosts, configuring storage, and managing Gluster volumes (a topic I haven’t covered here, but will, soon, in another post). For more on oVirt with Gluster, see here.

For the all-in-one setup instructions above, right after step 2:

  • install the “fake qemu” package (yum install vdsm-hook-faqemu)
  • edit /etc/vdsm/vdsm.conf, changing the line # fake_kvm_support = false to fake_kvm_support = true (see the one-liner sketch just after this list)
  • replace the contents of /usr/share/vdsm-bootstrap/vds_bootstrap.py (it’ll be there post step 2) with the file at http://gerrit.ovirt.org/cat/5611%2C3%2Cvds_bootstrap/vds_bootstrap.py%5E0
  • continue to step 3
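For the vdsm.conf edit, a one-liner works fine. This is just a sketch based on the commented-out default line mentioned above; double-check the result with grep afterward:

sudo sed -i 's/# fake_kvm_support = false/fake_kvm_support = true/' /etc/vdsm/vdsm.conf
grep fake_kvm_support /etc/vdsm/vdsm.conf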

That vds_bootstrap.py step shouldn’t be required, and I’m going to file a bug about it as soon as I finish this post. For more information on this topic, see: http://wiki.ovirt.org/wiki/Vdsm_Developers#Fake_KVM_Support.

If you’re trying to configure a separate fake host, for now, you’ll need to do it on a regular vdsm (not oVirt Node) host, though this should soon change. But, for your regular host, before trying to add the host through the oVirt web admin console:

  • run “yum install vdsm-hook-faqemu vdsm”
  • edit /etc/vdsm/vdsm.conf, changing line # fake_kvm_support = false to fake_kvm_support = true

Either way, you’ll need that modded /usr/share/vdsm-bootstrap/vds_bootstrap.py file on your engine, and you only have to change this file once, until/unless a future package update restores the faqemu-ignorant file.

Screencasting oVirt

There’s work underway over at the oVirt Project to produce some screencasts of the open source virtualization management platform in action. Since you can find oVirt in action each day in my home office, I set out to chip in and create an oVirt screencast, using tools available on my Fedora 17 desktop.

Here’s the five minute screencast, which focuses on creating VMs on oVirt, with a bit of live migration thrown in:

The first step was getting my oVirt test rig into shape. I’m running oVirt 3.1 on a pair of machines: a quad core Xeon with 16GB of RAM and a couple of SATA disks, and my Thinkpad X220, with its dual core processor and 8GB of RAM. I’ve taken to running much of my desktop-type tasks on a virtual machine running under oVirt, thereby liberating my Thinkpad to serve as a second node, for live migration and other multi-node-needin’ tests. Both machines run the 64-bit flavor of Fedora 17.

For storage, I’ve taken to using a pair of Gluster volumes, with bricks that reside on both of my oVirt nodes, which consume the storage via NFS. I also use a little desktop NAS device, an Iomega StorCenter ix2-200, for hosting install images and iSCSI disks.

For the screencasting, I started out with the desktop record feature that’s built into GNOME Shell. It’s really easy to use: hit control-shift-alt-R to start recording, and the same combo to stop. After a couple of test recordings, however, I found that when I loaded the WebM-formatted video files that the GNOME feature produces into a video editor (I tried with PiTiVi and with OpenShot), only the first second of the video would load.

Rather than delve any deeper into that mystery, I swapped screencasting tools, opting for gtk-recordMyDesktop (yum install gtk-recordmydesktop), which produces screencasts, in OGV format, that my editing tools were happy to import properly.

I started out editing with PiTiVi–I didn’t intend to do too much editing, but I did want to speed up the video during parts of the recording that didn’t directly involve oVirt, such as the installation process for the TurnKeyLinux WordPress appliance I used in the video. I was aiming for no more than five minutes with this, and I hate it when screencasts include a bunch of semi-dead space. I found, however, that PiTiVi doesn’t offer this feature, so I switched over to OpenShot, which is available for Fedora in the RPM Fusion repositories.

I played back my recording in the OpenShot preview window, and when I came to a spot where I wanted to speed things up, I made a cut, played on to the end of the to-be-sped section, and made a second cut, before right clicking on the clip, choosing how much to accelerate it, and then dragging the following bit of video back to fill the gap.

However, I found that my cuts were getting out of sync–I’d zoom in to frame-by-frame resolution, make my cut exactly where I wanted it, and then when I watched it back, the cut wasn’t where I’d made it. I don’t know if it was an issue with the cut, or a problem with the preview function, but again, I didn’t want to delve too deeply here, so I asked the Great Oracle of Google what the best video format was for use with OpenShot. MPEG4, it answered, in the ragged voice of some forum post or something.

Fine. Back to the command line to install another tool: Transmageddon Video Converter. I know that you can do anything with ffmpeg on the command line, but I find the GUI-osity of Transmageddon, which I’ve used at some point in the past, easier than searching around for the correct ffmpeg arguments. So, bam, from OGV to MP4, and, indeed, OpenShot appeared to prefer the format swap. My cuts worked as expected.
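For the command-line inclined, a plain ffmpeg invocation ought to handle the same OGV-to-MP4 conversion; this is just a rough sketch, letting ffmpeg pick its default codecs (add quality flags to taste):

ffmpeg -i screencast.ogv screencast.mp4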

I ended the video with a screen shot from the oVirt web site, stretched over a handful of seconds, and I exported the video, sans audio, for the narration step in the process. I played the video back in GNOME Mplayer (for some reason, my usual video player, Totem, kept crashing on me) and used Audacity (an absolutely killer piece of open source software, with support for Linux, Win and OS X) to record my audio. I used the microphone on my webcam–not exactly high end stuff–which picked up some annoying background noise.

Fortunately, Audacity comes with a pretty sweet noise removal feature–you highlight a chunk of audio with no other sound but the background noise, and tell Audacity to excise that noise from the whole recording. I thought it worked pretty well, considering.

With my audio exported (I chose FLAC) I brought it into OpenShot, did a bit of dragging around to sync things right, extended the video chunks at beginning and the end of the piece to make way for my opening and closing remarks, and exported the thing, opting for what OpenShot identified as a “Web” profile. I uploaded the finished screencast to YouTube, and there it is.

Upgrading the Family PC to Fedora 17, and Cinnamon

This weekend I upgraded our family PC to Fedora 17. I’ve been running this latest release for a while on my regular work machine and on my various (and generally short-lived) test systems, but I tend to be slower on the distro upgrade draw with the family computer. For me, slow usually means upgrade within two weeks of release, but this time around, it took me almost two months to undertake the upgrade.

I did try upgrading from Fedora 16 to Fedora 17 about a month ago, using Fedora’s preupgrade feature, but the preupgrade process failed for me right at the end–following the lengthy process of downloading every package needed for the upgrade–with a complaint (if I recall correctly) about grub2-tools being missing. I checked to confirm that the grub2-tools package was indeed installed before shelving the upgrade effort for a while. Even though I’m always hot to upgrade to the latest and greatest, my wife maintains a “don’t be changing my computer all around” attitude.

I resolved to retry the upgrade after reading about how the Cinnamon desktop environment of Linux Mint fame had made its way into the official Fedora repositories. See, my wife’s “don’t change stuff” prime directive had clashed pretty directly with the GNOME 3 “hey, let’s change everything” design philosophy, and the Cinnamon desktop environment was supposed to be a better fit for users still pining for the familiarity of GNOME 2.

I started out as one of those piners, but after a few months using GNOME Shell, I got used to it. Still, I install the GNOME Tweak Tool right off the bat in order to roll back some of the more annoying user interface defaults in GNOME Shell. Really, I don’t understand why it wasn’t possible for the GNOME 3 designers to make the shift from the v2 to the v3 user interface a bit more welcoming for its existing user base. What’s so bad about keeping a panel around, or allowing files and folders to show up on the desktop, or having minimize and maximize buttons in your window decorations?

I’m sort of going off track from the upgrade tale here, but to me, the Cinnamon desktop environment points pretty clearly to a direction that the GNOME designers could have taken–the fancy OS X-style expose modes are still around in Cinnamon, but so are the familiar panels with app menus and window lists. Also, Cinnamon includes plenty of options for configuring basic settings, like fonts. I still can’t believe that you have to download a separate tool (the aforementioned gnome-tweak-tool) to change the fonts you use in GNOME 3.

On Fedora 16, my wife’s login was set to use a Compiz-based GNOME 2-workalike session by default, and my login was set to use the default GNOME Shell option. Unfortunately, something about this combination broke fast user switching, so my login didn’t end up getting much use. Post-upgrade, my wife’s Cinnamon session and my GNOME Shell session get along much better–we’re able to swap between our login sessions as expected.

For the upgrade itself, I opted to upgrade from a copy of the F17 DVD that I’d written onto a USB key. The upgrade ran through without issue, putting in place some 1200+ new packages. What was weird, though, was that once I booted into my newly-upgraded system, I found tons of F16 packages still in place. I ran a “yum distribution-synchronization” to get up to date, and again, some 1200+ packages required updating. I’m not sure what happened there, but between that and my experience with preupgrade, I’m reminding myself to chip in some QA love on upgrade matters as the F17-to-F18 switchover approaches.

My wife’s spent a few hours now on her newly Cinnamonized desktop, and her experiences have been delightfully uneventful. Low-impact system administration FTW!

preupgrade gone wrong

Having reached a good break point in my Gluster/Openstack/Fedora tests, I thought I’d preupgrade the F16 VM I’ve been using for ovirt engine to F17, en route to the oVirt 3.1 beta.

That didn’t go so well. During the post-preupgrade part (uh, the upgrade), the installer balked at upgrading the jboss-as package that shipped with oVirt 3.0. Afterward, the VM wouldn’t boot correctly.

Fortunately, I was prepared for failure, detaching my iso domain in advance, and shuttling the templates and VMs I wanted to keep to the export domain, which I also detached.

Fedora 17, OpenStack Essex & Gluster 3.3: All Smushed Together

Within the past couple weeks, Fedora and Gluster rolled out new versions, packed with too many features to discuss in a single blog post. However, a couple of the stand-out updates in each release overlap neatly enough to tackle them together–namely, the inclusion of OpenStack Essex in Fedora 17 and support for using Gluster 3.3 as a storage backend for OpenStack.

I’ve tested OpenStack a couple of times in the past, and I’m happy to report that while the project remains a fairly complicated assemblage of components, the community around OpenStack has done a good job documenting the process of setting up a basic test rig. Going head to head with Amazon Web Services, even within the confines of one’s own organization, won’t be a walk in the park, but it’s fairly easy to get OpenStack up and running in a form suitable for further learning and experimentation.

OpenStack on Fedora 17

The getting started with OpenStack on Fedora 17 howto that I followed for my latest test involves quite a bit of command line cut and paste, but it didn’t take long for me to go from a minimal install Fedora 17 virtual machine to a single node OpenStack installation, complete with compute, image hosting, authentication, and dashboard services–everything I needed to launch VMs, register images, and manage everything from the comfort of a web UI.

A couple of notes: I did everything on this minimal-install Fedora machine as root–since this is a soon-to-be blown-away test VM, I didn’t bother to create additional users. You may need to sprinkle in some sudos if you’re running as non-root. Also, I hit at least one issue with SELinux (related to glance) during my tests. I never turn off SELinux by default, but once I hit an error on a test box, I throw it into permissive mode.

Also, I elected to run the whole show (the openstack part of it, at least) within a single virtual machine running on my home oVirt installation, so the performance of my guest instances was very slow, but everything worked well enough for me to take OpenStack for a spin, and get to fiddling with trickier OpenStack topics, such as…

The one OpenStack element that the Fedora howto touches on only briefly is OpenStack Swift, the object storage system intended to replace Amazon’s S3. Here’s what the howto has to say about Swift:

These are the minimal steps required to setup a swift installation with keystone authentication, this wouldn’t be considered a working swift system but at the very least will provide you with a working swift API to test clients against, most notably it doesn’t include replication, multiple zones and load balancing.

 

(Configure swift with keystone)

What an ideal segue for Gluster 3.3, a storage software project with replication and load balancing as its stock in trade. The Gluster portion of my tests was quite a bit trickier than the OpenStack on Fedora part had been, but I learned a lot about Gluster and OpenStack along the way.

Building Gluster 3.3 Packages

First off, Gluster 3.3 shipped a bit after Fedora 17, and the version of Gluster available in the Fedora software repositories is still at 3.2. What’s more, the 3.3 packages offered by the Gluster project target Fedora 16, as well. The Fedora folder on the Gluster download server doesn’t include any source rpms, but I found a spec file for building Fedora rpms in the Gluster source tarball on the download server.

On my Fedora 17 notebook, I fetched the build dependencies for Gluster 3.2 using the command yum-builddep from the yum-utils package:

sudo yum-builddep glusterfs

I grabbed the file glusterfs.spec from the glusterfs-3.3.0.tar.gz tarball, dropped it in ~/rpmbuild/SPECS, and put the tarball into ~/rpmbuild/SOURCES. If you don’t have rpm-build installed on your Fedora machine, you’ll need to do that, as well.
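In other words, the prep looked roughly like this (the location of glusterfs.spec inside the tarball is from memory, so adjust the path if it differs on your system):

sudo yum install -y rpm-build
mkdir -p ~/rpmbuild/SPECS ~/rpmbuild/SOURCES
cp glusterfs-3.3.0.tar.gz ~/rpmbuild/SOURCES/
tar xzf glusterfs-3.3.0.tar.gz
cp glusterfs-3.3.0/glusterfs.spec ~/rpmbuild/SPECS/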

Next, I built my Gluster 3.3 packages for F17:

rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec

Then, I copied the packages over to my OpenStack test machine and updated the glusterfs and glusterfs-fuse packages that had been pulled in as dependencies during my OpenStack on F17 install:

scp ~/rpmbuild/RPMS/x86_64/glusterfs-* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./glusterfs-3.3.0-1.fc17.x86_64.rpm glusterfs-fuse-3.3.0-1.fc17.x86_64.rpm

Gluster+OpenStack: The Easy Way

As described on the Connecting with OpenStack Resource Page on the Gluster wiki, there are two ways of using Gluster with OpenStack. The first is super simple, and amounts to locating the images for your running OpenStack instances on Gluster by simply mounting a Gluster volume at the spot where OpenStack expects to place these images. On the resource page, there’s a PDF titled OpenStack VM Storage Guide that steps through the process of creating a four node distributed-replicated volume and mounting it in the right spot. Easy.
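The mount itself is a one-liner. A sketch, assuming the default location where nova keeps its instance images (/var/lib/nova/instances) and a Gluster volume named vmstore served from a host called gluster1:

mount -t glusterfs gluster1:/vmstore /var/lib/nova/instances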

I did this with my test OpenStack setup, and it worked as advertised. I kicked off a yum update operation in one of my OpenStack instances, and then ungracefully shut down (pulled the virtual plug on) the gluster VM node where the instance was calling home. I watched as the yum update process paused for a short time before continuing happily enough on one of the other Gluster nodes I’d configured.

Where things got quite a bit trickier was with the second OpenStack-Gluster integration option, that for Unified Object and File Storage. Gluster’s UFO is based on a slightly modified version of OpenStack Swift, where Gluster brings the storage, and users are able to access files and content either as objects, through Swift’s REST interface, or as regular files, through Gluster’s FUSE or NFS mounts.

Building Gluster UFO Packages

Again, I started by building some packages. The Gluster download site offers UFO (aka gluster-swift) packages for enterprise Linux 6 (RHEL and its relabeled children). There’s a source tarball, but unlike the main glusterfs tarball, the gluster-swift tarball doesn’t include a spec file for building rpms. I located spec files for gluster-swift and gluster-swift-plugin at Gluster’s github site, but these spec files referenced a handful of patches that weren’t in the git repository, so I wasn’t able to build them.

After Googling a while for the missing patches, I found source rpms for gluster-swift and gluster-swift-plugin in a public source repository for Red Hat Storage 2.0. Both of these packages are a hair older than the ones in the Gluster download location: gluster-swift-1.4.8-3 vs. 1.4.8-4 and gluster-swift-plugin-1.0-1 vs. 1.0-2, but I forged ahead with these.

I had to tweak the SPEC files slightly, changing references to the python2.6 in el6 to the python2.7 that ships with Fedora 17, but I managed to build both of them without much hassle, before copying them over to my openstack test machine and installing them:

rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift.spec
rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift-plugin.spec
scp ~/rpmbuild/RPMS/noarch/gluster-swift* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./gluster-swift-*

Gluster-Swift + OpenStack

Over on our openstackF17 machine, the gluster-swift package has placed a bunch of configuration files in /etc/swift. We’re going to leave most of these configurations in place, but we need to make a few modifications, starting with fs.conf:

vi /etc/swift/fs.conf

I’m using the four VM gluster cluster described in the OpenStack VM Storage Guide I mentioned above, which is remote from my openstack server, so I have to change “mount_ip” to the ip of one of my gluster servers, and change “remote_cluster” to yes. If my gluster volume, or part of it, was local, I could have left these values alone.
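The relevant lines in my fs.conf ended up looking like this (the IP is one of my gluster nodes; yours will differ):

mount_ip = 10.1.1.11
remote_cluster = yes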

The other thing required to make the remote gluster cluster bit work is enabling passwordless ssh login between my openstackF17 machine and the gluster server I pointed to in fs.conf:

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@gluster1

More config file editing. Next up, proxy-server.conf. In order to get gluster-swift working with OpenStack’s Keystone authentication service, we’re going to grab some of the configuration info from the Fedora 17 OpenStack guide:

vi /etc/swift/proxy-server.conf

Change the “pipeline” line under [pipeline:main], adding “authtoken keystone” to the line, and removing “tempauth”:

pipeline = healthcheck cache authtoken keystone proxy-server

And then add these sections to correspond with our added elements. As to the “are these needed” comment question, that comes from the howto in the Fedora wiki, and I don’t know the answer, so I left it in:

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
# ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN

If you followed along with the Fedora 17 OpenStack howto, you’ll have a file (keystonerc) in your home directory that sets your OpenStack environment variables. Let’s make sure our variables are set correctly:

. ~/keystonerc

Next, we run these commands to replace some placeholder values in our proxy-server.conf file:

openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN

Now we add the Swift service and endpoint to Keystone:

SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
echo $SERVICEID # just making sure we got a SERVICEID
keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s"

Gluster-swift will be looking for Gluster volumes that correspond to Swift account names. We need to figure out what names we need, and create Gluster volumes with those names. We ask Keystone about our account names:

keystone tenant-list

In my setup, this turns up four accounts:

+----------------------------------+--------------------+---------+
|                id                |        name        | enabled |
+----------------------------------+--------------------+---------+
| 18571133bf9b4236be0ad45f2ccff135 | invisible_to_admin | True    |
| 1918b675fa1f4b7f87c2bb3688f6f2f7 | admin              | True    |
| 42c41f15e6a24fa5b105e89b60af18fb | demo               | True    |
| decd4d68f50345eeb2eae090e2d32dcb | service            | True    |
+----------------------------------+--------------------+---------+

So far, I’ve needed volumes for the admin and demo accounts. You’ll need to name your Gluster volumes after the value in the “id” column. Following the four node example in the OpenStack VM Storage Guide, the command (which you must run from one of your gluster nodes) will look like this, substituting your own Gluster node IPs and your volume name values from keystone tenant-list:

gluster volume create 42c41f15e6a24fa5b105e89b60af18fb replica 2 10.1.1.11:/vmstore 10.1.1.12:/vmstore 10.1.1.13:/vmstore 10.1.1.14:/vmstore

Run the command again so you have volumes that correspond to both the admin and demo tenant ids.
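For example, the admin volume from my tenant list would look something like the following. Note that a brick directory can only belong to one volume, so the /vmstore2 path here is just an illustrative second directory, and remember that each volume needs to be started after it’s created:

gluster volume create 1918b675fa1f4b7f87c2bb3688f6f2f7 replica 2 10.1.1.11:/vmstore2 10.1.1.12:/vmstore2 10.1.1.13:/vmstore2 10.1.1.14:/vmstore2
gluster volume start 1918b675fa1f4b7f87c2bb3688f6f2f7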

Each Gluster volume needs its own mount point. You don’t have to create your mount points manually on each server. And again, the Gluster volume doesn’t have to live on a remote cluster. Any properly named Gluster volume on a server that gluster-swift knows about (from fs.conf, which we modded earlier) and can access passwordlessly (red spell check underline be damned) ought to work.

All right, almost done. Start or restart memcached, and start gluster-swift:

service memcached restart
swift-init main start

Now, we should be able to test gluster-swift:

swift list

If all is well, gluster-swift should try to mount the admin volume (the keystonerc file is telling swift to use the admin account), and satisfying hard drive activity gurgling sounds should ensue. If you run the command “mount” you should see that you have a Gluster volume mounted at the mount point “/mnt/gluster-object/AUTH_YOURADMINVOLNAME”. Like so:

gluster1:1918b675fa1f4b7f87c2bb3688f6f2f7 on /mnt/gluster-object/AUTH_1918b675fa1f4b7f87c2bb3688f6f2f7 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

You can test uploading to the volume from the command line:

swift upload container /path/to/file

You ought to be able to ssh in to one of your gluster nodes, navigate to the mount point that corresponds to your admin account volume, and see the file you just uploaded.

For a more GUI-ful experience, we can check out our snazzy gluster-swift store from the OpenStack dashboard (you’ll have installed this if you followed the OpenStack Fedora 17 howto). Make sure your firewall is down or you have port 80 open, and restart your web server for good measure:

service httpd restart

Visit the dashboard at http://YOUROPENSTACKSERVERIP/dashboard, and log in with admin and (assuming you retained the password default from the howto) verybadpass. In the left nav column, click the “Project” tab. The default project is “demo” (which is why we had to create a demo volume). In the left nav column, under “Object Store,” click “Containers,” and create, delete, upload to, download from, etc. at will. In the background, just as with the “swift list” command, gluster-swift should be reacting to the dashboard’s requests by mounting your Gluster volume.

UFO in Action

For Further Study: Glance on Gluster-Swift

By default, OpenStack’s image-hosting service, Glance, stores its images in a local directory, but it’s possible to use Swift as a back end for that image storage, by changing the backend listed in /etc/glance/glance-api.conf from “file” to “swift” and by correctly hooking up the authentication details there. I’ve yet to get this working, though.
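For reference, here’s roughly what I expect the relevant glance-api.conf bits to look like, using the Essex-era option names and the values from this walkthrough; treat it as an untested sketch, since, as I said, I haven’t gotten it working yet:

default_store = swift
swift_store_auth_address = http://127.0.0.1:5000/v2.0/
swift_store_user = admin:admin
swift_store_key = verybadpass
swift_store_create_container_on_put = True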

In this OpenStack on Ubuntu howto, the author notes that a glance package from a particular PPA is required to make this work, due to some issue in the latest (as of 5/28/12) glance package from the official repos. I took a peek at the patches included in this substitute package, and couldn’t immediately tell what, if anything, might be missing from Fedora’s glance package.

If you’re still with me, and you’re interested in setting up all or part of this yourself, don’t hesitate to ask me questions–I puzzled over this for a week or so, and if I can save you some time, that’ll make my toiling more worthwhile to me. Fire away in the comments below, or hit me up on IRC. I’m jbrooks on freenode IRC, and #gluster is one of the channels where you can find me.

reinstall

I reinstalled Fedora 17 on my main work machine yesterday — I was having weird issues with gnome-boxes and virt-manager, and thought my problems might have stemmed from the weird libvirt machinations I undertook to get oVirt running on my laptop w/o disabling NetworkManager.

I always keep my home directory in a separate partition to allow for easy clean installs w/o losing my data, but this time around I copied my home directory off to a separate drive to start completely fresh — I’ll ferry needed files and folders back as needed.

One thing I had to go recreate on my new install was a set of tweaks for providing decent font rendering on Fedora. Without these steps, fonts render pretty poorly. I follow the steps in this blog post to mimic Ubuntu’s font rendering options, and then create the .fonts.conf file described here to cajole Google Chrome into obeying the rules laid out in the former step.
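The posts I link to aren’t reproduced here, but the kind of ~/.fonts.conf file they describe is the standard Ubuntu-style fontconfig snippet that floats around for this purpose; something along these lines (an illustrative sketch, not necessarily the exact file):

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
    <edit name="hinting" mode="assign"><bool>true</bool></edit>
    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
    <edit name="rgba" mode="assign"><const>rgb</const></edit>
    <edit name="lcdfilter" mode="assign"><const>lcddefault</const></edit>
  </match>
</fontconfig>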

I hereby remind myself to look into exactly why it is that the patent fear fairies that prompt Fedora to ship with a crappy-looking font config don’t equally menace Ubuntu. I realize that my employer, with its relatively deeper pockets, presents a more attractive lawsuit target compared to Ubuntu’s sponsor, but if Fedora were to shun every piece of potentially patent encumbered software, there’d be no Fedora at all.

Where to draw the line?

Navel Gazery, Ubuntu, and Fedora

Welcome to the first non-lorem ipsum post on this, my non-work blog, where I may end up writing about many of the things I might write about on my work blog but don’t, because they seem way too navel-gazy.

One such thing: the ongoing (sort of) battle between different Linux distributions on my work notebook. I used to jump around a lot between different desktop OSes: Windows 95, Windows 98, Windows 2000, Windows XP, BeOS, SuSE Linux, Red Hat Linux, Fedora, Ubuntu, OpenSUSE, Gentoo, Fedora, Ubuntu, Ubuntu, Ubuntu, Ubuntu… Continue reading “Navel Gazery, Ubuntu, and Fedora”