Fedora 17, OpenStack Essex & Gluster 3.3: All Smushed Together

Within the past couple weeks, Fedora and Gluster rolled out new versions, packed with too many features to discuss in a single blog post. However, a couple of the stand-out updates in each release overlap neatly enough to tackle them together–namely, the inclusion of OpenStack Essex in Fedora 17 and support for using Gluster 3.3 as a storage backend for OpenStack.

I’ve tested OpenStack a couple of times in the past, and I’m happy to report that while the project remains a fairly complicated assemblage of components, the community around OpenStack has done a good job documenting the process of setting up a basic test rig. Going head to head with Amazon Web Services, even within the confines of one’s own organization, won’t be a walk in the park, but it’s fairly easy to get OpenStack up and running in a form suitable for further learning and experimentation.

OpenStack on Fedora 17

The getting started with OpenStack on Fedora 17 howto that I followed for my latest test involves quite a bit of command-line cut and paste, but it didn’t take long for me to go from a minimal-install Fedora 17 virtual machine to a single-node OpenStack installation, complete with compute, image hosting, authentication, and dashboard services–everything I needed to launch VMs, register images, and manage everything from the comfort of a web UI.

A couple of notes: I did everything on this minimal-install Fedora machine as root–since this is a soon-to-be-blown-away test VM, I didn’t bother to create additional users. You may need to sprinkle in some sudos if you’re running as non-root. Also, I hit at least one issue with SELinux (related to glance) during my tests. I don’t turn off SELinux preemptively, but once I hit an error on a test box, I throw it into permissive mode.
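If you hit similar SELinux snags, switching to permissive mode takes one command, and lasts until the next reboot:

getenforce # check the current mode
setenforce 0 # permissive until reboot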

Also, I elected to run the whole show (the OpenStack part of it, at least) within a single virtual machine running on my home oVirt installation, so the performance of my guest instances was quite poor, but everything worked well enough for me to take OpenStack for a spin and get to fiddling with trickier OpenStack topics, such as…

The one OpenStack element that the Fedora howto touches on only briefly is OpenStack Swift, the object storage system that serves as OpenStack’s answer to Amazon’s S3. Here’s what the howto has to say about Swift:

These are the minimal steps required to setup a swift installation with keystone authentication, this wouldn’t be considered a working swift system but at the very least will provide you with a working swift API to test clients against, most notably it doesn’t include replication, multiple zones and load balancing.


(Configure swift with keystone)

What an ideal segue to Gluster 3.3, a storage software project with replication and load balancing as its stock in trade. The Gluster portion of my tests was quite a bit trickier than the OpenStack on Fedora part had been, but I learned a lot about Gluster and OpenStack along the way.

Building Gluster 3.3 Packages

First off, Gluster 3.3 shipped a bit after Fedora 17, and the version of Gluster available in the Fedora software repositories is still 3.2. What’s more, the 3.3 packages offered by the Gluster project target Fedora 16, not 17. The Fedora folder on the Gluster download server doesn’t include any source rpms, but I found a spec file for building Fedora rpms in the Gluster source tarball on the download server.

On my Fedora 17 notebook, I fetched the build dependencies for Gluster 3.2 using the command yum-builddep from the yum-utils package:

sudo yum-builddep glusterfs

I grabbed the file glusterfs.spec from the glusterfs-3.3.0.tar.gz tarball, dropped it in ~/rpmbuild/SPECS, and put the tarball into ~/rpmbuild/SOURCES. If you don’t have rpm-build installed on your Fedora machine, you’ll need to install that, as well.
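In command form, that prep looks something like this, assuming you’ve already fetched the tarball into your working directory (the spec’s path inside the tarball may vary slightly):

yum install -y rpm-build
mkdir -p ~/rpmbuild/SPECS ~/rpmbuild/SOURCES
cp glusterfs-3.3.0.tar.gz ~/rpmbuild/SOURCES/
tar -xzf glusterfs-3.3.0.tar.gz
cp glusterfs-3.3.0/glusterfs.spec ~/rpmbuild/SPECS/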

Next, I built my Gluster 3.3 packages for F17:

rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec

Then, I copied the packages over to my OpenStack test machine and updated the glusterfs and glusterfs-fuse packages that had been pulled in as dependencies during my OpenStack on F17 install:

scp ~/rpmbuild/RPMS/x86_64/glusterfs-* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./glusterfs-3.3.0-1.fc17.x86_64.rpm glusterfs-fuse-3.3.0-1.fc17.x86_64.rpm

Gluster+OpenStack: The Easy Way

As described on the Connecting with OpenStack Resource Page on the Gluster wiki, there are two ways of using Gluster with OpenStack. The first is super simple, and amounts to locating the images for your running OpenStack instances on Gluster by mounting a Gluster volume at the spot where OpenStack expects to place these images. On the resource page, there’s a PDF titled OpenStack VM Storage Guide that steps through the process of creating a four-node distributed-replicated volume and mounting it in the right spot. Easy.
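The mount itself amounts to a single command. Here’s a sketch, assuming the vmstore volume from the guide’s example and nova’s default instances directory (check /etc/nova/nova.conf if you’ve relocated it):

mount -t glusterfs gluster1:/vmstore /var/lib/nova/instances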

I did this with my test OpenStack setup, and it worked as advertised. I kicked off a yum update operation in one of my OpenStack instances, and then ungracefully shut down (pulled the virtual plug on) the Gluster VM node that the instance called home. I watched as the yum update process paused for a short time before continuing happily enough on one of the other Gluster nodes I’d configured.

Where things got quite a bit trickier was with the second OpenStack-Gluster integration option: Unified Object and File Storage. Gluster’s UFO is based on a slightly modified version of OpenStack Swift, in which Gluster brings the storage, and users are able to access files and content either as objects, through Swift’s REST interface, or as regular files, through Gluster’s FUSE or NFS mounts.

Building Gluster UFO Packages

Again, I started by building some packages. The Gluster download site offers UFO (aka gluster-swift) packages for enterprise Linux 6 (RHEL and its relabeled children). There’s a source tarball, but unlike the main glusterfs tarball, the gluster-swift tarball doesn’t include a spec file for building rpms. I located spec files for gluster-swift and gluster-swift-plugin at Gluster’s github site, but these spec files referenced a handful of patches that weren’t in the git repository, so I wasn’t able to build them.

After Googling a while for the missing patches, I found source rpms for gluster-swift and gluster-swift-plugin in a public source repository for Red Hat Storage 2.0. Both of these packages are a hair older than the ones in the Gluster download location: gluster-swift-1.4.8-3 vs. 1.4.8-4 and gluster-swift-plugin-1.0-1 vs. 1.0-2, but I forged ahead with these.

I had to tweak the spec files slightly, changing references to the python2.6 in el6 to the python2.7 that ships with Fedora 17, but I managed to build both of them without much hassle, before copying them over to my OpenStack test machine and installing them:

rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift.spec
rpmbuild -bb ~/rpmbuild/SPECS/gluster-swift-plugin.spec
scp ~/rpmbuild/RPMS/noarch/gluster-swift* root@openstackF17:/root
ssh root@openstackF17 yum install -y ./gluster-swift-*
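For reference, the python tweak amounts to a string swap in each spec file, which you could script with something like this (the exact python references in your spec files may differ):

sed -i 's/python2\.6/python2.7/g' ~/rpmbuild/SPECS/gluster-swift.spec ~/rpmbuild/SPECS/gluster-swift-plugin.spec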

Gluster-Swift + OpenStack

Over on our openstackF17 machine, the gluster-swift package has placed a bunch of configuration files in /etc/swift. We’re going to leave most of these files in place, but we need to make a few modifications, starting with fs.conf:

vi /etc/swift/fs.conf

I’m using the four-VM Gluster cluster described in the OpenStack VM Storage Guide I mentioned above, which is remote from my OpenStack server, so I have to change “mount_ip” to the IP of one of my Gluster servers, and change “remote_cluster” to yes. If my Gluster volume, or part of it, were local, I could have left these values alone.
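After those edits, the relevant lines in my fs.conf looked like this (the rest of the stock file stays as-is; substitute the IP of one of your own Gluster servers):

mount_ip = 10.1.1.11
remote_cluster = yes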

The other thing required to make the remote gluster cluster bit work is enabling passwordless ssh login between my openstackF17 machine and the gluster server I pointed to in fs.conf:

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@gluster1

More config file editing. Next up, proxy-server.conf. In order to get gluster-swift working with OpenStack’s Keystone authentication service, we’re going to grab some of the configuration info from the Fedora 17 OpenStack guide:

vi /etc/swift/proxy-server.conf

Change the “pipeline” line under [pipeline:main], adding “authtoken keystone” to the line, and removing “tempauth”:

pipeline = healthcheck cache authtoken keystone proxy-server

Then add these sections to correspond with our added pipeline elements. The “are these needed” comment comes from the howto in the Fedora wiki, and I don’t know the answer, so I left it in:

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
# ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN

If you followed along with the Fedora 17 OpenStack howto, you’ll have a file (keystonerc) in your home directory that sets your OpenStack environment variables. Let’s make sure our variables are set correctly:

. ~/keystonerc
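If you’re curious what’s in there, the keystonerc from the Fedora howto exports your credentials and endpoint information, along these lines (your token and password values will differ, and I’m reconstructing this from memory):

export ADMIN_TOKEN=ADMINTOKEN
export OS_USERNAME=admin
export OS_PASSWORD=verybadpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/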

Next, we run these commands to replace some placeholder values in our proxy-server.conf file:

openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN

Now we add the Swift service and endpoint to Keystone:

SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
echo $SERVICEID # just making sure we got a SERVICEID
keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_$(tenant_id)s"
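You can confirm that the endpoint registered correctly before moving on:

keystone endpoint-list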

Gluster-swift will be looking for Gluster volumes that correspond to Swift account names. We need to figure out what names we need, and create Gluster volumes with those names. We ask Keystone about our account names:

keystone tenant-list

In my setup, this turns up four accounts:

+----------------------------------+--------------------+---------+
|                id                |        name        | enabled |
+----------------------------------+--------------------+---------+
| 18571133bf9b4236be0ad45f2ccff135 | invisible_to_admin | True    |
| 1918b675fa1f4b7f87c2bb3688f6f2f7 | admin              | True    |
| 42c41f15e6a24fa5b105e89b60af18fb | demo               | True    |
| decd4d68f50345eeb2eae090e2d32dcb | service            | True    |
+----------------------------------+--------------------+---------+

So far, I’ve needed volumes for the admin and demo accounts. You’ll need to name your Gluster volumes after the values in the “id” column. Following the four-node example in the OpenStack VM Storage Guide, the command (which you must run from one of your gluster nodes) will look like this, substituting your own Gluster node IPs and your volume name values from keystone tenant-list:

gluster volume create 42c41f15e6a24fa5b105e89b60af18fb replica 2 10.1.1.11:/vmstore 10.1.1.12:/vmstore 10.1.1.13:/vmstore 10.1.1.14:/vmstore

Run the command again so you have volumes that correspond to both the admin and demo tenant ids.
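In my case, that meant one more create for the admin tenant id, plus a start for each new volume. Note that Gluster won’t let two volumes share the same brick directories, so give the second volume bricks of its own (the /adminstore path here is just an invented example):

gluster volume create 1918b675fa1f4b7f87c2bb3688f6f2f7 replica 2 10.1.1.11:/adminstore 10.1.1.12:/adminstore 10.1.1.13:/adminstore 10.1.1.14:/adminstore
gluster volume start 1918b675fa1f4b7f87c2bb3688f6f2f7
gluster volume start 42c41f15e6a24fa5b105e89b60af18fb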

Each Gluster volume gets its own mount point under /mnt/gluster-object, but you don’t have to create or mount these manually on each server; gluster-swift takes care of that itself. And again, the Gluster volume doesn’t have to live on a remote cluster. Any properly named Gluster volume on a server that gluster-swift knows about (from fs.conf, which we modded earlier) and can access passwordlessly (red spell check underline be damned) ought to work.

All right, almost done. Start or restart memcached, and start gluster-swift:

service memcached restart
swift-init main start

Now, we should be able to test gluster-swift:

swift list

If all is well, gluster-swift should try to mount the admin volume (the keystonerc file is telling swift to use the admin account), and satisfying hard drive activity gurgling sounds should ensue. If you run the command “mount” you should see that you have a Gluster volume mounted at the mount point “/mnt/gluster-object/AUTH_YOURADMINVOLNAME”. Like so:

gluster1:1918b675fa1f4b7f87c2bb3688f6f2f7 on /mnt/gluster-object/AUTH_1918b675fa1f4b7f87c2bb3688f6f2f7 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

You can test uploading to the volume from the command line:

swift upload container /path/to/file

You ought to be able to ssh in to one of your gluster nodes, navigate to the mount point that corresponds to your admin account volume, and see the file you just uploaded.

For a more GUI-ful experience, we can check out our snazzy gluster-swift store from the OpenStack dashboard (you’ll have installed this if you followed the OpenStack Fedora 17 howto). Make sure your firewall is down or you have port 80 open, and restart your web server for good measure:

service httpd restart

Visit the dashboard at http://YOUROPENSTACKSERVERIP/dashboard, and log in with admin and (assuming you retained the password default from the howto) verybadpass. In the left nav column, click the “Project” tab. The default project is “demo” (which is why we had to create a demo volume). In the left nav column, under “Object Store,” click “Containers,” and create, delete, upload to, download from, etc. at will. In the background, just as with the “swift list” command, gluster-swift should be reacting to the dashboard’s requests by mounting your Gluster volume.

UFO in Action

For Further Study: Glance on Gluster-Swift

By default, OpenStack’s image-hosting service, Glance, stores its images in a local directory, but it’s possible to use Swift as a back end for that image storage, by changing the backend listed in /etc/glance/glance-api.conf from “file” to “swift” and by correctly hooking up the authentication details there. I’ve yet to get this working, though.
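For anyone who wants to pick up where I left off, the glance-api.conf settings in question look roughly like this (these are the Essex-era option names as I understand them, and since I haven’t gotten this working, treat it as a starting point rather than a recipe):

default_store = swift
swift_store_auth_address = http://127.0.0.1:5000/v2.0/
swift_store_user = admin:admin
swift_store_key = verybadpass
swift_store_create_container_on_put = True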

In this OpenStack on Ubuntu howto, the author notes that a glance package from a particular PPA is required to make this work, due to some issue in the latest (as of 5/28/12) glance package from the official repos. I took a peek at the patches included in this substitute package, and couldn’t immediately tell what, if anything, might be missing from Fedora’s glance package.

If you’re still with me, and you’re interested in setting up all or part of this yourself, don’t hesitate to ask me questions–I puzzled over this for a week or so, and if I can save you some time, that’ll make my toiling more worthwhile to me. Fire away in the comments below, or hit me up on IRC. I’m jbrooks on freenode IRC, and #gluster is one of the channels where you can find me.

reinstall

I reinstalled Fedora 17 on my main work machine yesterday — I was having weird issues with gnome-boxes and virt-manager, and thought my problems might have stemmed from the weird libvirt machinations I undertook to get oVirt running on my laptop w/o disabling NetworkManager.

I always keep my home directory in a separate partition to allow for easy clean installs w/o losing my data, but this time around I copied my home directory off to a separate drive to start completely fresh — I’ll ferry needed files and folders back as needed.

One thing I had to go recreate on my new install was a set of tweaks for providing decent font rendering on Fedora. Without these steps, fonts render pretty poorly. I follow the steps in this blog post to mimic Ubuntu’s font rendering options, and then create the .fonts.conf file described here to cajole Google Chrome into obeying the rules laid out in the former step.

I hereby remind myself to look into exactly why it is that the patent fear fairies that prompt Fedora to ship with a crappy-looking font config don’t equally menace Ubuntu. I realize that my employer, with its relatively deeper pockets, presents a more attractive lawsuit target compared to Ubuntu’s sponsor, but if Fedora were to shun every piece of potentially patent encumbered software, there’d be no Fedora at all.

Where to draw the line?

stuck, volume 1

So, I’m working my way through the OpenShift Origin BYO PaaS wiki page, but I’m stuck right now near the finish line.

On Saturday, I was cranking through the howto, highlighting and middle-click pasting my way to BYOP nirvana, until I hit an authentication issue when it was time to create a domain on my newly-minted PaaS.

After taking a break for a couple days, I realized that I’d simply forgotten to point my rhc client at the right host — rhc defaults to openshift.redhat.com, and if there’s an account on the Red Hat hosted server with the user name “admin” I can confirm that that user’s password is not “admin” as well.

Cinch. I’d be up and running in no time. Except I hit another issue — my host complained: “Permission denied – /var/www/stickshift/broker/Gemfile.lock” and there was no such file on my host. With a bit of help from #openshift-dev, I got past the error by running “bundle install” in the broker directory and then chown-ing Gemfile.lock apache:apache.
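In command form:

cd /var/www/stickshift/broker
bundle install
chown apache:apache Gemfile.lock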

But I hit another error message: “Failed to authenticate user ‘stickshift’ on db ‘stickshift_broker_dev’.” Word in #openshift-dev is that this is a mongo issue that someone else following the BYO instructions recently encountered as well.

I’ll circle back to this, but for now I’m going to proceed using the OpenShift Origin image I installed from the LiveCD. I’m copying that image from my notebook, where I’ve been running it on KVM using virt-manager, to my newly-assembled oVirt rig (which I mean to blog about soon) using the handy virt-v2v utility. [dang, stuck there, too]

More to come…

[UPDATE 5/20/12]

Another weekend, another shot at the BYOP. I restored my host back to a “fresh install” snapshot, followed all the directions, and am stuck again at the end of the directions. Getting the error: /usr/lib/ruby/gems/1.8/gems/uplift-bind-plugin-0.8.3/lib/uplift-bind-plugin/uplift/bind_plugin.rb:8: uninitialized constant StickShift::DnsService (NameError).

What I Look For In An Open Source Project Web Site

Perusing new open source software projects has long been both a job requirement and a pastime for me. Over the past decade-plus I’ve come across a ton of open source project web sites, running the gamut from good to bad — with a healthy contingent of ugly in the mix.

Of course, it takes more than a sweet web site to make an open source project worth writing about or contributing to — a project that offers up a lousy solution to a real problem, or, worse, seeks to answer an unasked question has more fundamental issues to tackle. On the other hand, you could have a project ideal for scratching, elegantly, some global itch, but if the project does a poor job conveying its whys and hows, it could end up overlooked.

The basic information I look for in an open source project home page, either right up front or no more than one clearly-labeled link deep, fits (or, at least, can be made to fit) into the classic Five W’s (and an H) model:

What is the project about / what does it do / what is it for?

Pretty straightforward. If you see a tweet from an open source savvy pal that says something low-context like, “really impressed with what they’ve got going on at $LINK,” and you click, it shouldn’t take long, at all, for you to understand what the project is about.

Initially, I’d left mention of license till the bottom of this post, but for an open source project, that’s another “what” I like to see answered right up top: what’s the license?

Why should I care / use / contribute?

What is it and why should I care sort of go together: a first-impressions one-two punch. OK, so I see that this project is (say) a command-line to-do manager. What makes this one better or different than the million other to-do apps out there? A well-written “what is it” blurb ought also to answer the “why should I care” query.

Where do I get it (packages / binaries / source)?

All right, the project does X and does it in a way that seems sufficiently interesting. Where do I get it? If the project is packaged up for particular operating systems, which ones and where do I find those packages? Where’s the source?

How do I use it / get involved?

When I’ve gotten interested enough in a project to learn about what it’s supposed to do and where I can go to get it, I like to take it for a short spin. The most effective projects make it easy for users to get their feet wet by offering up some short getting started instructions.

Who’s working on the project, and how can I connect w/ them?

Over the past few years, when I come across a project that seems interesting, but perhaps not yet ready for prime time (or just to be useful to me), I scan the project site for the Twitter feeds or the blogs of the project’s key developers to follow or add to my RSS feed reader. Sometimes these links lead me to other interesting projects, and future tweets or blog posts serve to jog my memory about projects I’ve looked into but didn’t end up digging into in earnest.

When was the last release, repo activity, project update (proof of life / sell by date)?

The Internet is littered with the floating husks of dead open source projects, and depending on the web site, one’s slow-moving-but-not-dead project can look exactly like a project that’s gone, never to return. Abandoned projects tend not to be very interesting to me, but that doesn’t stop my friendly neighborhood search engine from suggesting them. When I’m checking out a project, I stay on the lookout for things like commit dates in source repositories, for release dates on binaries, and for mailing list activity. The best projects don’t make me dig for long.

Those are the basics I look for in an open source project web site, though there are other, specific details I like to see right up front, like the license a project uses. Is it something familiar, like GPL or Apache, or is it some wacky vanity spin of the MPL that requires careful reading before you get an idea of what is and isn’t allowed?

What do you look for in an open source project web site?

Building My Own PaaS with OpenShift Origin

I’m working through the OpenShift Origin Build Your Own PaaS howto, which says:

Several of the cartridge packages have additional third party dependencies. These have not yet been resolved for the open source environment. Work is actively progressing.

On my Fedora 16 host, these are the cartridges that wouldn’t install for missing dependencies:

  • cartridge-jbossas-7.noarch : Provides JBossAS7 support
  • cartridge-jenkins-1.4.noarch : Provides jenkins-1.4 support

These are the ones that would install:

  • cartridge-10gen-mms-agent-0.1.noarch : Embedded 10gen MMS agent for performance monitoring of MongoDB
  • cartridge-cron-1.4.noarch : Embedded cron support for express
  • cartridge-diy-0.1.noarch : Provides diy support
  • cartridge-jenkins-client-1.4.noarch : Embedded jenkins client support for express
  • cartridge-mongodb-2.0.noarch : Embedded mongodb support for OpenShift
  • cartridge-mysql-5.1.noarch : Provides embedded mysql support
  • cartridge-nodejs-0.6.noarch : Provides Node-0.6 support
  • cartridge-perl-5.10.noarch : Provides mod_perl support
  • cartridge-php-5.3.noarch : Provides php-5.3 support
  • cartridge-phpmyadmin-3.4.noarch : Embedded phpMyAdmin support for express
  • cartridge-python-3.2.noarch : Provides python-wsgi-3.2 support
  • cartridge-ruby-1.1.noarch : Provides ruby rack support running on Phusion Passenger

More to come.

Run OpenShift Origin from LiveCD, and Make it Stick

The OpenShift Origin LiveCD will have you up and running with the code that backs Red Hat’s PaaS in a flash, but installing the LiveCD to your hard drive requires a few workaround steps.

[UPDATE: Check out wiki-fied, updated version of this howto at the OpenShift Origin community site.]

Today, Red Hat delivered on its pledge to open the source code and development process behind its Platform as a Service offering, OpenShift. To help avoid confusion between the Red Hat-hosted service and the open source project and code base, the project is named OpenShift Origin.

The OpenShift Origin source code is available at github.com/openshift, and software packages for Fedora 16 and RHEL 6 are available for download and installation, as well. At this point, though, the fastest way to get up and running with OpenShift Origin is to download this LiveCD image and fire it up on a VM or spare machine.

The LiveCD will boot you straight into a graphical desktop session, based on Fedora, from which you can create a domain and some sample applications. It couldn’t be much easier to use, but as with most LiveCDs, the environment goes away once you reboot. Also, the LiveCD sets you up to interact with any applications you install through the web browser and terminal window in the LiveCD environment. I prefer to use the browser in my regular desktop environment.

Fedora LiveCDs come with a nifty “install to hard drive” option, but in order to install the OpenShift Origin LiveCD to a drive (whether physical or virtual) a couple workaround steps are currently required:

  1. Download the LiveCD and boot a VM with it. The project wiki includes instructions for setting up a VM with VirtualBox. I used KVM and virt-manager on my Fedora 17 desktop.
  2. In the terminal window that pops up once the LiveCD has finished booting, type “su” to become the superuser.
  3. The OpenShift Origin environment requires that NetworkManager be disabled, but the system installer requires NetworkManager. Enable NetworkManager by adding the line “NM_CONTROLLED=yes” (no quotes) to your network adapter’s config file. Assuming your network adapter is named eth0, this command ought to do the trick: “echo NM_CONTROLLED=yes >> /etc/sysconfig/network-scripts/ifcfg-eth0”
  4. Restart the network service: “service network restart”
  5. Start the NetworkManager service: “service NetworkManager start”
  6. Start the installer: “liveinst”
  7. Go through the text-based install steps, then reboot your VM and log in as root.
  8. I’m not positive which firewall ports must be open, so for now I’m just disabling the firewall with: “system-config-firewall-tui”
  9. Run “ifconfig” to figure out the IP address of your VM, and head out to your regular desktop environment to carry out a bit more configuration, and to start using your mini me PaaS installation.
  10. If you’re interested enough in OpenShift to be running OpenShift Origin on one of your own machines, I’m assuming that you’ve already tried out the full-sized, Red Hat-hosted OpenShift service. If so, you’ll want to create a new config file to use with your locally-hosted OpenShift instance; otherwise the rhc client will default to talking to the OpenShift servers off in the clouds. I created a file called express.conf containing two lines: “default_rhlogin=admin” and “libra_server=YOUR_VM_ADDRESS”.
  11. Next, I created a domain on my OpenShift Origin instance, making sure to append the path to my alternate config file: “rhc domain create -n origin --config=/home/jason/Desktop/express.conf”. When prompted for a password, use “admin”.
  12. Now, you’re ready to install an application. I’m partial to WordPress as a demo app (my blog is powered by WordPress+OpenShift) but if you’d like to try a different app, here’s a big list of easily-deployed quickstarts.
  13. Start following the instructions at the WordPress quickstart, making sure to append your alternate config file like so:
    rhc app create -a wordpress -t php-5.3 --config=/home/jason/Desktop/express.conf
  14. Your OpenShift Origin will create the new PHP app, and then time out trying to resolve its DNS name. Since we’re interacting with our PaaS from outside of the LiveCD environment, we lose the LiveCD’s automatic DNS magic, and have to make things resolve properly on our own. I did this by adding a line to my /etc/hosts file, associating my VM address with my appname-domainname at the example.com domain to which the LiveCD defaults:
    192.168.122.147 wordpress-origin.example.com
  15. The DNS time-out message we received in step 14 includes a git clone command for pulling down your skeleton app structure from your PaaS instance. Run it.
  16. We need to give our wordpress app a mysql database to work with. There’s a command for this in the quickstart, to which we’ll again append our alternate config file:
    rhc app cartridge add -a wordpress -c mysql-5.1 --config=/home/jason/Desktop/express.conf
  17. We’re near the end. Next (and these steps are straight out of the WordPress quickstart) we cd into the app directory, hook up to the wordpress example git repo, pull the code down from there, and then push it up into our OpenShift Origin instance:
    cd wordpress
    git remote add upstream -m master git://github.com/openshift/wordpress-example.git
    git pull -s recursive -X theirs upstream master
    git push
  18. If you’re following along with me, you should now have a shiny new WordPress instance available at http://wordpress-origin.example.com, with your default admin user name and password listed in your terminal window following the “git push.”

So that’s it. You have your very own PaaS instance running on a local VM that won’t go away between reboots.

The open sourcing of OpenShift is a big deal, but the best PaaS is the one you don’t have to operate yourself. That’s why, as the only current downstream project implementing OpenShift Origin, Red Hat’s OpenShift service remains the best place for people to get acquainted with the project. Here’s hoping that not too much time passes before a bunch of rival implementations hit the scene to give Red Hat a run for its money!

more test

In general, I prefer Google+ to Twitter. I like posting more than 140 characters, and I like editing my posts if I need/want to (there are other things I like about G+, but this test post is about those first two). I noticed, recently, how people who post wp.me links onto Twitter get their posts, or a portion of their posts, attached to the tweet behind a little photo-style view media link. I’m messing with that right now.

I should say, though, that I’ve been feeling increasingly grumbly about Google+ and its RW API-lessness, and I do like Twitter nonetheless, and, uh, yeah.

Community Metrics Wrangling with MLstats and OpenShift

As loyal readers of this blog (if any such creatures did exist) might have noticed, I’ve been working with the oVirt project, which got a reboot last year when Red Hat finished open sourcing, and porting to Java, the previously .Net-based management application for its enterprise virtualization product.

Given the new start for oVirt, the project has been keen to get a handle on its community metrics, such as mailing list activity: is it growing, what’s the mix of people coming from companies on the oVirt board compared to other organizations and individuals, and so on.

While searching for suitable tools for mailing list analysis, I came across mlstats, a nifty application for slurping mailing list archives into a database, where they can be queried and made to cough up all sorts of interesting information. The application is really easy to use, and I had it up and running on my local machine, offering up pearls of oVirt mailing list wisdom, in no time at all.

I wanted to set the app up on a remote server somewhere, complete with a cron job for running the daily stats update, and with some means of displaying certain information from the db, such as daily traffic on each of the oVirt lists. I opted to deploy this mlstats-o-matic on OpenShift, the Red Hat Platform as a Service, uh, service, on which I run this blog. OpenShift instances support mysql and cron, and the service’s port forwarding feature is great for enabling database access via one’s favored desktop query tools. More on that later.

The cron job I set up on OpenShift referenced a text file containing a list of the mailing list archive page URLs for oVirt, for mlstats to chug through and deposit into the database. A second cron job ran a list of SQL queries, outputting the results in CSV form, which I then charted on static web pages using the javascript library dygraphs. To make new charts, I could build additional queries and web pages, and reference them from cron jobs.

That worked fairly well for our initial oVirt needs, but since the team I work on at Red Hat is focused on working with a broad range of open source projects, I wanted to figure out how to make it easy to use this same framework with other projects. The queries and the web pages were hard coded to the particular lists I’d chosen, so my initial take wasn’t a very cleanly repeatable solution.

More hackery ensued, and I changed the setup around to use bash scripts to assemble the queries and html pages based on whatever list of mailing lists a user might select. I’m sure I could have done it all much more elegantly, but it’s all up at github where anyone is free to set me straight. :)

I set the whole thing up as an OpenShift quickstart, so if you’re interested in engaging in some mailing list analysis action of your own, head over to Github and check it out. I made a short screencast of the process to give you an idea of how easy it is to get up and running:

The cron job will go off and update the data each day, and there really isn’t any maintenance to do. If you want to get at the data with your desktop mysql query tools, the command “rhc-port-forward -a appname” will make the database at OpenShift accessible to you locally.
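In practice, that looks something like this, assuming an app named mlstats and the mysql credentials OpenShift reported when you added the database cartridge (substitute the local port rhc-port-forward prints for mysql):

rhc-port-forward -a mlstats
mysql -h 127.0.0.1 -P 3306 -u admin -p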

Please give it a shot, and let me know how it works for you!


oVirt or No Virt: Notebook Edition

oVirt is definitely not intended to be run on your notebook, and running something oriented toward powering whole data centers on a single, portable machine seems like overkill, anyway. For a Linux-powered notebook machine like mine, virt-manager is a great tool for spinning up all manner of VMs, and — while I’ve yet to get it running properly — GNOME Boxes offers another promising option for taking advantage of the KVM hypervisor that’s built into the Linux kernel.

However, since immersing myself in oVirt is part of my job now, and since I work with a lot of VMs on my work notebook, I wanted to see if I could come up with a notebook-friendly oVirt setup. The trouble with the single-machine rig that I described in my recent oVirt howto is that setting up the bridged networking that oVirt requires means disabling NetworkManager, the handy service that makes it easy to connect to VPNs and switch between WiFi connections. I wanted to avoid disabling NetworkManager.

I spent a bit of time fiddling with nested KVM — running an oVirt rig from within a VM on my machine. I was able to get this guest-within-a-guest setup working, but it was slow and unstable. Again, more sacrifice than I was willing to countenance for this.

In the end, I got my notebook-based, NetworkManager-friendly oVirt setup working by adapting the default guest networking configuration for virt-manager, in which libvirt provides a virtual network that acts like a NAT router for guest machines.

This was a little tricky, because when you configure a machine to be used as an oVirt virtualization host, libvirt gets commandeered by a higher-level component, vdsm (pdf), such that the default virtual bridge configuration is deactivated and you’re blocked from accessing libvirt directly.

In order to restore the virtual bridge setup, you have to provide libvirt with the user name vdsm@rhevh and the password you’ll find at /etc/pki/vdsm/keys/libvirt_password. I used the command line tool virsh to redefine the active “vdsm-ovirtmgmt” network to match my previous “default” network, and serve as a NAT router for the guest VMs on my notebook.
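Roughly, the virsh steps went like this. This is a sketch from memory: virsh prompts for the vdsm@rhevh credentials, and when you edit the XML you’ll want to rename the network to vdsm-ovirtmgmt, drop the uuid line, and keep the NAT settings:

virsh net-dumpxml default > ovirtmgmt.xml
# edit ovirtmgmt.xml as described above, then swap in the new definition:
virsh net-destroy vdsm-ovirtmgmt
virsh net-undefine vdsm-ovirtmgmt
virsh net-define ovirtmgmt.xml
virsh net-start vdsm-ovirtmgmt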

After making these changes, I rolled back the bridge-building changes to my ifcfg-em1 file, and deleted the ifcfg-ovirtmgmt file I’d created in step 12 of the howto. I also added the line “net.ipv4.ip_forward = 1” to /etc/sysctl.conf — without this tweak, my VMs were able to access the network as expected, but I wasn’t able to ssh into or otherwise reach my guests from my host machine.
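That tweak, in command form:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p # apply the change without a reboot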

I’ve had oVirt set up this way on my notebook for about a week now. So far, it’s been working really well. Before I shut off or suspend my notebook, I use the oVirt web admin in Firefox to put my host into maintenance mode. I also use the web admin to access the consoles of my guest machines through SPICE.

I’m using an Iomega ix2-200 desktop NAS box for NFS and iSCSI storage. Rich tools for setting up shared storage are one of the nice things about using oVirt instead of a desktop-oriented virtualization tool.

If the whole thing explodes, or just begins to droop into general suckage, I’ll update this post to reflect that. :)