You don’t want a custom tree, you want atomic-pkglayer

Atomic system updates are at least half of how “Atomic Hosts” earn their Fallout-flavored appellation. Where a standard Fedora, RHEL or CentOS host gets its updates from a sack of RPMs downloaded from various repositories and exploded out where appropriate, the Atomic editions of these distros consume this same software in pre-exploded-and-composed-into-an-image form.

One tricky element of consuming your RPMs as a single blob is adding a package or two beyond what’s been composed into the image. I wanted to do this straightaway after learning about the atomic host concept, and I (semi)helpfully documented my progress with composing custom trees in a few different spots, most recently at: Compose Your Own Atomic Updates.

This works pretty well, but composing and rebasing to a tree of your own is sort of a heavy approach. Shouldn’t you be able to compose just part of a tree, and, like, overlay those packages on your atomic host?

OSTree mastermind Colin Walters has whipped up just such a utility, and today, I took it for a spin with CentOS Atomic Host.

I started with a CentOS Atomic Host vagrant box, which, as you’ll see, doesn’t include the fortune-mod package:

[laptop-host]$ vagrant init centos/atomic-host

[laptop-host]$ vagrant up

[laptop-host]$ vagrant ssh

[atomic-vm]$ fortune
bash: fortune: command not found

To grab Colin’s tool, I need git, which is also not included in CentOS Atomic Host, but which is available in the friendly centos/tools container. For a bit of info about the Fedora flavor of this container, see here.

[atomic-vm]$ sudo atomic run centos/tools

[tools-container]$ cd /root

[tools-container]$ git clone https://github.com/cgwalters/atomic-pkglayer/

[tools-container]$ cd atomic-pkglayer

[tools-container]$ git checkout v2016.1

atomic-pkglayer requires ostree to function, and this package is missing from the centos/tools container, so I need to grab it from the repo below. Also, fortune-mod lives in EPEL, so I’ll install that repo as well.

[tools-container]$ curl -O https://raw.githubusercontent.com/CentOS/sig-atomic-buildscripts/downstream/rhel-atomic-rebuild.repo

[tools-container]$ mv rhel-atomic-rebuild.repo /etc/yum.repos.d/

[tools-container]$ yum install ostree epel-release -y

Now I need to grab all the rpms required for fortune-mod, and install them to a pkglayer, before exiting my tools container, rebooting my atomic VM, and logging back in to the rebooted atomic VM:

[tools-container]$ mkdir pkgs

[tools-container]$ yumdownloader --resolve --destdir=pkgs fortune-mod

[tools-container]$ /root/atomic-pkglayer/atomic-pkglayer pkgs/*rpm

[tools-container]$ exit

[atomic-vm]$ sudo reboot

[laptop-host]$ vagrant ssh

Now, for some fortune:

[atomic-vm]$ fortune
Rune's Rule:
    If you don't care where you are, you ain't lost.

You can see my local overlay:

[atomic-vm]$ sudo atomic host status
  TIMESTAMP (UTC)         VERSION        ID             OSNAME                 REFSPEC
* 2016-01-29 00:30:06     local          0aa16a3e42     centos-atomic-host     <unknown origin type>
  2015-10-01 09:32:09     7.20151001     1e9838ce88     centos-atomic-host     centos-atomic-host:centos-atomic-host/7/x86_64/standard

The system is left in an un-upgradable state — I’ll need to roll back before I can grab updates again, so this overlay is temporary:

[atomic-vm]$ sudo atomic host upgrade
error: No origin/refspec in current deployment origin; cannot upgrade via ostree

[atomic-vm]$ sudo atomic host rollback
Moving '1e9838ce8879112c47c72503bbade0830e6f06dc20f5cabbf6da40a373550f69.0' to be first deployment
Transaction complete; bootconfig swap: no deployment count change: 0
Removed:
  fortune-mod-1.99.1-17.el7.x86_64
  recode-3.6-38.el7.x86_64
Successfully reset deployment order; run "systemctl reboot" to start a reboot

[atomic-vm]$ sudo systemctl reboot

[laptop-host]$ vagrant ssh

Post-rollback, the fortune command is missing once again, and my system is ready for upgrades:

[atomic-vm]$ fortune
bash: fortune: command not found

[atomic-vm]$ sudo atomic host upgrade
Updating from: centos-atomic-host:centos-atomic-host/7/x86_64/standard

New Atomic Host verb: rpm-ostree deploy

This is cool. Also, I’m trying out reblog.

Colin Walters

TL;DR: We’ve improved the host version management in Fedora Atomic Host, and you can now use atomic host deploy $version to atomically switch to a well-known version.

Longer version:

The awesome Cockpit project has been working on a UI for managing Atomic Host/OSTree updates. See this page for some background on their design.

If you download the most recent Fedora Atomic Host release, then run atomic host upgrade, you’ll get a new rpm-ostree release which in turn has a new “deploy” verb. This was created to help implement the above Cockpit design; it’s a command line talking to code equivalent to what the Cockpit UI pull request will use.

This is noteworthy for several reasons. First, it really unlocks the “server side history” aspect of OSTree for the host tree. This is similar to tagged builds in a Docker repository for a container.

In order to explain this, one…

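To try the new verb on an up-to-date host, it’s a one-liner plus a reboot. A minimal sketch, with $version standing in for a well-known version from your tree’s history:

[atomic-vm]$ sudo atomic host deploy $version

[atomic-vm]$ sudo systemctl reboot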

Cutting out the Middleman, with WordPress

For the past few years, the only posts I’ve written in this blog have been about this blog, and this post is no different. I make up for it in lack of volume: I’m averaging about one post a year.

Two years ago, I wrote about how I’d finally (almost) gotten my static blog all set up the way I wanted it, complete with self-hosted, open-source commenting functionality. Yay!

One year ago, I dumped some quick notes about how I’d re-homed my blog to Github Pages and wired it up to Github’s Travis CI service such that pushing posts or updates to my blog’s git repo would trigger a build and refresh of my site. Yay!

Of course, I still had some things to figure out, and while I eventually figured out most of it, I wasn’t blogging — I’d think about my blog, and my thoughts would immediately turn to distraction over hosting and customization and maintenance. If I’m actually going to write here, I should try to make the experience as smooth as possible, with fewer non-writing avenues around to distract me.

Middleman Retrospective

A few weeks ago, I was reflecting on my too-static blog, and on my general blog desires, which haven’t really changed since I switched away from WordPress:

(listed in order)

  1. I don’t want to admin a dynamic web app
  2. I don’t want to write/edit posts in HTML
  3. I want to use open source software
  4. I want to be able to customize my blog
  5. I want an easy editing/publishing experience
  6. I want to maintain commenting support

With my Middleman-based setup, I had point one nailed. Static HTML FTW. Point two, also nailed. I wrote in either AsciiDoc or Markdown. Point three, nailed.

Point four… not really nailed. Maybe… stapled? Basic customization chores, like adding a simple sidebar, took me a long time to figure out, and each question I answered led to other questions. As I Googled around for answers and fiddled endlessly with CSS, my thoughts often turned to a “retreat” back to WordPress, with all its point-and-clickitude.

Point five, the easy publishing experience, also nailed. The git repo to CI to github pages process I set up worked really well, and the middleman blog-writing UI that Garrett wrote is really awesome.

Point six I had working, using Juvia comments, which, being open source software, kept me in line with point three, but took me out of compliance with point one — I didn’t want to maintain some dynamic, mysql-backed web application just for my blog, and yet, my Juvia comments instance was just that. And, as a bonus, since I first deployed it, the Juvia project has been orphaned.

WordPress: A New Hope

I mentioned above that in dark moments, I felt tempted by a return/retreat to WordPress. I figured I could use the wordpress.com service, which would satisfy my nothing-to-admin, open source, commenting, and publishing ease requirements. A combination of the many nice-looking, built-in WordPress themes and widgets, along with the (paid) option of tweaking theme CSS and whatnot, promised to satisfy my customization needs, as well.

Still, I really didn’t want to write posts in HTML, or have to hand-edit the HTML behind the WordPress WYSIWYG editor. I love how markdown can be pasted straight into an email, as plain text, and remain totally readable.
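For instance, a snippet like this one reads fine in a text editor or an email, and still renders into a subhead, a link, and some inline code (the content here is just an illustration):

## Why plain text wins

Markdown keeps *emphasis*, [links](http://example.com), and `code`
legible even before anything renders them.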

It turns out that about two weeks after I quit using WordPress, the project added a “write in markdown” option to the software. You check a box somewhere, and the write-in-HTML tab in the UI becomes a write-in-markdown tab. So, that’s point two, nailed.

I signed myself up for the $99 premium WordPress subscription, which seems like a lot of money, but comes out to less than my WWE Network and Marvel Unlimited subscriptions, and goes to a company that creates and supports open source software, so… I’ll give it a year.

Getting rolling again with WordPress was easy. I pointed my domain in the right direction, picked out a theme I liked, sucked in most of my old posts from WP backups, converted the very few new ones, and modified my static posts sitting on github pages to redirect permanently over here.

We’ll see how this return to WordPress, now with markdown support and full SaaSification, affects or does not affect my personal blogging. There’s a non-zero chance I’ll be back in November 2016 to tell you all about my new LiveJournal odyssey.

Cutting in the Middleman, with Comments

I blogged somewhat recently about my interest in, and inaction around, static site blogging, where you write blog posts, use an app to turn them into plain HTML, and then drop them somewhere on the web, with no shadow of potentially/eventually vulnerable PHP and MySQL cranking away to deliver dynamically what needn’t be dynamic.

I hadn’t pulled the trigger on ditching WordPress yet, preferring instead to satisfy my desire for writing posts in plain AsciiDoc-formatted text by copying and pasting rendered AsciiDoc into WordPress, or using this AsciiDoc-to-WordPress script to pump posts in through the WordPress API.

Mainly, what I was missing was for one of my badass colleagues to take the crazy box of lego pieces that gets dumped at your feet when you ask Google about static site blogging, make some smart choices, and build something that I could come along and tinker with. I mentioned before that I messed around with Awestruct and found it way too raw for me. After their own more able-minded examination, my colleagues agreed, and came forward with Middleman.

Middleman It Is, But…

After poking a bit through Middleman, I felt comfy enough to adapt it for my own, extremely simple blog. I got a basic layout in place, and set about converting my WordPress posts into something workable for Middleman. My plan was to use AsciiDoc for my new writing, but most conversion scripts target the more popular Markdown. I found a script — I’ll look for the link — that did an OK job converting, but I had to delete some of the “front matter” bits that I didn’t need, and a few of my URLs rendered wrong. I’ve tried a few different tools for WordPress-to-SomethingStatic conversion, and they’ve all needed some hand-tweaking. So, low-frequency blogging FTW! I didn’t have too many posts to hand-tweak.

Now on to a REAL problem — comments. One arguably important dynamic chore tackled by WordPress is accepting and managing blog comments. Most static blogs either do away with comments altogether (easy to steel yourself for this decision after reading comments at Youtube or your local newspaper’s web site for five minutes) or go with the hosted Disqus comments service.

I’ve bounced between Disqus and WordPress comments in the past, and have been happy with Disqus. They take the load off your site, and allow your page (with the help of something like wp super cache) to be mostly static, since all the dynamism happens, in javascript, in your reader’s browser. Also, I like the way that Disqus knits siloed discussions from all over the web into something a bit more unified. You have posts and comment threads spread everywhere, and Disqus sort of pulls them together, and, through easy options for tweeting out a link to your comment, offers a way to pull in others.

Switching from WordPress comments to Disqus comments means switching from a possibly self-hosted system to a definitely not self-hosted system, and that’s a concern for many, particularly given the greater chances for privacy chicanery at sites out of your control. However, Disqus does a really good job importing from and exporting to WordPress, so even though I’ve swapped back and forth a few times, I’ve never had trouble getting my mitts back on my data, and that’s my number one concern with using a hosted service.

BUT, there’s still another important issue. WordPress is open source software, and Disqus is not. I’m big on open source software — I’m not opposed to using anything proprietary (not sure how I’d use my oven with a no-proprietary-ever stance), but I’m keen to see open source spread, so swapping something that’s already open for something that is not is a concern.

Enter Juvia, and OpenShift (natch)

As usual, I approached the oracle of Google and, in fairly short order, was directed to Juvia, “a commenting server similar to Disqus and IntenseDebate.” It sounded perfect, and not completely abandoned, although the demo site wasn’t working, and its discussion forum (served from the terrible terrible why-does-anyone-use-this Google Groups) appears to have been wiped from the earth. Why not more activity around what appears to be a much-needed project?

It may be because Juvia is a Ruby on Rails app, and while mysql/php hosting is handed down from the sky at little or no cost, ruby hosting is not. I saw one discussion of Juvia v. Disqus in my travels that boiled down to: “You could use Juvia, but hosting costs money, so use Disqus, which is free.”

But, that gentleman mustn’t have been aware of OpenShift, where you can host all sorts of different apps in the service’s free tier. I turned again to Google and found a few Juvia on OpenShift quickstarts. I used this one, although this one seems more official, though a bit less up-to-date.

I spun up Juvia in one of my OpenShift gears, spun up another just to host my static blog files, and poked at my layout HAML until I got them working together. I used Juvia’s WordPress comments importer to import my WordPress comments (which took some work), and here I am.

Now, I am going to write all this up into a how-to, but I need to do a bit more polishing — you don’t want to follow the steps I followed, you want to follow the steps I would have followed, had future me paid me a visit first.

Till then, though, this is my first new, non-stub post in the new blog. With open source, self-hosted comments.

More AsciiDokken

A sort of funny thing happened when I was posting my last post, AsciiDokken, about how I’ve been writing and (not)blogging in AsciiDoc, and piping posts up into WordPress via blogpost.py.

The dang post wouldn’t upload!

I retried it, several times, and eventually it worked. I’m wondering if the issue I experienced has something to do with the recent WordPress 3.6.1 update.

Anyhow, it occurred to me that one thing WordPress does pretty well is accept pasted HTML content and, more or less accurately, suck in the HTML formatting.

I mentioned in my post that I was using this live preview trick suggested on the AsciiDoctor web site to do my live-previewing. Well, I’ve come across a simpler way to live preview, using a Chrome extension for the purpose. There’s also a Firefox plugin.

I installed the Asciidoctor.js Live Preview plugin, right-clicked on the red “A” that then appeared on my toolbar, and clicked the check box next to “Allow access to file URLs.”

I browsed, in Chrome, to the directory where I keep my in-progress writings, and used the “Create Application Shortcuts” function under “Tools” to convert my directory listing tab into its own app launcher.

Then, I hit the command line and visited my “~/.local/share/applications/” directory in search of the launcher file Chrome created (it begins with “chrome-“).

I tacked “--allow-file-access --enable-apps” onto the end of the line beginning “Exec=”, changed the line beginning “Name=” to include a suitable name, and changed the line beginning “Icon=” to point to a suitable icon for my app:

[image: adoc-icon]
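After the surgery, the relevant lines of the launcher file looked something like this; the paths and name below are illustrative stand-ins, not values copied from my machine:

# illustrative values; Chrome generates the original Exec= line for you
Exec=/usr/bin/google-chrome --app=file:///home/me/writing/ --allow-file-access --enable-apps
Name=AsciiDoc Live Preview
Icon=/home/me/writing/adoc-icon.png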

Then, when I want to write, I pop open my text editor of choice, open up my new live preview Web app, write, and see my words appear in all their AsciiDoc-formatted glory:

[image: live-preview]

When it’s time to publish, I can either use blogpost.py (which, as I’ve mentioned in the past, handily handles image uploading, but, as I’ve mentioned today, is caught up in some amount of brokenness), or just highlight what’s in my live preview and dump it into WordPress, before manually uploading the images.

AsciiDokken


It’s been a long time since I’ve blogged. My last oVirt 3.2 howto has been holding down the front page of this site for a lot of months, and now oVirt 3.3 is just around the corner.

Top “haven’t blogged” excuses:

  • Such are blogs: they go unupdated, and blog posts often start with “it’s been a long time since I blogged” (see above).
  • I’ve been expending a bit of my blogging chi by robotically filling and tweaking the links queue that feeds @redhatopen.
  • I’ve been gripped somewhat by analysis paralysis over statically generated site blogging and writing in AsciiDoc.

It’s this third excuse I’m blogging about today.

See, I like to write in plain text — I start out writing almost everything in Tomboy or, if I’m feeling extra distracted, PyRoom. The trouble is, plain text isn’t “print” ready (and by print ready, I really mean web ready). Beyond plain text, you need some formatting, at the very least, Web links, a few code blocks, a subhead or two.

Formatting is lame and boring and adds friction to my writing experience. The way I’ve done it, for years, is to do it after the writing’s done, and to undertake a separate formatting pass for every spot I intend to publish — is this for the Web, where on the Web? Mediawiki? WordPress? Other?

I particularly hate writing in word processors; they’re all about formatting, and yet the formatting they produce often isn’t appropriate for most places you’ll end up publishing. For instance, word processors produce famously junky HTML.

Enter AsciiDoc

My colleague Dan Allen has been spreading the gospel of AsciiDoc, a lightweight plain text markup language, and of Asciidoctor, a Ruby processor for converting AsciiDoc source files and strings into HTML 5, DocBook 4.5 and other formats.

With my plain text orientation, annoyance with formatting gunk, and deep dissatisfaction with word processors, AsciiDoc appealed to me. I know that Markdown is teh hotness, sort of, but AsciiDoc’s formatting for my #1 use case, inserting hyperlinks, is simpler than Markdown’s, and AsciiDoc seems better aligned with my needs overall.

As Dan promised, I found it very easy to get rolling with AsciiDoc. You just write, the formatting is simple, and you can do all the sorts of things you need to do, the first time through.

It’s simple to add links and images, and AsciiDoc’s handling of bullets and numbering has made life easier writing posts and howtos.

In fact, after writing in AsciiDoc for the past couple months, I found the other day that I had to look up the syntax for HTML link tags. In AsciiDoc, it’s URL[text] and that’s it.
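A tiny, illustrative sample of the syntax I lean on most, with a subhead, a link, and some bullets:

=== A subhead

Read more at http://asciidoctor.org[the Asciidoctor site].

* bullets are plain asterisks
* with no closing tags anywhere

. numbered steps are just dots
. like these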

BUT, while you can just start writing in AsciiDoc, you do need some application support to get the full benefit from it. For instance, it’s helpful to get a preview of how your formatted text will render, particularly while learning the syntax. My text editing tools don’t offer this for AsciiDoc, though I’ve been pleased with the setup suggested in this Editing w/ Live Preview howto on the Asciidoctor site.

The biggest issue, however, is publishing. My blog runs on WordPress, as do a few of the blogs I contribute to for work, and WordPress doesn’t know anything about AsciiDoc. There is, however, a family of blogging engines savvy to AsciiDoc: the Static Site Generators.

Jekyll, Hyde, and Friends

I’ve been interested in the concept of “blogging like a hacker” with a static site generator for some time now. Having a speedy, scalable blog that needs no software updates and could be hosted from something like Amazon S3 sounds really cool to me.

Now, I love WordPress. I do. It’s this big old ball of open source goodness, with a community of users, plugin developers, designers, bloggers, etc. Honestly, yay!

But…

WordPress Vulnerability of the Day means a constant sense of low-level discomfort — am I up to date? What about my plugins? Are they up to date? And have the latest updates broken compatibility between plugin and core, somehow?

It’s really easy to get going with a nice, functional blog with WordPress. My blog has always been really simple — I made a child theme based on the WordPress 2012 theme simply to hide the gigantic header image, and I may have made a CSS tweak or two.

But, some of the work-related WordPress sites I’ve been involved with have required more customization, and when you’re trying to understand how all the parts of a WordPress site fit together, to customize or debug something, it feels crazy — everything’s exploded out into a billion different places.

Also, the more I use git (which I really started getting into through OpenShift), the more I want to use it, or have the option of using it, for everything. I want to use git for managing posts and such, and WordPress stores everything in a database.

And returning to the formatting issue, formatting in WordPress can be a pain. It works like a PHP-based word processor in the sky: for the most part, you WYSIWYG your way along, clicking toolbars and such, but I always need to dip into the HTML view and tweak some things, which I don’t love.

My blog isn’t very dynamic, so I don’t need a bunch of PHP code cranking away at every click. I’ve been using Disqus comments, where the dynamic bits happen in the visitor’s browser, so my site could easily be static. In fact, I use wp-super-cache on my site, for performance benefit, so my blog is sort of static anyway.

So, between my interest in AsciiDoc and static site generators, and my itching to make a move from WordPress, I figured I’d soon jump from WordPress, to… something else.

I’ve fiddled with a few different options, including Octopress, Pelican, Hyde, and Awestruct (another project I hear about through Dan Allen).

None of these have been super tough to get up and running, but as with all static site generators, there’s some assembly required, and I have plenty of other bits of software to fiddle with.

Converting my posts from WordPress to Awestruct et al is a thing, too, so I’d have to deal with (re)formatting those posts before I started using AsciiDoc for my workflow, and that means worrying about formatting and other distraction before I can start not worrying about formatting and other distraction.

So there’s the blog/writing/workflow/migration holding pattern for you.

AsciiDokken

I mentioned, though, that I’ve been using AsciiDoc for a couple months now, and this blog and others are running WordPress. I’ve been using a little tool for posting AsciiDoc-formatted texts to WordPress, which has enabled me to start blogging in AsciiDoc without blogging like a hacker. It works pretty well, and handles image uploading, which is nice.

I keep my AsciiDoc-formatted posts in a folder on my notebook, with git version control, and I push posts and post updates to WordPress through its API, using the blogpost tool.

Just the other day, I spun myself a fresh WordPress blog on OpenShift, with this spiffy new 2013 theme (where disabling the giant header image is an out-of-the-box customization option).

So, maybe I’m staying with WordPress for a while.

At least, I shouldn’t let indecision over markup and site generation block the flow of public navel-gazing about indecision over markup and site generation. To that end, I’ve started looking into directing more love toward that AsciiDoc-to-WordPress uploader.

Up and Running with oVirt, 3.2 Edition

I’ve written an updated version of this howto for oVirt 3.3 at the Red Hat Community blog.

The latest version of the open source virtualization platform, oVirt, has arrived, which means it’s time for the third edition of my “running oVirt on a single machine” blog post. I’m delighted to report that this ought to be the shortest (and least-updated, I hope) post of the three so far.

When I wrote my first “Up and Running” post last year, getting oVirt running on a single machine was more of a hack than a supported configuration. Wrangling large groups of virtualization hosts is oVirt’s reason for being. oVirt is designed to run with its manager component, its virtualization hosts, and its shared storage all running on separate pieces of hardware. That’s how you’d want it set up for production, but a project that requires a bunch of hardware just for kicking the tires is going to find its tires un-kicked.

Fortunately, this changed in August’s oVirt 3.1 release, which shipped with an All-in-One installer plugin, but, as a glance at the volume of strikethrough text and UPDATE notices in my post for that release will attest, there were more than a few bumps in the 3.1 road.

In oVirt 3.2, the process has gotten much smoother, and should be as simple as setting up the oVirt repo, installing the right package, and running the install script. Also, there’s now a LiveCD image available that you can burn onto a USB stick, boot a suitable system from, and give oVirt a try without installing anything. The downsides of the LiveCD are its size (2.1GB) and the fact that it doesn’t persist. But, that second bit is one of its virtues, as well. The All in One setup I describe below is one that you can keep around for a while, if that’s what you’re after.

Without further ado, here’s how to get up and running with oVirt on a single machine:

HARDWARE REQUIREMENTS: You need a machine with x86-64 processors with hardware virtualization extensions. This bit is non-negotiable–the KVM hypervisor won’t work without them. Your machine should have at least 4GB of RAM. Virtualization is a RAM-hungry affair, so the more memory, the better. Keep in mind that any VMs you run will need RAM of their own.

It’s possible to run oVirt in a virtual machine–I’ve taken to testing oVirt on oVirt itself most of the time–but your virtualization host has to be set up for nested KVM for this to work. I’ve written a bit about running oVirt in a VM here.

SOFTWARE REQUIREMENTS: oVirt is developed on Fedora, and any given oVirt release tends to track the most recent Fedora release. For oVirt 3.2, this means Fedora 18. I run oVirt on minimal Fedora configurations, installed from the DVD or the netboot images. With oVirt 3.1, a lot of people ran into trouble installing oVirt on the default LiveCD Fedora media, largely due to conflicts with NetworkManager. When I tested 3.2, the installer script disabled NM on its own, but I had to manually enable sshd (sudo service sshd start && sudo chkconfig sshd on).

A lot of oVirt community members run the project on CentOS or Scientific Linux using packages built by Andrey Gordeev, and official packages for these “el6” distributions are in the works from the oVirt project proper, and should be available soon for oVirt 3.2. I’ve run oVirt on CentOS in the past, but right now I’m using Fedora 18 for all of my oVirt machines, in order to get access to new features like the nested KVM I mentioned earlier.

NETWORK REQUIREMENTS: Your test machine must have a host name that resolves properly on your network, whether you’re setting that up in a local dns server, or in the /etc/hosts file of any machine you expect to access your test machine from. If you take the hosts file editing route, the installer script will complain about the hostname–you can safely forge ahead.
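For the hosts file option, a single line like this one (hypothetical address and name) on each machine that needs to reach your test box will do:

192.168.1.100   ovirt32.example.com ovirt32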

CONFIGURE THE REPO: Somewhat confusingly, oVirt 3.1 is already in the Fedora 18 repositories, but due to some packaging issues I’m not fully up-to-speed on, that version of oVirt is missing its web admin console. In any case, we’re installing the latest, 3.2 version of oVirt, and for that we must configure our Fedora 18 system to use the oVirt project’s yum repository.

sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm

SILENCING SELINUX (OPTIONAL): I typically run my systems with SELinux in enforcing mode, but it’s a common source of oVirt issues. Right now, there’s definitely one (now fixed), and maybe two SELinux-related bugs affecting oVirt 3.2. So…

sudo setenforce 0

To make this setting persist across reboots, edit the ‘SELINUX=’ line in /etc/selinux/config to read ‘SELINUX=permissive’.
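If you’d rather make that edit from the command line, a one-liner along these lines should do it, assuming the stock ‘SELINUX=enforcing’ setting in that file:

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config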

INSTALL THE ALL IN ONE PLUGIN: The package below will pull in everything we need to run oVirt Engine (the management server) as well as turn this management server into a virtualization host.

sudo yum install ovirt-engine-setup-plugin-allinone

RUN THE SETUP SCRIPT: Run the script below and answer all the questions. In almost every case, you can stick to the default answers. Since we’re doing an All in One install, I’ve tacked the relevant argument onto the command below. You can run “engine-setup -h” to check out all available arguments.

One of the questions the installer will ask deals with whether and which system firewall to configure. Fedora 18 now defaults to Firewalld rather than the more familiar iptables. In the handful of tests I’ve done with the 3.2 release code, I’ve had both success and failure configuring Firewalld through the installer. On one machine, throwing SELinux into permissive mode allowed the Firewalld config process to complete, and on another, that workaround didn’t work.

If you choose the iptables route, make sure to disable Firewalld and enable iptables before you run the install script (sudo service firewalld stop && sudo chkconfig firewalld off && sudo service iptables start && sudo chkconfig iptables on).

sudo engine-setup --config-allinone=yes

TO THE ADMIN CONSOLE: When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, click the “Storage” tab and highlight the iso domain you created during the setup script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to activate it.

Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically, which looks like: “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
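For the mount-it-yourself route, the commands look something like the below; the hostname and export path are illustrative, so substitute the values from your own engine-setup answers:

sudo mount -t nfs ovirt32.example.com:/var/lib/exports/iso /mnt

sudo cp NAME_OF_YOUR_ISO.iso /mnt/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/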

Once you’re up and running, you can begin installing VMs. I made the “creating VMs” screencast below for oVirt 3.1, but the process hasn’t changed significantly for 3.2:

[youtube: http://www.youtube.com/watch?v=C4gayV6dYK4]

Gluster Rocks the Vote

Rock the Vote needed a way to manage the fast growth of the data handled by its Web-based voter registration application. The organization turned to GlusterFS replicated volumes to allow for filesystem size upgrades on its virtualized hosting infrastructure without incurring downtime.

Over its twenty-one year history, Rock the Vote has registered more than five million young people to vote, and has become a trusted source of information about registering to vote and casting a ballot.


Since 2009, Rock the Vote has run a Web-based voter registration application, powered by an open source rails application stack called Rocky.

I talked to Lance Albertson, Associate Director of Operations at the Oregon State University Open Source Lab and primary technical systems operation lead for the service, about how they’re using Gluster to provide for the service’s growing storage requirements.

“During a non-election season,” Albertson explained, “the filesystem use and growth is minimal, however during a presidential election season, the growth of the filesystem can be exponential. So with Gluster we’re trying to solve the sudden growth problem we have.”

Rock the Vote’s voter registration application is served from a virtual machine instance running Gentoo Hardened, with a pair of physical servers running CentOS 6 with Gluster 3.3.0 to host voter registration form data. The storage nodes host a replicated GlusterFS volume, which the registration front end accesses via Gluster’s NFS mount support.
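Creating and consuming a volume like that is brief work in Gluster 3.3. A sketch, with hypothetical host, brick, and volume names rather than Rock the Vote’s actual layout:

gluster volume create voterforms replica 2 store1:/bricks/voterforms store2:/bricks/voterforms

gluster volume start voterforms

# on the front end, via Gluster's built-in NFS support (which speaks NFSv3)
mount -t nfs -o vers=3 store1:/voterforms /srv/voterforms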

The Gluster-backed iteration of the voter registration application started out in September with a 100GB volume, which the team stepped up incrementally to 350GB as usage grew in the period leading up to the election.

Before implementing Gluster for their storage needs, Rock the Vote’s application hosting team was using local storage within their virtual machines to store the voter form data, which made it difficult to expand storage without bringing their VMs down to do so.

The hosting team shifted storage to an HA NFS cluster, but found the implementation fragile and prone to breakage when adding/removing NFS volumes and shares.

“Gluster allowed us more flexibility in how we manage that storage without downtime,” Albertson continued, “Gluster made it easy to add a volume and grow it as we needed.”

Looking ahead to future election seasons, and forthcoming GlusterFS releases, Albertson told me that the Gluster attribute he’s most interested in is limited-downtime upgrades between version 3.3.0 and future Gluster releases. Albertson is also looking forward to the addition of multi-master support in Gluster’s geo-replication capability, an enhancement planned for the upcoming 3.4 version.

oVirt on oVirt: Nested KVM Fu

I’m a big fan of virtualization — the ability to take a server and slice it up into a bunch of virtual machines makes trying out and writing about software much, much easier than it’d be in a one-instance-per-server world.

Things get tricky, however, when the software you want to try out is itself intended for hosting virtual machines. These days, all of the virtualization work I do centers around the KVM hypervisor, which relies on hardware extensions to do its thing.

Over the past year or so, I’ve dabbled in Nested Virtualization with KVM, in which the KVM hypervisor passes its hardware-assisted prowess on to guest instances to enable those guests to host VMs of their own. When I first dabbled in this, ten or so months ago, my nested virtualization only sort-of worked — my VMs proved unstable, and I shelved further investigation for a while.

Recently, though, nested KVM has been working pretty well for me, both on my notebook and on some of the much larger machines in our lab. In fact, with the help of a new feature slated for oVirt 3.2, I’ve taken to testing whole oVirt installs, complete with live migration between hosts, all within a single oVirt host machine. Pretty sweet, since oVirt forms both my main testing platform and one of the primary projects I look to test.

All my tests with nested KVM have been with Intel hardware, because that’s what I have in my labs, but it’s my understanding that nested KVM works with AMD processors as well, and that the feature is actually more mature on that gear.

To join in on the nested fun, you must first check to see if nested KVM support is enabled on your machine by running:

cat /sys/module/kvm_intel/parameters/nested

If the answer is “N,” you can enable it by running:

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf

After adding that kvm-intel.conf file, reboot your machine, after which “cat /sys/module/kvm_intel/parameters/nested” should return “Y.”
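If the machine is otherwise idle, reloading the module should pick up the new option without a reboot (assuming no VMs are using KVM at the time):

sudo modprobe -r kvm_intel

sudo modprobe kvm_intel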

I’ve used nested KVM with virt-manager, the libvirt front-end that ships with most Linux distributions, including my own distro of choice, Fedora. With virt-manager, I configure the VM I want to use as a hypervisor-within-a-hypervisor by clicking on the “Processor” item in the VM details view, and clicking the “Copy host configuration” button to ensure that my guest instance boots with the same set of CPU features offered by my host processor. For good measure, I expand the “CPU Features” menu list and ensure that the feature “vmx” is set to “require.”

[image: virt-manager-nested]

Not too taxing, but it turns out that with oVirt, enabling nested virtualization in guests is even easier, thanks to VDSM hooks. VDSM hooks are scripts executed on the host when key events occur. The version of VDSM that will accompany oVirt 3.2 includes a nestedvt hook that does exactly what I described above — it runs a check for nested KVM support, and if that support is found, it adds the require vmx element to your VM’s definition.

I’ve tested this both with oVirt 3.2 alpha, and with the current oVirt 3.1 version. In the latter case, I simply installed the vdsm-hook-nestedvt package from oVirt’s nightly repository, and it worked fine with the current stable version of vdsm.
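With the nightly repository enabled on the host, pulling the hook in is a one-liner:

sudo yum install vdsm-hook-nestedvt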

[image: ovirtonovirt]

I mentioned above that I’ve been able to test oVirt on oVirt in this way, and performance hasn’t been remarkably bad, but I wanted to get a better handle on the performance hit of nesting. I settled, unscientifically, on running mock builds of the ovirt-engine source package, a real life task that involves CPU and I/O work.

I ran the build operation four times on a VM running under oVirt, and four times on a VM running under an oVirt instance which was itself running under oVirt. I outfitted both the nested and the non-nested VM with 4GB of RAM and two virtual cores. I was using the same physical machine for both VMs, but I ran the tests one at a time, rather than in parallel.

The four builds on the “real” VM averaged out to 14 minutes, 15 seconds, and the build quartet on the nested VM averaged 28 minutes, 18 seconds. So, I recorded a definite performance hit with the nested virtualization, but not a big enough hit to dissuade me from further nested KVM exploration.

Speaking of further exploration, I’m very much looking forward to attending the next oVirt Workshop later this month, which will take place at NetApp’s Sunnyvale campus from Jan 22-24.

If you’re in the Bay Area and you’d like to learn more about oVirt, I’d love to see you there. The event is free of charge (much like oVirt itself) and all the agenda and registration details are available on the oVirt project site at http://www.ovirt.org/NetApp_Workshop_January_2013. Registration closes on Jan 15th, so get on it!

Gluster User Story: Fedora Hosted

The Fedora Project’s infrastructure team needed a way to ensure the reliability of its Fedora Hosted service, while making the most of their available hardware resources. The team tapped GlusterFS replicated volumes to convert what had been a two-node, active/passive, eventually consistent hosting configuration into a well-synchronized setup in which both nodes could take on user load.

Hosting Fedora Hosted

The Fedora Infrastructure team develops, deploys, and maintains various services for the Fedora Project. One of these services, Fedora Hosted, provides open source projects with a place to host their code and collaborate online.

I talked to the team’s Infrastructure Lead, Kevin Fenzi, about how they’re using Gluster to ensure availability of these services while making the most of their server resources.

Fedora Hosted is served from a pair of virtual instances hosted at serverbeach.com, which donates these resources to the project. The instances run Red Hat Enterprise Linux 6 and maintain a replicated GlusterFS 3.3.0 volume to keep the 50GB of project data stored at Fedora Hosted in sync. The nodes use Gluster’s NFS mount support, which the team found to deliver better performance with the many small files that Fedora Hosted serves.

“Both servers are in DNS, so it’s round robin which one you hit for any given connection. Since the data on the backend is replicated, both of them are up to date at any given time,” Kevin explained. “This way, not only can we handle more load cpu-wise, but if we wish to reboot one node for an update or the like, we simply adjust DNS and there is no outage seen by our projects.”

The Road to Gluster

An earlier incarnation of Fedora Hosted was also run on a pair of virtual instances, one actively serving users and the other a standby kept in sync with an hourly rsync job. If the primary node failed, the standby instance could be brought up in short order, but the hourly sync window meant that the service could suffer an hour or two of data loss.

The Fedora Infrastructure team managed to close this sync window by shifting to a new configuration based on the DRBD project. While this solution dealt with the problem of data loss following an outage, the configuration left one node mostly idle.

The team’s first foray into a GlusterFS-backed configuration for Fedora Hosted turned up a couple of issues with the then-current GlusterFS version 3.2, which the Gluster project addressed in their 3.3 release.

“The Gluster folks were very responsive to our issues and were working on the patch very soon after we requested it,” Kevin explained. “Additionally, 3.3 performance seemed to be much better than 3.2 for our use cases.”

Looking ahead, Kevin and the other members of the Infrastructure team have their eyes set on continued performance enhancements. While the Gluster 3.3-backed Fedora Hosted service has handled its community collaboration load quite well, Kevin pointed out that “we could always want better performance.”