CentOS SIG and Variant Activity

The CentOS Project is increasing its efforts to empower contributors to produce and collaborate on new CentOS Variants, in which groups of contributors combine the CentOS core with newer or otherwise custom components to suit that group’s needs.

Xen4CentOS, which combines CentOS 6 with components from the Xen project and the “longterm maintenance” release of the Linux kernel, is an example of an existing variant project. For more on variants, refer to the CentOS Project site and the CentOS and Variants section of our FAQ.

The contributor groups behind variants are called Special Interest Groups. For more on SIGs, refer to the CentOS Project wiki.

via CentOS SIG and Variant Activity — Red Hat Open Source Community.

Cutting in the Middleman, with Comments

I blogged somewhat recently about my interest in, and inaction around, static site blogging, where you write blog posts, use an app to turn them into plain HTML, and then drop them somewhere on the web, with no shadow of potentially/eventually vulnerable PHP and MySQL cranking away to deliver dynamically what needn’t be dynamic.

I hadn’t yet pulled the trigger on ditching WordPress, preferring instead to satisfy my desire for writing posts in plain AsciiDoc-formatted text by copying and pasting rendered AsciiDoc into WordPress, or by using this AsciiDoc-to-WordPress script to pump posts in through the WordPress API.

Mainly, what I was missing was for one of my badass colleagues to take the crazy box of Lego pieces that gets dumped at your feet when you ask Google about static site blogging, make some smart choices, and build something that I could come along and tinker with. I mentioned before that I messed around with Awestruct and found it way too raw for me. After their own more able-minded examination, my colleagues agreed, and came forward with Middleman.

Middleman It Is, But…

After poking a bit through Middleman, I felt comfy enough to adapt it for my own, extremely simple blog. I got a basic layout in place, and set about converting my WordPress posts into something workable for Middleman. My plan was to use AsciiDoc for my new writing, but most conversion scripts target the more popular Markdown. I found a script — I’ll look for the link — that did an OK job converting, but I had to delete some of the “front matter” bits that I didn’t need, and a few of my URLs rendered wrong. I’ve tried a few different tools for WordPress-to-SomethingStatic conversion, and they’ve all needed some hand-tweaking. So, low-frequency blogging FTW! I didn’t have too many posts to hand-tweak.

Now on to a REAL problem: comments. One arguably important dynamic chore tackled by WordPress is accepting and managing blog comments. Most static blogs either do away with comments altogether (a decision it’s easy to steel yourself for after reading the comments on YouTube or your local newspaper’s web site for five minutes) or go with the hosted Disqus comments service.

I’ve bounced between Disqus and WordPress comments in the past, and have been happy with Disqus. They take the load off your site, and allow your page (with the help of something like WP Super Cache) to be mostly static, since all the dynamism happens, in JavaScript, in your reader’s browser. Also, I like the way that Disqus knits siloed discussions from all over the web into something a bit more unified. You have posts and comment threads spread everywhere, and Disqus sort of pulls them together and, through easy options for tweeting out a link to your comment, offers a way to pull in others.

Switching from WordPress comments to Disqus comments means switching from a possibly self-hosted system to a definitely not self-hosted system, and that’s a concern for many, particularly given the greater chances for privacy chicanery at sites out of your control. However, Disqus does a really good job importing from and exporting to WordPress, so even though I’ve swapped back and forth a few times, I’ve never had trouble getting my mitts back on my data, and that’s my number one concern with using a hosted service.

BUT, there’s still another important issue. WordPress is open source software, and Disqus is not. I’m big on open source software. I’m not opposed to using anything proprietary (I’m not sure how I’d use my oven with a no-proprietary-ever stance), but I’m keen to see open source spread, so swapping something that’s already open for something that is not is a concern.

Enter Juvia, and OpenShift (natch)

As usual, I approached the oracle of Google and, in fairly short order, was directed to Juvia, “a commenting server similar to Disqus and IntenseDebate.” It sounded perfect, and not completely abandoned, although the demo site wasn’t working, and its discussion forum (served from the terrible terrible why-does-anyone-use-this Google Groups) appears to have been wiped from the earth. Why not more activity around what appears to be a much-needed project?

It may be because Juvia is a Ruby on Rails app, and while MySQL/PHP hosting is handed down from the sky at little or no cost, Ruby hosting is not. I saw one discussion of Juvia vs. Disqus in my travels that boiled down to: “You could use Juvia, but hosting costs money, so use Disqus, which is free.”

But, that gentleman mustn’t have been aware of OpenShift, where you can host all sorts of different apps in the service’s free tier. I turned again to Google and found a few Juvia on OpenShift quickstarts. I used this one, although this one seems more official, though a bit less up-to-date.

I spun up Juvia in one of my OpenShift gears, spun up another just to host my static blog files, and poked at my layout HAML until I got them working together. I used Juvia’s WordPress comments importer to import my WordPress comments (which took some work), and here I am.

Now, I am going to write all this up into a how-to, but I need to do a bit more polishing first: you don’t want to follow the steps I followed, you want to follow the steps I would have followed, had future me paid me a visit first.

Till then, though, this is my first new, non-stub post in the new blog. With open source, self-hosted comments.

More AsciiDokken

A sort of funny thing happened when I was posting my last post, AsciiDokken, about how I’ve been writing and (not)blogging in AsciiDoc, and piping posts up into WordPress via blogpost.py.

The dang post wouldn’t upload!

I retried it, several times, and eventually it worked. I’m wondering if the issue I experienced has something to do with the recent WordPress 3.6.1 update.

Anyhow, it occurred to me that one thing WordPress does pretty well is accept pasted HTML content and, more or less accurately, suck in the HTML formatting.

I mentioned in my post that I was using this live preview trick suggested on the AsciiDoctor web site to do my live-previewing. Well, I’ve come across a simpler way to live preview, using a Chrome extension for the purpose. There’s also a Firefox plugin.

I installed the Asciidoctor.js Live Preview plugin, right-clicked on the red “A” that then appeared on my toolbar, and clicked the check box next to “Allow access to file URLs.”

I browsed, in Chrome, to the directory where I keep my in-progress writings, and used the “Create Application Shortcuts” function under “Tools” to convert my directory listing tab into its own app launcher.

Then, I hit the command line and visited my “~/.local/share/applications/” directory in search of the launcher file Chrome created (it begins with “chrome-”).

I tacked “--allow-file-access --enable-apps” onto the end of the line beginning “Exec=”, changed the line beginning “Name=” to include a suitable name, and changed the line beginning “Icon=” to point to a suitable icon for my app:

[image: adoc-icon]
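For reference, the same edit can be scripted with sed; the launcher contents below are invented stand-ins for what Chrome actually writes, so treat this as a dry run, not the real file:

```shell
# Dry run of the launcher edit against a made-up sample; the real file
# lives in ~/.local/share/applications and its name starts with "chrome-".
LAUNCHER=$(mktemp)
cat > "$LAUNCHER" <<'EOF'
[Desktop Entry]
Exec=/opt/google/chrome/google-chrome --app=file:///home/me/writing/
Name=file:///home/me/writing/
Icon=chrome-generated-icon
EOF

# Append the flags to the Exec= line ("&" is the matched line), then
# give the entry a readable name and a custom icon path.
sed -i \
    -e 's|^Exec=.*|& --allow-file-access --enable-apps|' \
    -e 's|^Name=.*|Name=AsciiDoc Live Preview|' \
    -e 's|^Icon=.*|Icon=/home/me/icons/adoc-icon.png|' \
    "$LAUNCHER"

cat "$LAUNCHER"
```

Swap the mktemp business for the real path under ~/.local/share/applications and the edit sticks.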

Then, when I want to write, I pop open my text editor of choice, open up my new live preview Web app, write, and see my words appear in all their AsciiDoc-formatted glory:

[image: live-preview]

When it’s time to publish, I can either use blogpost.py (which, as I’ve mentioned in the past, handily handles image uploading, but, as I’ve mentioned today, is caught up in some amount of brokenness), or just highlight what’s in my live preview and dump it into WordPress, before manually uploading the images.

oVirt 3.3, Glusterized

The All-in-One install I detailed in Up and Running with oVirt 3.3 includes everything you need to run virtual machines and get a feel for what oVirt can do, but the downside of the local storage domain type is that it limits you to that single All in One (AIO) node.

You can shift your AIO install to a shared storage configuration to invite additional nodes to the party, and oVirt has supported the usual shared storage suspects such as NFS and iSCSI since the beginning.

New in oVirt 3.3, however, is a storage domain type for GlusterFS that takes advantage of Gluster’s new libgfapi feature to boost performance compared to FUSE or NFS-based methods of accessing Gluster storage with oVirt.

With a GlusterFS data center in oVirt, you can distribute your storage resources right alongside your compute resources. As a new feature, GlusterFS domain support is rougher around the edges than more established parts of oVirt, but once you get it up and running, it’s worth the trouble.

via oVirt 3.3, Glusterized — Red Hat Open Source Community.

AsciiDokken

[image: asciidokken]

It’s been a long time since I’ve blogged. My last oVirt 3.2 howto has been holding down the front page of this site for a lot of months, and now oVirt 3.3 is just around the corner.

Top “haven’t blogged” excuses:

  • Such is the way of blogs: they go unupdated, and blog posts often start with “it’s been a long time since I blogged” (see above).
  • I’ve been expending a bit of my blogging chi by robotically filling and tweaking the links queue that feeds @redhatopen.
  • I’ve been gripped somewhat by analysis paralysis over statically generated site blogging and writing in AsciiDoc.

It’s this third excuse I’m blogging about today.

See, I like to write in plain text — I start out writing almost everything in Tomboy or, if I’m feeling extra distracted, PyRoom. The trouble is, plain text isn’t “print” ready (and by print ready, I really mean web ready). Beyond plain text, you need some formatting: at the very least, Web links, a few code blocks, a subhead or two.

Formatting is lame and boring and adds friction to my writing experience. The way I’ve done it, for years, is to do it after the writing’s done, and to undertake a separate formatting pass for every spot I intend to publish — is this for the Web, where on the Web? Mediawiki? WordPress? Other?

I particularly hate writing in word processors; they’re all about formatting, and yet the formatting they produce often isn’t appropriate for most places you’ll end up publishing. For instance, word processors produce famously junky HTML.

Enter AsciiDoc

My colleague Dan Allen has been spreading the gospel of AsciiDoc, a lightweight plain text markup language, and of Asciidoctor, a Ruby processor for converting AsciiDoc source files and strings into HTML 5, DocBook 4.5 and other formats.

With my plain text orientation, annoyance with formatting gunk, and deep dissatisfaction with word processors, AsciiDoc appealed to me. I know that Markdown is teh hotness, sort of, but AsciiDoc’s formatting for my #1 use case, inserting hyperlinks, is simpler than Markdown’s, and AsciiDoc seems better aligned with my needs overall.

As Dan promised, I found it very easy to get rolling with AsciiDoc. You just write, the formatting is simple, and you can do all the sorts of things you need to do, the first time through.

It’s simple to add links and images, and AsciiDoc’s handling of bullets and numbering has made life easier writing posts and howtos.

In fact, after writing in AsciiDoc for the past couple months, I found the other day that I had to look up the syntax for HTML link tags. In AsciiDoc, it’s URL[text] and that’s it.
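For comparison’s sake, here’s that link syntax side by side with its Markdown and HTML equivalents; this is just illustrative text, printed as-is:

```shell
# The same hyperlink expressed three ways (illustrative only).
cat <<'EOF'
AsciiDoc:  http://example.com/[my link text]
Markdown:  [my link text](http://example.com/)
HTML:      <a href="http://example.com/">my link text</a>
EOF
```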

BUT, while you can just start writing in AsciiDoc, you do need some application support to get the full benefit from it. For instance, it’s helpful to get a preview of how your formatted text will render, particularly while learning the syntax. My text editing tools don’t offer this for AsciiDoc, though I’ve been pleased with the setup suggested in this Editing w/ Live Preview howto on the Asciidoctor site.

The biggest issue, however, is publishing. My blog runs on WordPress, as do a few of the blogs I contribute to for work, and WordPress doesn’t know anything about AsciiDoc. There is, however, a family of blogging engines savvy to AsciiDoc: the Static Site Generators.

Jekyll, Hyde, and Friends

I’ve been interested in the concept of “blogging like a hacker” with a static site generator for some time now. Having a speedy, scalable blog that needs no software updates and could be hosted from something like Amazon S3 sounds really cool to me.

Now, I love WordPress. I do. It’s this big old ball of open source goodness, with a community of users, plugin developers, designers, bloggers, etc. Honestly, yay!

But…

WordPress Vulnerability of the Day means a constant sense of low-level discomfort — am I up to date? What about my plugins? Are they up to date? And have the latest updates broken compatibility between plugin and core, somehow?

It’s really easy to get going with a nice, functional blog with WordPress. My blog has always been really simple — I made a child theme based on the WordPress Twenty Twelve theme simply to hide the gigantic header image, and I may have made a CSS tweak or two.

But, some of the work-related WordPress sites I’ve been involved with have required more customization, and when you’re trying to understand how all the parts of a WordPress site fit together, to customize or debug something, it feels crazy — everything’s exploded out into a billion different places.

Also, the more I use git (which I really started getting into through OpenShift), the more I want to use it, or at least have the option of using it, for everything. I want to use git for managing posts and such, and WordPress stores everything in a database.

And returning to the formatting issue, formatting in WordPress can be a pain. It works like a PHP-based word processor in the sky: for the most part, you WYSIWYG your way along, clicking toolbars and such, but I always need to dip into the HTML view and tweak some things, which I don’t love.

My blog isn’t very dynamic, so I don’t need a bunch of PHP code cranking away at every click. I’ve been using Disqus comments, where the dynamic bits happen in the visitor’s browser, so my site could easily be static. In fact, I use wp-super-cache on my site, for performance benefit, so my blog is sort of static anyway.

So, between my interest in AsciiDoc and static site generators, and my itching to make a move from WordPress, I figured I’d soon jump from WordPress, to… something else.

I’ve fiddled with a few different options, including Octopress, Pelican, Hyde, and Awestruct (another project I hear about through Dan Allen).

None of these have been super tough to get up and running, but as with all static site generators, there’s some assembly required, and I have plenty of other bits of software to fiddle with.

Converting my posts from WordPress to Awestruct et al is a thing, too, so I’d have to deal with (re)formatting those posts before I started using AsciiDoc for my workflow, and that means worrying about formatting and other distraction before I can start not worrying about formatting and other distraction.

So there’s the blog/writing/workflow/migration holding pattern for you.

AsciiDokken

I mentioned, though, that I’ve been using AsciiDoc for a couple months now, and this blog and others are running WordPress. I’ve been using a little tool for posting AsciiDoc-formatted texts to WordPress, which has enabled me to start blogging in AsciiDoc without blogging like a hacker. It works pretty well, and handles image uploading, which is nice.

I keep my AsciiDoc-formatted posts in a folder on my notebook, with git version control, and I push posts and post updates to WordPress through its API, using the blogpost tool.
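Sketched out as commands, that workflow looks something like this; the directory, file name, and commit message are invented, and the final blogpost.py call is left commented out since it needs a configured WordPress endpoint:

```shell
# Sketch of the posts-in-git workflow; names here are made up.
POSTS_DIR=$(mktemp -d)
cd "$POSTS_DIR"
git init -q

# Write a post in AsciiDoc...
cat > new-post.adoc <<'EOF'
= A New Post

Some AsciiDoc-formatted text with a http://example.com/[link].
EOF

# ...commit it...
git add new-post.adoc
git -c user.name=me -c user.email=me@example.com commit -qm 'Draft: a new post'

# ...and push it to WordPress through the API (not run here):
# blogpost.py post new-post.adoc
```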

Just the other day, I spun myself a fresh WordPress blog on OpenShift, with the spiffy new Twenty Thirteen theme (where disabling the giant header image is an out-of-the-box customization option).

So, maybe I’m staying with WordPress for a while.

At least, I shouldn’t let indecision over markup and site generation block the flow of public navel-gazing about indecision over markup and site generation. To that end, I’ve started looking into directing more love toward that AsciiDoc-to-WordPress uploader.

Up and Running with oVirt 3.3

The oVirt Project is now putting the finishing touches on version 3.3 of its KVM-based virtualization management platform. The release will be feature-packed, including expanded support for Gluster storage, new integration points for OpenStack’s Neutron networking and Glance image services, and a raft of new extensibility and usability upgrades.

oVirt 3.3 also sports an overhauled All-in-One (AIO) setup plugin, which makes it easy to get up and running with oVirt on a single machine to see what oVirt can do for you.

via Up and Running with oVirt 3.3 — Red Hat Open Source Community.

Testing oVirt 3.3 with Nested KVM

We’re nearing the release of oVirt 3.3, and I’ve been testing out all the new features — and using oVirt to do it, courtesy of nested KVM.

KVM takes advantage of virtualization-enabling hardware extensions that most recent processors provide. Nested KVM enables KVM hypervisors to make these extensions available to their guest instances.

Nested KVM typically takes a bit of configuration to get up and running: on the host side, you need to make sure that nested virtualization is enabled, and on the guest side, you need to make sure that your guest VM is emulating a virt-capable processor.
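If you want to check the host side by hand, something like this works on an Intel box (AMD hosts use the kvm_amd module instead); this is a generic KVM check, not anything oVirt-specific:

```shell
# Quick host-side check, assuming an Intel CPU. The module parameter
# file reads "Y" (or "1" on some kernels) when nesting is enabled.
if grep -qE '^(Y|1)$' /sys/module/kvm_intel/parameters/nested 2>/dev/null; then
    echo "nested KVM enabled"
else
    echo "nested KVM disabled, or kvm_intel not loaded"
fi
```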

With oVirt, you can take care of both the host and guest configuration chores by installing a vdsm hook on your host machine(s):

via Testing oVirt 3.3 with Nested KVM — Red Hat Open Source Community.

OpenStack Summit 2013 report

Last week, I attended my first OpenStack Summit as part of a team from Red Hat helping to launch a new community distribution of the popular open source infrastructure as a service (IaaS) project.

I came away from the Summit impressed with the size and velocity of OpenStack. The conference drew some 3000 users, developers, and members of the vendor community, roughly twice the draw from the previous Summit. What’s more, several of OpenStack’s component sub-projects reported a doubling in the number of active developers in their ranks over the past six months.

What impressed me more than the growth of the project, however, was the way that OpenStack embodies one of the things I love most about open source, namely, its knack for helping people to peel back or eliminate barriers to innovation. The more freedom we afford ourselves to experiment with and improve the systems we care about, the more amazing results we can achieve.

via OpenStack Summit 2013 report | Opensource.com.

Up and Running with oVirt, 3.2 Edition

I’ve written an updated version of this howto for oVirt 3.3 at the Red Hat Community blog.

The latest version of the open source virtualization platform, oVirt, has arrived, which means it’s time for the third edition of my “running oVirt on a single machine” blog post. I’m delighted to report that this ought to be the shortest (and least-updated, I hope) post of the three so far.

When I wrote my first “Up and Running” post last year, getting oVirt running on a single machine was more of a hack than a supported configuration. Wrangling large groups of virtualization hosts is oVirt’s reason for being. oVirt is designed to run with its manager component, its virtualization hosts, and its shared storage all running on separate pieces of hardware. That’s how you’d want it set up for production, but a project that requires a bunch of hardware just for kicking the tires is going to find its tires un-kicked.

Fortunately, this changed in August’s oVirt 3.1 release, which shipped with an All-in-One installer plugin, but, as a glance at the volume of strikethrough text and UPDATE notices in my post for that release will attest, there were more than a few bumps in the 3.1 road.

In oVirt 3.2, the process has gotten much smoother, and should be as simple as setting up the oVirt repo, installing the right package, and running the install script. Also, there’s now a LiveCD image available that you can burn onto a USB stick, boot a suitable system from, and give oVirt a try without installing anything. The downsides of the LiveCD are its size (2.1GB) and the fact that it doesn’t persist. But, that second bit is one of its virtues, as well. The All in One setup I describe below is one that you can keep around for a while, if that’s what you’re after.

Without further ado, here’s how to get up and running with oVirt on a single machine:

HARDWARE REQUIREMENTS: You need a machine with x86-64 processors with hardware virtualization extensions. This bit is non-negotiable: the KVM hypervisor won’t work without them. Your machine should have at least 4GB of RAM. Virtualization is a RAM-hungry affair, so the more memory, the better. Keep in mind that any VMs you run will need RAM of their own.
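A quick way to check for those extensions on a candidate machine:

```shell
# Intel VT-x shows up as the "vmx" flag in /proc/cpuinfo, AMD-V as "svm".
if grep -wqE 'vmx|svm' /proc/cpuinfo; then
    echo "virtualization extensions present"
else
    echo "no vmx/svm flags found: KVM won't work here"
fi
```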

It’s possible to run oVirt in a virtual machine (I’ve taken to testing oVirt on oVirt itself most of the time), but your virtualization host has to be set up for nested KVM for this to work. I’ve written a bit about running oVirt in a VM here.

SOFTWARE REQUIREMENTS: oVirt is developed on Fedora, and any given oVirt release tends to track the most recent Fedora release. For oVirt 3.2, this means Fedora 18. I run oVirt on minimal Fedora configurations, installed from the DVD or the netboot images. With oVirt 3.1, a lot of people ran into trouble installing oVirt on the default LiveCD Fedora media, largely due to conflicts with NetworkManager. When I tested 3.2, the installer script disabled NetworkManager on its own, but I had to manually enable sshd (sudo service sshd start && sudo chkconfig sshd on).

A lot of oVirt community members run the project on CentOS or Scientific Linux using packages built by Andrey Gordeev, and official packages for these “el6” distributions are in the works from the oVirt project proper, and should be available soon for oVirt 3.2. I’ve run oVirt on CentOS in the past, but right now I’m using Fedora 18 for all of my oVirt machines, in order to get access to new features like the nested KVM I mentioned earlier.

NETWORK REQUIREMENTS: Your test machine must have a host name that resolves properly on your network, whether you set that up in a local DNS server or in the /etc/hosts file of any machine you expect to access your test machine from. If you take the hosts file editing route, the installer script will complain about the hostname; you can safely forge ahead.
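If you go the hosts file route, the entry looks something like this (the address and names below are invented for illustration); it goes in /etc/hosts on the engine machine and on any machine you’ll use to reach the web admin console:

```shell
# Example /etc/hosts entry: IP, fully qualified name, short name.
cat <<'EOF'
192.168.1.50    ovirt32.example.lan    ovirt32
EOF
```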

CONFIGURE THE REPO: Somewhat confusingly, oVirt 3.1 is already in the Fedora 18 repositories, but due to some packaging issues I’m not fully up-to-speed on, that version of oVirt is missing its web admin console. In any case, we’re installing the latest, 3.2 version of oVirt, and for that we must configure our Fedora 18 system to use the oVirt project’s yum repository.

sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm

SILENCING SELINUX (OPTIONAL): I typically run my systems with SELinux in enforcing mode, but it’s a common source of oVirt issues. Right now, there’s definitely one (now fixed), and maybe two SELinux-related bugs affecting oVirt 3.2. So…

sudo setenforce 0

To make this setting persist across reboots, edit the ‘SELINUX=’ line in /etc/selinux/config to read ‘SELINUX=permissive’.
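If you’d rather script that edit than open the file in an editor, a one-line sed does it; here it’s dry-run against a sample copy, so point the same sed at /etc/selinux/config (with sudo) to do it for real:

```shell
# Safe dry run of the config edit on a sample file.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config    # prints: SELINUX=permissive
```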

INSTALL THE ALL IN ONE PLUGIN: The package below will pull in everything we need to run oVirt Engine (the management server) as well as turn this management server into a virtualization host.

sudo yum install ovirt-engine-setup-plugin-allinone

RUN THE SETUP SCRIPT: Run the script below and answer all the questions. In almost every case, you can stick to the default answers. Since we’re doing an All in One install, I’ve tacked the relevant argument onto the command below. You can run “engine-setup -h” to check out all available arguments.

One of the questions the installer will ask deals with whether and which system firewall to configure. Fedora 18 now defaults to Firewalld rather than the more familiar iptables. In the handful of tests I’ve done with the 3.2 release code, I’ve had both success and failure configuring Firewalld through the installer. On one machine, throwing SELinux into permissive mode allowed the Firewalld config process to complete, and on another, that workaround didn’t work.

If you choose the iptables route, make sure to disable Firewalld and enable iptables before you run the install script (sudo service firewalld stop && sudo chkconfig firewalld off && sudo service iptables start && sudo chkconfig iptables on).

sudo engine-setup --config-allinone=yes

TO THE ADMIN CONSOLE: When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, click the “Storage” tab and highlight the iso domain you created during the setup script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to activate it.

Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine and, from the command line, run “engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically, which looks like “/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
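To make that directory layout concrete, here’s a simulation of the iso domain’s structure; the mount point and iso name are invented, oVirt generates the big UUID itself, and the all-ones directory name is taken from the path above:

```shell
# Simulate the iso-domain layout to show where images land.
NFSMOUNT=$(mktemp -d)
IMAGES="$NFSMOUNT/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111"
mkdir -p "$IMAGES"
touch "$IMAGES/Fedora-18-x86_64-netinst.iso"   # stand-in for a real iso

# Since the all-ones directory name is predictable, you can always
# locate the drop directory under the mount point:
find "$NFSMOUNT" -type d -name '11111111-1111-1111-1111-111111111111'
```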

Once you’re up and running, you can begin installing VMs. I made the “creating VMs” screencast below for oVirt 3.1, but the process hasn’t changed significantly for 3.2:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4&HTML5=1]

Gluster Rocks the Vote

Rock the Vote needed a way to manage the fast growth of the data handled by its Web-based voter registration application. The organization turned to GlusterFS replicated volumes to allow for filesystem size upgrades on its virtualized hosting infrastructure without incurring downtime.

Over its twenty-one-year history, Rock the Vote has registered more than five million young people to vote, and has become a trusted source of information about registering to vote and casting a ballot.

[image: rtv]

Since 2009, Rock the Vote has run a Web-based voter registration application, powered by an open source Rails application stack called Rocky.

I talked to Lance Albertson, Associate Director of Operations at the Oregon State University Open Source Lab and primary technical systems operation lead for the service, about how they’re using Gluster to provide for the service’s growing storage requirements.

“During a non-election season,” Albertson explained, “the filesystem use and growth is minimal, however during a presidential election season, the growth of the filesystem can be exponential. So with Gluster we’re trying to solve the sudden growth problem we have.”

Rock the Vote’s voter registration application is served from a virtual machine instance running Gentoo Hardened, with a pair of physical servers running CentOS 6 with Gluster 3.3.0 to host voter registration form data. The storage nodes host a replicated GlusterFS volume, which the registration front end accesses via Gluster’s NFS mount support.

The Gluster-backed iteration of the voter registration application started out in September with a 100GB volume, which the team stepped up incrementally to 350GB as usage grew in the period leading up to the election.

Before implementing Gluster for their storage needs, Rock the Vote’s application hosting team was using local storage within their virtual machines to store the voter form data, which made it difficult to expand storage without bringing their VMs down to do so.

The hosting team shifted storage to an HA NFS cluster, but found the implementation fragile and prone to breakage when adding/removing NFS volumes and shares.

“Gluster allowed us more flexibility in how we manage that storage without downtime,” Albertson continued, “Gluster made it easy to add a volume and grow it as we needed.”

Looking ahead to future election seasons, and forthcoming GlusterFS releases, Albertson told me that the Gluster attribute he’s most interested in is limited-downtime upgrades between version 3.3.0 and future Gluster releases. Albertson is also looking forward to the addition of multi-master support in Gluster’s geo-replication capability, an enhancement planned for the upcoming 3.4 version.