Next week in Los Angeles, I’ll be giving a talk at the SCALE 13x conference on oVirt’s new OptaPlanner-powered scheduling adviser.
Martin Sivák wrote a great post about the feature a couple of months ago, but didn’t cover its installation process, which still has a few rough edges.
Read on to learn how to install the optimizer, and start applying fancy probabilistic fu to your oVirt VM launches and migrations.
Atomic hosts are meant to be as slim as possible, with a bare minimum of applications and services built-in, and everything else running in containers. However, what counts as your bare minimum is sure to differ from mine, particularly when we’re running our Atomic hosts in different environments.
For instance, I’m frequently testing and using Atomic hosts on my oVirt installation, where it’s handy to have oVirt’s guest agent running to report on what’s going on inside an oVirt-hosted VM. If you aren’t using oVirt, though, there’s no reason to carry this package around in what’s supposed to be a svelte image.
RDO, the community-oriented OpenStack distribution for CentOS, Fedora, and their kin, is super-easy to get up and running, as a recently posted YouTube video illustrates:
At the end of the process, you’ll have a single-node RDO installation on which you can create VM instances and conduct various experiments. You can even associate your VMs with floating IP addresses, which connect these instances to the “Public” network that’s auto-configured by the installer.
BUT, that’s where things stop being super-easy, and start being super-confusing. The auto-configured Public network I just mentioned will only allow you to access your VMs from the single RDO machine hosting those VMs. RDO’s installer knows nothing about your specific network environment, so coming up with a more useful single-node OpenStack installation takes some more configuration.
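That extra configuration mostly amounts to telling Neutron about the physical network your host actually sits on, so that the “Public” network maps to something reachable. Here’s a sketch of the relevant settings in a packstack answer file (generated with `packstack --gen-answer-file`); the key names can vary between packstack versions, and `eth0` and `extnet` are assumptions standing in for your external NIC and network label:

```ini
# Illustrative fragment of a packstack answer file.
# "eth0" and "extnet" are placeholders for your environment.

# Map a logical external network name to an OVS bridge:
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex

# Attach the physical NIC that faces your real network to that bridge:
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0

# Allow flat (untagged) provider networks alongside tenant tunnels:
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
```

With a bridge like this in place, you can recreate the external network as a flat provider network on `extnet`, and floating IPs assigned from it become reachable from the rest of your LAN rather than just from the RDO host itself.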
Back in November, I wrote about how to try out Kubernetes, the open source system for managing containerized applications across multiple hosts, using Atomic Hosts. In that post, I walked through a deployment of the Kubernetes project’s multicontainer “Hello World” application.
This time, I thought I’d explore running a more real-world application on Kubernetes, while looking into a few alternate methods of spinning up a Kubernetes cluster.
For the application, I picked Gitlab, an open source code collaboration platform that resembles and works like the popular Github service. I run a Gitlab instance internally here at work, and I wanted to explore moving that application from its current, virtual machine-based home, toward a shiny new containerized future.
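To give a flavor of what that containerized future looks like, here’s a hypothetical sketch of a Kubernetes replication controller for Gitlab. The image name, port, and data path are assumptions for illustration, and a real Gitlab deployment also needs PostgreSQL and Redis containers, which are omitted here for brevity:

```yaml
# Sketch of a single-container Gitlab replication controller.
# Image, port, and volume path are assumptions; PostgreSQL and
# Redis (which Gitlab requires) are left out of this fragment.
apiVersion: v1
kind: ReplicationController
metadata:
  name: gitlab
spec:
  replicas: 1
  selector:
    app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ce:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: gitlab-data
          mountPath: /var/opt/gitlab
      volumes:
      - name: gitlab-data
        emptyDir: {}
```

The replication controller keeps one Gitlab pod running and restarts it if it dies, which is exactly the sort of supervision a VM-based deployment leaves you to script yourself.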
This week, Fedora 21 (a.k.a., the release that must not be named) hit FTP mirrors everywhere, with a feature list led by a new organizational structure for the distribution. Fedora is now organized into three separate flavors: Workstation, Server, and Cloud.
Fedora’s Cloud flavor is further divided into a “traditional” base image for deploying the distribution on your cloud of choice, and an Atomic Host image into which Fedora’s team of cloud wranglers has herded a whole series of futuristic operating system technologies.
Atomic hosts include Kubernetes for orchestration and management of containerized application deployments across a cluster of container hosts. If you’re interested in taking Kubernetes for a spin on an Atomic host, read on!
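Once your cluster is up, a quick way to confirm that Kubernetes is working is to submit a minimal pod definition with `kubectl create -f nginx-pod.yaml`. A sketch of such a manifest follows; the nginx image is just a convenient example, and any container image will do:

```yaml
# Minimal pod definition for smoke-testing a new cluster.
# nginx is an arbitrary example image.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

If `kubectl get pods` shows the pod reaching a running state, the scheduler, kubelet, and Docker are all talking to each other.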
Two weeks ago in this space, I wrote about how to deploy the virtualization, storage, and management elements of the new oVirt 3.5 release on a single machine. Today, we’re going to add two more machines to the mix, which will enable us to bring down one machine at a time for maintenance while allowing the rest of the deployment to continue its virtual machine hosting duties uninterrupted.
Over the past several weeks, teams within the CentOS and Fedora projects have been establishing the processes needed to produce “Atomic Host” variants of their respective distributions. If you haven’t already done so, you can check out the latest pre-release Fedora Atomic and CentOS Atomic images.
Now, an OS that partakes in the hotness of atomic system and application management while consisting of pre-built packages from a distribution you already know and trust is the whole point of Project Atomic. Still, you might want to take the reins and produce your own Atomic updates, or add new RPMs of your choosing.
If so, you can, without too much trouble, compose and host your own updated or customized atomic trees right from your Atomic host, or from any other sort of Docker host. Here’s how that works:
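At the heart of the process is a JSON “treefile” that tells `rpm-ostree` which yum repositories to draw from and which packages belong in the tree. A minimal sketch follows; the ref name, repo IDs, and package list are assumptions you’d adjust to match your distribution and taste:

```json
{
    "ref": "my-atomic/f21/x86_64/docker-host",
    "repos": ["fedora-21", "fedora-21-updates"],
    "selinux": true,
    "packages": ["kernel", "ostree", "rpm-ostree",
                 "docker-io", "kubernetes", "htop"]
}
```

Adding an RPM of your choosing (here, `htop`) is just a matter of appending it to the packages list and re-running the compose, along the lines of `rpm-ostree compose tree --repo=/srv/repo my-treefile.json`, after which clients can rebase to or upgrade from the ref you published.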
Last week, version 3.5 of oVirt, the open source virtualization management system, hit FTP mirrors sporting a slate of fixes and enhancements, including a new-look user interface, and support for using CentOS 7 machines as virtualization hosts.
As with every new oVirt release, I’m here to suggest a path to getting up and running with the project on a single server, with an option for expanding to additional machines in the future.
I’ve tried out a lot of different software applications in my time, so I’ve come to appreciate projects and products that make it easy to get up and running quickly and without the need for assembling a whole labful of equipment.
In this vein, the various components that comprise oVirt, the open source virtualization management project, can be piled onto a single piece of hardware in a form that works well enough to credibly kick the project’s tires.
Well, mostly. In order to really get your oVirt on, you need to hook up a directory service of some sort. FreeIPA is an obvious choice to provide oVirt with directory services, but due to conflicts between their package sets, oVirt’s management engine can’t be installed on the same host as FreeIPA.
Sounds like a job for Docker, right?
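By way of a preview, here’s a sketch of running a FreeIPA server container under systemd on the same host as the oVirt engine. The image name (`adelton/freeipa-server`), hostname, password, and data path are all assumptions to adjust for your environment:

```ini
# /etc/systemd/system/freeipa-container.service
# Sketch of a systemd unit supervising a FreeIPA server container.
# Image, hostname, password, and volume path are illustrative.
[Unit]
Description=FreeIPA server in a Docker container
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/docker run --name freeipa-server \
    -h ipa.example.com \
    -e PASSWORD=Secret123 \
    -v /var/lib/ipa-data:/data:Z \
    adelton/freeipa-server
ExecStop=/usr/bin/docker stop freeipa-server
ExecStopPost=/usr/bin/docker rm freeipa-server

[Install]
WantedBy=multi-user.target
```

Because the directory server lives in its own container, its package set never collides with the oVirt engine’s, which is exactly the conflict that rules out installing FreeIPA directly on the engine host.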