getting stuff done with a local openshift origin instance

A few of the projects I work with use static websites based on middleman, which you can run locally to see how your edits, or those of others, will look on the live site when they’re merged.

Each of these sites defaults to port 4567 when running locally, so if I’m running more than one of them at a time, they complain that their favored port is already taken. It’s easy enough to fire up middleman on a different port, but I thought I’d try and run a couple of these in containers, using a local instance of OpenShift Origin, a Kubernetes-based container application platform.
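If all you need is to dodge the port clash, middleman will happily bind elsewhere; something like this ought to do it (a sketch, with 4568 standing in for any free port):

$ bundle exec middleman server -p 4568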

It’s pretty easy to get up and running with an OpenShift Origin instance using the command oc cluster up. The oc client is available for Linux, Windows and Mac OS. Since containers (pretty much) are Linux, you’ll need a Linux VM on Mac or Windows, but the oc client can use Docker Machine to take care of that for you. I haven’t tested that, though, because I use Linux already.

On Fedora, I followed these instructions, with the exception of installing the oc client from the Fedora repos (dnf install -y origin-clients), rather than downloading the binary from GitHub.
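That one-liner, for reference:

$ sudo dnf install -y origin-clients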

I wanted my origin install to persist across restarts, so I created a folder in my home directory to store persistent data, and started up my instance with:

$ mkdir -p /home/jbrooks/origin-data
$ sudo oc cluster up --host-data-dir=/home/jbrooks/origin-data --use-existing-config

sudo was necessary because I haven’t set up my regular user account to run docker without it — not a big deal, but some config files for logging in to my origin instance as admin ended up in my /root directory instead of my home directory, so I copied those over:

$ sudo cp -r /root/.kube ~/.
$ sudo chown -R jbrooks:jbrooks ~/.kube

I logged into the OpenShift web console using the URL and the developer:developer user name and password output by the oc cluster up command, clicked “Add to Project” and then, under the “Languages” heading, chose “Ruby,” followed by “Ruby 2.3,” because middleman is a ruby affair.

I filled in a name, pasted in the git repository URL for the ovirt middleman site, and hit “Create.”

I headed to the “Overview” page, saw that my build was running, clicked “View Log,” and saw that a familiar-looking build process was chugging along.

When the build finished, OpenShift kicked off a deployment of my image which, as I could see from the deployment log linked from the overview page, was erroring out.

After some poking around, I fixed the issue by heading to the deployments section of the web console and, after first pausing the deployment, hitting the “Edit YAML” button. I used the YAML editor to add a command right in between the image and ports sections of the configuration.

I also changed the containerPort from a default of 8080 to the middleman default of 4567. I expected this change to filter down to the service and route that were automatically created for me, but it didn’t — it wasn’t tough to edit those via the web console, however.
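For the curious, the relevant chunk of the deployment config ended up looking roughly like this. It’s a sketch pieced together from the entrypoint in the kompose manifest further down, not a complete deployment object:

# sketch only; in the real object this sits under spec -> template -> spec
spec:
  containers:
    - image: 172.30.24.24:5000/myproject/ovirt-site
      command:            # the added command, mirroring the kompose entrypoint below
        - scl
        - enable
        - rh-ruby23
        - /opt/app-root/src/run-server.sh
      ports:
        - containerPort: 4567   # changed from the default 8080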

I added GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL environment variables to my deployment, from an “Environment” tab in the deployments area of the console. As I eventually learned, git got grumpy about running as a random UID (as is OpenShift’s security-conscious custom) rather than as a “real” user with an entry in /etc/passwd, but adding those ENV variables calmed git down.
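The same thing can be done from the command line; a sketch, assuming a deployment config named ovirt-site:

# "ovirt-site" is my guess at the dc name; oc get dc will list yours
$ oc set env dc/ovirt-site GIT_COMMITTER_NAME="Jason Brooks" GIT_COMMITTER_EMAIL=jbrooks@redhat.com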

Once I had a pod up and running, I was able to view the development site in my web browser via the URL provided in the routes section of the console.

Next, I headed to my terminal to log into my running pod with OpenShift’s oc rsh command, and fetch and check out a pending pull request on the ovirt site:

$ oc rsh ovirt-site-2-4-50eao

$ git fetch origin pull/877/head:pr-ovirt-gluster-411

$ git checkout pr-ovirt-gluster-411

The middleman development server handles live reloading, so once I checked out the new branch, it refreshed, and I could see my awaiting-merge blog post.

This works, but I’ll probably hone the process some more from here. I experimented a bit with using kompose to put together a simple docker compose-formatted manifest for my app that could pull either from an openshift-built or a built-elsewhere docker container. Like this:

version: "2"

services:  
  ovirt-site:
    image: 172.30.24.24:5000/myproject/ovirt-site
    ports:
      - "4567"
    environment:
      - GIT_COMMITTER_NAME="Jason Brooks"
      - GIT_COMMITTER_EMAIL="jbrooks@redhat.com"
    entrypoint:
      - scl
      - enable
      - rh-ruby23
      - /opt/app-root/src/run-server.sh
    labels:
      kompose.service.type: NodePort
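To take that manifest somewhere else, kompose can translate it into Kubernetes objects. A sketch, assuming the manifest is saved as docker-compose.yml (both file names are arbitrary):

$ kompose convert -f docker-compose.yml -o ovirt-site-k8s.yaml
$ kubectl create -f ovirt-site-k8s.yaml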

I think that approach would then work on a regular kube cluster or, with some tweaking, probably on docker or docker swarm as well.

stuck, volume 1

So, I’m working my way through the OpenShift Origin BYO PaaS wiki page, but I’m stuck right now near the finish line.

On Saturday, I was cranking through the howto, highlighting and middle-click pasting my way to BYOP nirvana, until I hit an authentication issue when it was time to create a domain on my newly-minted PaaS.

After taking a break for a couple days, I realized that I’d simply forgotten to point my rhc client at the right host — rhc defaults to openshift.redhat.com, and if there’s an account on the Red Hat-hosted server with the user name “admin,” I can confirm that that user’s password is not “admin” as well.
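Pointing rhc at my own host meant a config file like the one described in the LiveCD howto further down: two lines, with the server value swapped for my machine’s address:

default_rhlogin=admin
libra_server=YOUR_VM_ADDRESS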

Cinch. I’d be up and running in no time. Except I hit another issue — my host complained: “Permission denied – /var/www/stickshift/broker/Gemfile.lock” and there was no such file on my host. With a bit of help from #openshift-dev, I got past the error by running “bundle install” in the broker directory and then chown-ing Gemfile.lock to apache:apache.
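In commands, that fix amounted to this (the path comes from the error message above):

cd /var/www/stickshift/broker
bundle install
chown apache:apache Gemfile.lock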

But I hit another error message: “Failed to authenticate user ‘stickshift’ on db ‘stickshift_broker_dev’.” Word in #openshift-dev is that this is a mongo issue that someone else following the BYO instructions recently encountered as well.

I’ll circle back to this, but for now I’m going to proceed using the OpenShift Origin image I installed from the LiveCD. I’m copying that image from my notebook, where I’ve been running it on KVM using virt-manager, to my newly-assembled oVirt rig (which I mean to blog about soon) using the handy virt-v2v utility. [dang, stuck there, too]

More to come…

[UPDATE 5/20/12]

Another weekend, another shot at the BYOP. I restored my host back to a “fresh install” snapshot, followed all the directions, and am stuck again right at the end. Getting the error: /usr/lib/ruby/gems/1.8/gems/uplift-bind-plugin-0.8.3/lib/uplift-bind-plugin/uplift/bind_plugin.rb:8: uninitialized constant StickShift::DnsService (NameError).

Run OpenShift Origin from LiveCD, and Make it Stick

The OpenShift Origin LiveCD will have you up and running with the code that backs Red Hat’s PaaS in a flash, but installing the LiveCD to your hard drive requires a few workaround steps.

[UPDATE: Check out wiki-fied, updated version of this howto at the OpenShift Origin community site.]

Today, Red Hat delivered on its pledge to open the source code and development process behind its Platform as a Service offering, OpenShift. To help avoid confusion between the Red Hat-hosted service and the open source project and code base, the project is named OpenShift Origin.

The OpenShift Origin source code is available at github.com/openshift, and software packages for Fedora 16 and RHEL 6 are available for download and installation, as well. At this point, though, the fastest way to get up and running with OpenShift Origin is to download this LiveCD image and fire it up on a VM or spare machine.

The LiveCD will boot you straight into a graphical desktop session, based on Fedora, from which you can create a domain and some sample applications. It couldn’t be much easier to use, but as with most LiveCDs, the environment goes away once you reboot. Also, the LiveCD sets you up to interact with any applications you install through the web browser and terminal window in the LiveCD environment. I prefer to use the browser in my regular desktop environment.

Fedora LiveCDs come with a nifty “install to hard drive” option, but in order to install the OpenShift Origin LiveCD to a drive (whether physical or virtual) a couple workaround steps are currently required:

  1. Download LiveCD and boot a VM with it. The project wiki includes instructions for setting up a VM with VirtualBox. I used KVM and virt-manager on my Fedora 17 desktop.
  2. In the terminal window that pops up once the LiveCD has finished booting, type “su” to become the superuser.
  3. The OpenShift Origin environment requires that NetworkManager be disabled, but the system installer requires NetworkManager. Enable NetworkManager by adding the line “NM_CONTROLLED=yes” (no quotes) to your network adapter’s config file. Assuming your network adapter is named eth0, this command ought to do the trick: “echo NM_CONTROLLED=yes >> /etc/sysconfig/network-scripts/ifcfg-eth0” (steps 2 through 6 are collected into a single paste-able block after this list)
  4. Restart the network service: “service network restart”
  5. Start the NetworkManager service: “service NetworkManager start”
  6. Start the installer: “liveinst”
  7. Go through text-based install steps, finally rebooting your VM, and logging in as root.
  8. I’m not positive which firewall ports must be open, so for now I’m just disabling the firewall with: “system-config-firewall-tui”
  9. Run “ifconfig” to figure out the IP address of your VM, and head out to your regular desktop environment to carry out a bit more configuration, and to start using your mini-me PaaS installation.
  10. If you’re interested enough in OpenShift to be running OpenShift Origin on one of your own machines, I’m assuming that you’ve already tried out the full-sized, Red Hat-hosted OpenShift service. If so, you’ll want to create a new config file to use with your locally-hosted OpenShift instance, otherwise the rhc client will default to talking to the OpenShift servers off in the clouds.  I created a file called express.conf containing two lines: “default_rhlogin=admin” and “libra_server=YOUR_VM_ADDRESS”.
  11. Next, I created a domain on my OpenShift Origin instance, making sure to append the path to my alternate config file: “rhc domain create -n origin --config=/home/jason/Desktop/express.conf”. When prompted for a password, use “admin”.
  12. Now, you’re ready to install an application. I’m partial to WordPress as a demo app (my blog is powered by WordPress+OpenShift) but if you’d like to try a different app, here’s a big list of easily-deployed quickstarts.
  13. Start following the instructions at the WordPress quickstart, making sure to append your alternate config file like so:
    rhc app create -a wordpress -t php-5.3 --config=/home/jason/Desktop/express.conf
  14. Your OpenShift Origin will create the new PHP app, and then time out trying to resolve its DNS name. Since we’re interacting with our PaaS from outside of the LiveCD environment, we lose the LiveCD’s automatic DNS magic, and have to make things resolve properly on our own. I made things resolve properly by adding a line to my /etc/hosts file, associating my VM address with my appname-domainname at the example.com domain to which the LiveCD defaults:
    192.168.122.147 wordpress-origin.example.com
  15. The DNS time-out message we received in step 14 includes a git clone command for pulling down your skeleton app structure from your PaaS instance. Run it.
  16. We need to give our wordpress app a mysql database to work with. There’s a command for this in the quickstart, to which we’ll again append our alternate config file:
    rhc app cartridge add -a wordpress -c mysql-5.1 --config=/home/jason/Desktop/express.conf
  17. We’re near the end. Next (and these steps are straight out of the WordPress quickstart) we cd into the app directory, hook up to the wordpress example git repo, pull the code down from there, and then push it up into our OpenShift Origin instance:
    cd wordpress
    git remote add upstream -m master git://github.com/openshift/wordpress-example.git
    git pull -s recursive -X theirs upstream master
    git push
  18. If you’re following along with me, you should now have a shiny new WordPress instance available at http://wordpress-origin.example.com, with your default admin user name and password listed in your terminal window following the “git push.”
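As promised in step 3, here are the inside-the-VM commands from steps 2 through 6 in one block, assuming your network adapter is eth0:

su
echo NM_CONTROLLED=yes >> /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart
service NetworkManager start
liveinst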

So that’s it. You have your very own PaaS instance running on a local VM that won’t go away between reboots.

The open sourcing of OpenShift is a big deal, but the best PaaS is the one you don’t have to operate yourself. That’s why, as the only current downstream project implementing OpenShift Origin, Red Hat’s OpenShift service remains the best place for people to get acquainted with the project. Here’s hoping that not too much time passes before a bunch of rival implementations hit the scene to give Red Hat a run for its money!