testing system-containerized kube and friends

A month or so ago I jotted down some notes on using ansible to set up a kubernetes cluster on atomic hosts, with kubernetes running in regular docker containers and flannel and etcd running in system containers.

I’ve been working on turning my kube containers into system containers. Three reasons jump to mind:

  • I want to run my kube containers via systemd, and system containers come with systemd unit files rolled in and deployed automatically when you run atomic install --system foo, rather than stored somewhere separate from the containers and copied into place by hand.
  • I’m using flannel and etcd system containers, in part because flannel needs to modify docker’s configs to do its thing, and etcd needs to be running for flannel to run, so there’s a bit of a chicken-and-egg situation that we avoid by running flannel and etcd outside of docker. I can save on a bit of storage by having flannel, etcd and kubernetes all share the same image in the ostree-based storage that system containers use.
  • I’ve been wanting to learn more about system containers for a little while now, and Yu Qi (Jerry) Zhang just wrote this system container howto.

I’ve been testing on a trio of fedora atomic hosts like this:

$ git clone https://github.com/jasonbrooks/contrib.git
$ cd contrib
$ git checkout system-containers
$ cd ansible
$ vi inventory/inventory

[masters]
kube-master-test.example.com

[etcd:children]
masters

[nodes]
kube-minion-test-[1:2].example.com

$ cd scripts
$ ./deploy-cluster.sh

Substitute those hostnames above with ones that match your own test machines. Alternatively, you should be able to use the Vagrantfile in the vagrant directory of that repo, though I haven’t tested that yet.

This branch involves a bunch of changes to run commands like atomic install --system --name etcd {{ container_registry }}/{{ container_namespace }}/etcd:{{ container_label }}, installing flannel, etcd and the kubernetes master and node components as desired and specified in the inventory/group_vars/all.yml file.
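The registry, namespace and tag to pull from are set with variables in that all.yml file; to end up with the images I describe below, the relevant lines would look something like this (just a sketch of a few of the available settings):

container_registry: docker.io
container_namespace: jasonbrooks
container_label: fc25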

In that same config file, I’ve temporarily turned off some of the newish encrypted flannel stuff, because I need to tweak the flannel container to make it work.

If you run the script as laid out above, you’ll get etcd, flannel and kube containers from my namespace on the docker hub: the current upstream fedora etcd and flannel containers need a couple of changes, and the upstream fedora kube containers (which I maintain) aren’t yet modified to run as system containers.

Speaking of which, another cool thing about system containers is that they can also be run as regular docker containers. To test whether my new system containers would work that way, I ran through the steps I mentioned in my previous post, using a different branch of the ansible scripts modded to run kube in regular docker containers, but with container_registry: docker.io, container_namespace: jasonbrooks and container_label: fc25 set in the all.yml conf file to grab the system container versions of everything I’ve been talking about in this post. It worked.

So, yay. I have a couple of items to work through still. There’s the flannel bit I mentioned above (I think I just need to mount another dir in the flannel system container’s config.json.template). Also, I’ve been having to restart the kubelet service on my nodes before the kubedns pod will work, so I need to track down where in the ansible that restart needs to happen to make it automatic.

getting stuff done with a local openshift origin instance

A few of the projects I work with use static websites based on middleman, which you can run locally to see how your edits, or those of others, will look on the live site when they’re merged.

Each of these sites defaults to port 4567 when running locally, so if I’m running more than one of them at a time, they complain that their favored port is already taken. It’s easy enough to fire up middleman on a different port, but I thought I’d try and run a couple of these in containers, using a local instance of OpenShift Origin, a Kubernetes-based container application platform.
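The different-port option, for the record, is just a command-line flag, something like:

$ bundle exec middleman server -p 4568

…but where’s the fun in that?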

It’s pretty easy to get up and running with an OpenShift Origin instance using the command oc cluster up. The oc client is available for Linux, Windows and Mac OS. Since containers (pretty much) are Linux, you’ll need a Linux VM on Mac or Windows, but the oc client can use docker machine to take care of that for you. I haven’t tested that, though, because I use Linux already.

On Fedora, I followed these instructions, with the exception of installing the oc client from the Fedora repos (dnf install -y origin-clients), rather than downloading the binary from GitHub.

I wanted my origin install to persist across restarts, so I created a folder in my home directory to store persistent data, and started up my instance with:

$ sudo oc cluster up --host-data-dir=/home/jbrooks/origin-data --use-existing-config

sudo was necessary because I haven’t set up my regular user account to run docker without it — not a big deal, but some config files for logging in to my origin instance as admin ended up in my /root directory instead of my home directory, so I copied those over:

$ sudo cp -r /root/.kube ~/.
$ sudo chown -R jbrooks:jbrooks ~/.kube

I logged into the OpenShift web console using the URL and the developer:developer user name and password output by the oc cluster up command, clicked “Add to Project”, and then, under the “Languages” heading, chose “Ruby,” and then “Ruby 2.3”, because middleman is a ruby affair.

I filled in a name, pasted in the git repository URL for the ovirt middleman site, and hit “Create.”

I headed to the “Overview” page, saw that my build was running, clicked “View Log,” and saw that a familiar-looking build process was chugging along.

When the build finished, OpenShift kicked off a deployment of my image which, as I could see from the deployment log linked from the overview page, was erroring out.

After some poking around, I fixed the issue by heading to the deployments section of the web console and, after first pausing the deployment, hitting the edit YAML button. I used the YAML editor to add a command right in between the image and ports sections of the configuration.
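For a rough idea, using the same scl-wrapped run-server.sh entrypoint that appears in the kompose file at the end of this post, the container section of the deployment config ends up looking something like this:

      - image: 172.30.24.24:5000/myproject/ovirt-site
        command:
          - scl
          - enable
          - rh-ruby23
          - /opt/app-root/src/run-server.sh
        ports:
          - containerPort: 8080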

I also changed the containerPort from a default of 8080 to the middleman default of 4567. I expected this change to filter down to the service and route that were automatically created for me, but it didn’t; it wasn’t tough to edit those via the web console, however.

I added GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL environment variables to my deployment, from an “Environment” tab in the deployments area of the console. As I eventually learned, git got grumpy about running as a random UID (as is OpenShift’s security-conscious custom) rather than as a “real” user with an entry in /etc/passwd, but adding those ENV variables calmed git down.

Once I had a pod up and running, I was able to view the development site in my web browser via the URL provided in the routes section of the console.

Next, I headed to my terminal to log into my running pod with OpenShift’s oc rsh command, and fetch and check out a pending pull request on the ovirt site:

$ oc rsh ovirt-site-2-4-50eao

$ git fetch origin pull/877/head:pr-ovirt-gluster-411

$ git checkout pr-ovirt-gluster-411

The middleman development server handles live reloading, so once I checked out the new branch, it refreshed, and I could see my awaiting-merge blog post:

This works, but I’ll probably hone the process some more from here. I experimented a bit with using kompose to put together a simple docker compose-formatted manifest for my app that could either pull from an openshift-built or a built-elsewhere docker container. Like this:

version: "2"

services:  
  ovirt-site:
    image: 172.30.24.24:5000/myproject/ovirt-site
    ports:
      - "4567"
    environment:
      - GIT_COMMITTER_NAME="Jason Brooks"
      - GIT_COMMITTER_EMAIL="jbrooks@redhat.com"
    entrypoint:
      - scl
      - enable
      - rh-ruby23
      - /opt/app-root/src/run-server.sh
    labels:
      kompose.service.type: NodePort

I think that approach would then work for a regular kube cluster or, with some tweaking, probably for docker or docker swarm as well.
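To actually feed a file like that to a cluster, kompose can either convert it into native manifests or target openshift directly; the flow would presumably look something like:

$ kompose convert -f docker-compose.yml                      # kubernetes manifests
$ kompose --provider openshift convert -f docker-compose.yml # openshift artifacts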

test containerized kube and system container-based flannel and etcd

$ git clone https://github.com/jasonbrooks/contrib.git
$ cd contrib
$ git checkout atomic-update
$ cd ansible
$ vi inventory/inventory

[masters]
kube-master-test.example.com

[etcd:children]
masters

[nodes]
kube-minion-test-[1:2].example.com

$ cd scripts
$ ./deploy-cluster.sh

This will fail (if you use hostnames) at: TASK [flannel : Load the flannel config file into etcd], because we need this PR in the Fedora etcd system container. You can work around it by sshing into your master, editing the resolv.conf inside of your etcd system container to match the host’s, exiting, and re-running the script.

$ ssh root@kube-master-test.example.com
# vi /var/lib/containers/atomic/etcd/rootfs/etc/resolv.conf
# exit
$ ./deploy-cluster.sh

That should work.

This involves a bunch of changes to use docker containers for kube and system containers for flannel and etcd. You can specify the registry, namespace and tag to use, as well as whether or not to containerize the master bits, the node bits, etcd and flannel, using these extra options I’ve added to inventory/group_vars/all.yml:

container_registry: candidate-registry.fedoraproject.org
container_namespace: f25
container_label: latest

containerized_master: true
containerized_node: true

etcd_spc: true
flannel_spc: true

Paying for the News

I’ve been paying extra attention to the news these days, because of the election, so I’ve been having lots of interactions with the Washington Post’s “You Have X Free Articles Left This Month” subscription nag screens, and the similar ones from the New York Times. Sometimes, I ridiculously pause before clicking on a link, wondering whether I have free articles left and whether I should click.

When I find myself clicking on links to my hometown San Francisco Chronicle, it’s usually for Giants or Warriors beat reporting, but the Chronicle doesn’t offer any free articles at all.

I agree with the idea of paying for the news, and I’ve considered subscribing to the Washington Post a few times during the election season, but I always ask myself, “why should I subscribe to some East Coast newspaper, when I want to support and consume local news?”

The trouble is, I subscribed to the digital edition of the San Francisco Chronicle for several months last year, and I didn’t like it. I found the local reporting thin, and the rest of it substandard. I liked the sports reporting well enough, but my overall takeaway was: I don’t like this product and I don’t want to pay for it anymore. So I stopped paying for it.

What I’d like is a way to subscribe to a service that’d give me access to multiple newspapers. The service could track which ones I read the most and divvy up the funds appropriately. That way, the pubs with more engaging content would end up with more of my dollars.

One problem with a service like this might be that there’s too little money to go around as it is, and each subscriber would probably end up sending less money to each publication. The key would be bringing in lots of new people, like me, who don’t already subscribe.

I just looked up the annual cost for subscriptions to these five newspapers in which I have some level of interest.

Newspaper          Annual Subscription
SF Chronicle       $99
Washington Post    $99
NY Times           $195
LA Times           $103
Mercury News       $130

These newspapers are each asking around $10 a month for their digital subscriptions. I imagine I’d be willing to pay around three times that for a meta-subscription — give or take, depending on the participating pubs.

Update: I ended up buying one-year subscriptions to the SF Chronicle and to the Washington Post.

WordPress is not delighting me, followup

Followup to my post yesterday about WordPress, me, and insufficient delight.

I mentioned that my editor fonts look crappy. I noticed that as of version 4.6, the dashboard is supposed to take “advantage of the fonts you already have, making it load faster and letting you feel more at home on whatever device you use.” It may be doing that for fonts outside of the HTML editor tab, but for that tab, it isn’t using my chosen monospace font. I mentioned that I could probably fix this with Stylish, and I just did, and life’s a lot better now.

I mentioned that my markdown text is getting converted to HTML, which I really dislike. The WordPress.com account on Twitter kindly replied to tell me that this is a feature, not a bug.

I couldn’t find a mention of this change in any of the past years’ changelogs, and the doc page for WordPress markdown disagrees, but maybe it’s just in need of an update.

I mentioned that the media manager wasn’t surfaced in the new UI, and that that’s how I’d been uploading images to include in my posts. WordPress.com pointed out on Twitter that I can use the Add Media button in the editor…

But, there’s no Add Media button in the HTML tab of the editor, which is where I edit my markdown, which, I guess, will automatically convert at some future point to HTML anyway, so…

However, I realized that I can use the Set Featured Image button in the sidebar to upload an image, copy its URL, uncheck the featured image checkbox, cancel out of the dialog, and then paste that URL into my post, and that works.

Anyway, I let my annual premium subscription auto-renew about a month and a half ago, so I’m out of the refund window and will probably stick around, although this markdown-to-HTML autoconvert misfeature is pretty distressing. Worst case scenario, I’m supporting open source software, so there’s that.

WordPress is not delighting me

I’ve switched blog engines from WordPress to Middleman (a static website engine) and back to WordPress, with various other static engine experiments in between.

I switched back to WordPress, on a premium subscription, because WordPress started supporting markdown, which I like, and because WordPress is open source software (with open source comments support), which I also like. What’s more, paying for hosting through Automattic means not having to mess with WordPress updates myself, and means helping to support a legit open source software company, and I’m into both of those, big time.

BUT. I’m not totally delighted with WordPress. It has to do, mostly, with editor issues.

First, fonts in the editor look crappy, and I can’t figure out how to change that. It’s this low-contrast bullshit that you see everywhere these days. I mean, I’m sure I could use something like Stylish to mod the way the fonts in the editor look so that I can actually use it comfortably, but… that shouldn’t be necessary, right?

Fonts are nicer-looking in the “visual” tab, but as I mentioned, I’m writing in markdown. I avoid writing in HTML unless I can’t avoid it. And even clicking into the visual tab tempts the specter of…

Markdown Reverts to HTML.

UGH! I freaking hate when this happens. Here and there, for reasons I don’t understand, and I’ve been too annoyed about the whole thing to patiently test it out, some of the posts I’ve written in markdown transform into HTML:

That’s a screenshot of the revisions feature, which I use to convince myself that I’m not crazy and that I really did write my post in markdown. I can use this feature to revert to before WordPress crappified my text into HTML, but the revisions feature is only surfaced in the “old” UI, and that old UI is full of nags about using the “new” UI instead.

The new UI is cleaner-looking, but is missing a bunch of stuff, like links to media management, which I need to upload pictures that I want to include in my posts, pictures that I can store in the storage space that I’m paying for as part of my premium subscription. Which brings to mind…

Upsell messages about the business-class service tier. The preview feature used to include little buttons for toggling between web, tablet and mobile site previews, but that screen now includes a fourth button, labelled SEO, and clicking it brings up an ad for the $25 per month business tier of WordPress service.

And of course, customization is a bit of a PITA. WordPress themes are legion, and there isn’t a nice way to filter searches for these by things like theme features supported, so there’s a ton of trial and error involved in finding a theme that’ll work for you.

My big issue has been finding themes with proper support for the “link” post type, and by proper I mean that if I include a link post, I expect the headline to link straight through to the final source, not to a stupid little stub page (I hate it when sites do this), and I want my rss entry to link straight through, too.

The theme I’m using now works this way, but I wish a couple of little things with the site were a bit different, and I know that much is doable via custom CSS, but:

And I’m disheartened when I Google for WordPress solutions only to find sad five-year-old posts asking similar questions, often unanswered, and when there are answers, they very often involve installing plugins, which you can’t do with hosted WordPress, and which are probably buggy and out of date, anyway.

BUT, whatever. If markdown worked well, and it really ought to, and if the editor got some more love (which… I don’t know, maybe it has gotten love, just not from anyone who actually uses the editor along with markdown), if these little editing bits were working better, I’d probably be pretty happy, and I imagine I’ll someday beat my CSS demons and figure out the rest.

Installing Kubernetes on CentOS Atomic Host with kubeadm

Version 1.4 of Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications, included an awesome new tool for bootstrapping clusters: kubeadm.

Using kubeadm is as simple as installing the tool on a set of servers, running kubeadm init to initialize a master for the cluster, and running kubeadm join on some nodes to join them to the cluster. With kubeadm, the kubelet is installed as a regular software package, and the rest of the components run as docker containers.
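In rough terms, that amounts to running this on the master:

# kubeadm init

…and then this on each node, using the token that kubeadm init prints when it finishes (placeholder values shown here):

# kubeadm join --token <token> <master-ip>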

The tool is available in packaged form for CentOS and for Ubuntu hosts, so I figured I’d avail myself of the package-layering capabilities of CentOS Atomic Host Continuous to install the kubeadm rpm on a few of these hosts to get up and running with an up-to-date and mostly-containerized kubernetes cluster.

However, I hit an issue trying to install one of the dependencies for kubeadm, kubernetes-cni:

# rpm-ostree pkg-add kubelet kubeadm kubectl kubernetes-cni
Checking out tree 060b08b... done

Downloading metadata: [=================================================] 100%
Resolving dependencies... done
Will download: 5 packages (41.1 MB)

Downloading from base: [==============================================] 100%

Downloading from kubernetes: [========================================] 100%

Importing: [============================================================] 100%
Overlaying... error: Unpacking kubernetes-cni-0.3.0.1-0.07a8a2.x86_64: openat: No such file or directory

It turns out that kubernetes-cni installs files to /opt, and rpm-ostree, the hybrid image/package system that underpins atomic hosts, doesn’t allow for this. I managed to work around the issue by rolling my own copy of the kubeadm packages that included a kubernetes-cni package that installed its binaries to /usr/lib/opt, but I found that not only do kubernetes’ network plugins expect to find the cni binaries in /opt, but they place their own binaries in there as needed, too. In an rpm-ostree system, /usr is read-only, so even if I modified the plugins to use /usr/lib/opt, they wouldn’t be able to write to that location.

I worked around this second issue by further modding my kubernetes-cni package to use tmpfiles.d, a service for managing temporary files and runtime directories for daemons, to create symlinks from each of the cni binaries stored in /usr/lib/opt/cni/bin to locations in /opt/cni/bin. You can see the changes I made to the spec file here and find a package based on these changes in this copr.
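For what it’s worth, the tmpfiles.d entries boil down to one symlink line per plugin binary, along these lines (the file name and plugin list here are just a sketch; the real list lives in the spec file linked above):

# kubernetes-cni.conf, dropped into /usr/lib/tmpfiles.d/
L /opt/cni/bin/bridge - - - - /usr/lib/opt/cni/bin/bridge
L /opt/cni/bin/loopback - - - - /usr/lib/opt/cni/bin/loopback
L /opt/cni/bin/flannel - - - - /usr/lib/opt/cni/bin/flannel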

I’m not positive that this is the best way to work around the problem, but it allowed me to get up and running with kubeadm on CentOS Atomic Host Continuous. Here’s how to do that:

This first step may or may not be crucial; I added it to my mix on the suggestion of this kubeadm doc page while I was puzzling over why the weave network plugin wasn’t working.

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
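Those settings get picked up at boot; to apply them right away, something like this should do it, assuming the relevant bridge module is already loaded:

# sysctl --system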

This set of steps adds my copr repo, overlays the needed packages, and kicks off a reboot for the overlay to take effect.

# cat <<EOF > /etc/yum.repos.d/jasonbrooks-kube-release-epel-7.repo
[jasonbrooks-kube-release]
name=Copr repo for kube-release owned by jasonbrooks
baseurl=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/epel-7-x86_64/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF

# rpm-ostree pkg-add --reboot kubelet kubeadm kubectl kubernetes-cni

These steps start the kubelet service, put selinux into permissive mode (which, according to this doc page, should soon not be necessary), and initialize the cluster.

# systemctl enable kubelet.service --now

# setenforce 0

# kubeadm init --use-kubernetes-version "v1.4.3"

This step assigns the master node to also serve as a worker, and then deploys the weave network plugin on the cluster. To add additional workers, use the kubeadm join command provided when the cluster init operation completes.

# kubectl taint nodes --all dedicated-

# kubectl apply -f https://git.io/weave-kube

When the command kubectl get pods --all-namespaces shows that all of your pods are up and running, the cluster is ready for action.

The kubeadm tool is considered an “alpha” right now, but moving forward, this looks like it could be a great way to come up with an up-to-date kube cluster on atomic hosts. I’ll need to figure out whether my workaround to get kubernetes-cni working is sane enough before building a more official centos or fedora package for this, and I want to figure out how to swap out the project-built, debian-based kubernetes component containers with containers provided by centos or fedora, a topic I’ve written a bit about recently.

update: While the hyperkube containers that I’ve written about in the past were based on debian, the containers that kubeadm downloads appear to be built on busybox.

upstream hyperkube, rpm edition

I’ve written recently about running kubernetes in containers on an atomic host. There are a few different ways to do it, but the simplest method involves fetching and running the Debian-based container provided by the upstream kubernetes project.

Debian is awesome, but I’m team RPM — when I run containerized apps, I tend to base them on CentOS or Fedora. If I can run kubernetes itself from an image based on one of those distros, I can save myself some storage and network transfer up front, and set myself up better to understand what’s going on inside the kubernetes containers.

As it turns out, it was pretty easy to mod the Makefile and Dockerfile that generate the containers. I swapped the Debian apt-get specific bits for yum ones, changed the default baseimage to centos:centos7, and removed the gcloud-specific push command.

The script expects to get a freshly-built copy of the all-in-one hyperkube binary that wraps together all of the kubernetes components from your local system. I modded the Makefile to grab a pre-built (by the kubernetes project) copy of this binary if it doesn’t exist on your machine.
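That fetch is nothing fancy; it pulls the binary from the kubernetes release bucket, roughly like this:

$ curl -sSL -o hyperkube https://storage.googleapis.com/kubernetes-release/release/v1.3.6/bin/linux/amd64/hyperkube
$ chmod +x hyperkube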

Here’s how to make your own CentOS or Fedora-based kubernetes container, which you can then run using the directions under the heading “Containers from Upstream” from this post:

$ git clone https://github.com/jasonbrooks/kubernetes.git

$ cd kubernetes

$ git checkout hyperkube-rpm

$ cd cluster/images/hyperkube

$ make VERSION=v1.3.6

That command would build a CentOS-based hyperkube container, targeting the 1.3.6 release. To build and push to your docker registry, you could use the command:

$ make push VERSION=v1.3.6 REGISTRY="YOUR-DOCKER-REGISTRY"

To build and push a Fedora-based container with the very latest kube beta container, you can bump the VERSION and add a BASEIMAGE argument:

$ make push VERSION=v1.4.0-beta.8 REGISTRY="YOUR-DOCKER-REGISTRY" BASEIMAGE=fedora:24

I tested out v1.3.7 and v1.4.0-beta.8 on CentOS Atomic, but hit this issue with cAdvisor and cgroups. I dialed the version back to v1.3.6, and that worked.

running kubernetes in containers on atomic

The atomic hosts from CentOS and Fedora earn their “atomic” namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.

This “system” vs “application” division isn’t set in stone, however. There’s room for system components to move across from the somewhat rigid world of ostree commits to the freer-flowing container side.

In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.

Suraj Deshmukh wrote a post recently about running kubernetes in containers. He wanted to test kubernetes 1.3, for which Fedora packages aren’t yet available, so he turned to the upstream kubernetes-on-docker.

Suraj ran into trouble with flannel and etcd, so he ran those from installed rpms. Flannel can be tricky to run as a docker container, because docker’s own configs must be modified to use flannel, so there’s a bit of a chicken-and-egg situation.

One solution is system containers for atomic, which can be run independently from the docker daemon. Giuseppe Scrivano has built example containers for flannel and for etcd, and in this post, I’m describing how to use these system containers alongside a containerized kubernetes on an atomic host.

setting up flannel and etcd

You need a very recent version of the atomic command. I used a pair of CentOS Atomic Hosts running the “continuous” stream.

The master host needs etcd and flannel:

# atomic pull gscrivano/etcd

# atomic pull gscrivano/flannel

# atomic install --system gscrivano/etcd

With etcd running, we can use it to configure flannel:

# export MASTER_IP=YOUR-MASTER-IP

# runc exec gscrivano-etcd etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

# atomic install --name=flannel --set FLANNELD_ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

The worker node needs flannel as well:

# export MASTER_IP=YOUR-MASTER-IP

# atomic pull gscrivano/flannel

# atomic install --name=flannel --set ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

On both the master and the worker, we need to make docker use flannel:

# echo "/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker" | runc exec flannel bash

Also on both hosts, we need this docker tweak (because of this):

# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

# sed -i s/MountFlags=slave/MountFlags=/g /etc/systemd/system/docker.service

# systemctl daemon-reload

# systemctl restart docker

On both hosts, some context tweaks to make SELinux happy:

# mkdir -p /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/docker/

setting up kube

With flannel and etcd running in system containers, and with docker configured properly, we can start up kubernetes in containers. I’ve pulled the following docker run commands from the docker-multinode scripts in the kubernetes project’s kube-deploy repository.

On the master:

# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--hostname-override=${MASTER_IP} \
--v=2

On the worker:

# export WORKER_IP=YOUR-WORKER-IP

# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://${MASTER_IP}:8080 \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--hostname-override=${WORKER_IP} \
--v=2

# docker run -d \
--net=host \
--privileged \
--name kube_proxy_$(date | md5sum | cut -c-5) \
--restart="unless-stopped" \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2

get current kubectl

I usually test things out from the master node, so I’ll download the newest stable kubectl binary to there:

# curl -sSL https://storage.googleapis.com/kubernetes-release/release/$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt")/bin/linux/amd64/kubectl > /usr/local/bin/kubectl

# chmod +x /usr/local/bin/kubectl
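As a quick sanity check, once the kubelet containers have had a minute to register, something like this should list both of your hosts:

# kubectl get nodes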

test it

It takes a few minutes for all the containers to get up and running. Once they are, you can start running kubernetes apps. I typically test with the guestbookgo atomicapp:

# atomic run projectatomic/guestbookgo-atomicapp

Wait a few minutes, until kubectl get pods tells you that your guestbook and redis pods are running, and then:

# kubectl describe service guestbook | grep NodePort

Visiting the NodePort returned above at either my master or worker IP (these kube scripts configure both to serve as workers) gives me this:

fedora and docker storage

While (pretty much) everyone who’s using docker is running it on Linux, and while lots of people run docker on their laptops and desktops, most aren’t running it directly on Linux desktops and laptops. Instead, most individual docker users are relying on some sort of purpose-built Linux distribution running as a virtual machine on their Mac or Windows machine.

However, if you are (like me) running Linux on your desktop, you can run docker containers right on your bare metal, with no virtualization overhead in between. Yay, Desktop Linux!

But wait. If you are (like me) running Fedora Linux on your desktop, and if you (also like me) weren’t thinking about docker and its particular storage needs when you installed Fedora on your machine, you could be in for some perplexing issues or at least crap performance, because of the way that docker storage works on Fedora.

I’ve written about the general issue elsewhere:

…the AUFS backend that started out as Docker’s default storage option, but never made its way into the mainline Linux kernel, posed a problem for Red Hat and our “upstream first, no out-of-tree bits” ways.

The settled-upon solution was device mapper thin provisioning, which takes a block storage device to create a pool of space that can be used to create other block devices for Docker containers and images. The device mapper backend can be configured to use direct LVM volumes or you can let Docker create a pair of loopback mounted sparse files to serve as the block devices.

from: Friends Don’t Let Friends Run Docker on Loopback in Production

When you install Fedora on your desktop or laptop, the installer divvies your entire disk up into a small boot partition and a big LVM partition, and then divides that LVM space up into a swap volume that varies in size based on how much RAM you have installed, a root volume of 50GB, and a home volume that takes over whatever’s left.

With no room left for the pool of space that the docker device mapper storage driver needs for containers and images, the storage driver will turn instead to crappily-performing loopback mounted files. Boo!
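You can check whether you’ve landed in loopback territory by asking docker itself; with the devicemapper driver running on loopback, docker info lists loop files for data and metadata (and newer versions print a warning about it):

$ sudo docker info | grep -A 10 "Storage Driver"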

A Fix

You can cut back the size of the home volume in Fedora without too much trouble. I like to use system-storage-manager to work with my disks:

NOTE: I guess I should add that whenever you’re mucking with your disks, you should make sure you have backups, and so on, but I have resized my own laptop partitions in just this way on more than one occasion, and I’ve tested the steps written here with a VM as I wrote this, so, yeah.

$ sudo dnf install -y system-storage-manager

Next, reboot your machine, and when you get back to the login screen, hit CTRL-ALT-F2 to get to a virtual terminal, and then log in as root. We need to do this in order to unmount the home directory before we shrink it. As root, you can use system-storage-manager to shrink down the home volume. Below I’m shrinking the home volume to 20G, because I’m testing these instructions on a VM with a 100GB drive. Substitute a value that makes sense for your rig.

# umount /home
# ssm resize -s 20G /dev/fedora/home
# reboot

If you’ve already installed and run docker, you’ll need to delete /var/lib/docker, where all of your containers and volumes live, so be prepared to rebuild those.

$ sudo systemctl stop docker
$ sudo rm -rf /var/lib/docker
$ sudo systemctl start docker

When docker starts up again, a script that comes bundled with Fedora’s distribution of docker will check to see that there’s space available in your volume group and will set up your storage correctly. If you want to grow your home volume later, it’s easy to do and doesn’t require unmounting anything. You’ll run the same ssm resize command from above, and swap in your desired volume size.
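For example, assuming there’s free space left in the volume group, growing the home volume to 60G would look something like:

$ sudo ssm resize -s 60G /dev/fedora/home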

NOTE: If you’re using docker-engine from docker.com, check out these docs for setting up the devicemapper driver correctly by hand.

Starting out right

If you haven’t yet installed Fedora, you can configure your system to accommodate this and other LVM storage scenarios moving forward by making your home volume smaller and modifying your “fedora” volume group from its default “Automatic” size policy to the “As large as possible” policy. This way, all your spare disk space will be ready for new volumes (such as the docker thin pool) or for growing your home or root volumes if you decide that you need the space later on. This is probably how Fedora partitioning should be configured by default, anyway, but it isn’t.

Looking ahead

Finally, there’s another option on the horizon for docker storage on Fedora, an option that doesn’t require partition changes or planning: OverlayFS. I wrote about this in the post I linked above, too, but the TLDR is that OverlayFS and SELinux don’t work together yet, although that’s set to change. Stay tuned.