Installing Kubeadm on Fedora CoreOS

There are many different ways to bring up a Kubernetes cluster, but the simplest option I’ve found for getting up and running with a single or multi-node cluster involves a tool called kubeadm, for which the Kubernetes project maintains good installation and configuration docs.

These docs include directions for hosts running Debian/Ubuntu, RHEL/CentOS and Container Linux, but the host I’m interested in is Fedora CoreOS — the successor project to Container Linux and Fedora Atomic, which is currently available as an experimental preview.

Now, the Container Linux directions do work for Fedora CoreOS. In fact, since these steps simply involve copying binaries and systemd unit files onto the host, they’d likely work for any sort of Linux host.

The Debian/Ubuntu and RHEL/CentOS directions involve deb and rpm software packages, which are maintained by the Kubernetes project. As with software packages more generally, these debs and rpms cut out some of the installation steps, handle dependencies, and offer a mechanism for future updates, so I prefer to use them.

As with Container Linux and Fedora Atomic Host, Fedora CoreOS ships system and library dependencies in a (more or less) immutable image, and is meant to host applications running in containers. However, Fedora CoreOS images are assembled using the tool rpm-ostree, which does allow for additional rpms to be layered atop the base image.

That’s why, with a little bit of modification, the RHEL/CentOS kubeadm installation steps can be made to work with Fedora CoreOS, too.

The upstream kubeadm installation directions for RHEL/CentOS begin by configuring a yum repository:

sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
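# The repo body below follows the upstream kubeadm install docs of the time
# (packages.cloud.google.com URLs); verify against the current docs before using.
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF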

One of the dependencies for the upstream kubelet package is a set of container networking plugins, which the kubernetes project also packages, under the name kubernetes-cni. Unfortunately, their package places these binaries under /opt, which rpm-ostree will not abide. Fedora CoreOS already includes these cni binaries in its base image, but under the name containernetworking-plugins.

I’ve made an alternate version of this package that’s modified to report that it satisfies the kubernetes-cni requirement. I’ve submitted a pull request to get this change included in Fedora’s containernetworking-plugins package — if it gets merged I’ll be able to delete this step. Until then, let’s view this as an opportunity to see rpm-ostree’s facility for replacing specific packages in the base image with alternatives:

sudo rpm-ostree override replace
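The replace operation takes the substitute rpm as its argument, either a local path or a URL. With a hypothetical URL standing in for my modified package, the full command has this shape:

sudo rpm-ostree override replace https://example.com/containernetworking-plugins-kubernetes-cni.rpm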

Next, we’ll use package layering to install the kubelet, kubeadm and kubectl binaries. I’m also installing cri-o here, because that’s the runtime I’m interested in using with kubernetes. I’m tacking an -r onto the end of this command to reboot my host, which is necessary for the replace and install layering operations to take effect.

sudo rpm-ostree install cri-o kubelet kubectl kubeadm -r

Once we’ve installed our layered packages, they’ll be updated alongside the regular image updates for our Fedora CoreOS host. If you’d rather update these packages manually, you need to edit their repo files under /etc/yum.repos.d/ to change enabled=1 to enabled=0.
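For example, to take the kubernetes repo out of the automatic update path (using the repo file we created above):

sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/kubernetes.repo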

Since we’re using cri-o as our container runtime, we need to manually set the correct cgroup driver:

echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" | sudo tee /etc/sysconfig/kubelet

SELinux can be troublesome to configure correctly, and upstream kubeadm docs deal with the issue by throwing SELinux into permissive mode. I’ve found that kubeadm runs quite happily in SELinux enforcing mode, if you pre-create a few directories and set their contexts appropriately:

for i in {/var/lib/etcd,/etc/kubernetes/pki,/etc/kubernetes/pki/etcd,/etc/cni/net.d}; do sudo mkdir -p $i && sudo chcon -Rt svirt_sandbox_file_t $i; done

Also required for cri-o are the following sysctl parameters and kernel modules, which we’ll set and configure to persist across reboots:

sudo modprobe overlay && sudo modprobe br_netfilter

sudo tee /etc/modules-load.d/crio-net.conf <<EOF
overlay
br_netfilter
EOF

sudo tee /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

Next, we’ll enable and start cri-o and the kubelet:

sudo systemctl enable --now cri-o && sudo systemctl enable --now kubelet

Finally, we’re ready to initialize our cluster using kubeadm init. Since I’m using cri-o, I need to add --cri-socket=/var/run/crio/crio.sock, and since I’m using flannel for networking, I need to include the --pod-network-cidr argument:

sudo kubeadm init --pod-network-cidr= --cri-socket=/var/run/crio/crio.sock
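Flannel’s stock manifest assumes the 10.244.0.0/16 pod network, so with that default the full command looks something like this:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=/var/run/crio/crio.sock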

Once the kubeadm init command completes, we need to follow the directions on the screen to create and populate a .kube config directory:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

As I mentioned earlier, I’m using flannel for networking, which requires a kubectl command to set up:

kubectl apply -f
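At the time of writing, flannel’s deployment manifest lives in the coreos/flannel repository, so, assuming that location, the command looks like this:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml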

Also on the networking front, I found during my tests that my Fedora CoreOS host was configuring an address on the cni0 interface that conflicted with my flannel networking. I found that if I deleted that address from the device, the cni0 interface would get a new address that worked for my cluster:

sudo ip addr del <conflicting address> dev cni0

If you’re going to run a single all-in-one node, you need to un-taint the master node so it can run pods. If you’re setting up additional nodes, you’ll need to re-run all the steps we’ve gone through above, replacing the final kubeadm init step with the kubeadm join command that’s printed on screen at the end of the init operation. If you’re using cri-o, the additional nodes also need the --cri-socket=/var/run/crio/crio.sock argument tacked onto the join command.

kubectl taint nodes --all
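With the kubeadm releases of this era, the taint in question is node-role.kubernetes.io/master, so removing it from every node looks like this (check the kubeadm init output for the exact taint on your cluster):

kubectl taint nodes --all node-role.kubernetes.io/master-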

To make sure that everything is working properly, we can run a “hello world” deployment on our new cluster and expose the resulting pod via a NodePort service:

kubectl create deployment hello --image=nginx

kubectl expose deployment hello --type NodePort --port=80

Finally, we can find out which NodePort was assigned, and use curl to see that the server is up:

$ kubectl get svc

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello        NodePort   <none>        80:31967/TCP   3s
kubernetes   ClusterIP       <none>        443/TCP        2m14s

$ curl http://$(hostname):31967

<!DOCTYPE html>
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
<p><em>Thank you for using nginx.</em></p>

Trying Out a New Path to Kubernetes: Kubespray

I just came across this Little Guide to Kubernetes Install Options, which covers a few options I’ve heard of, and a few options I haven’t heard of. It doesn’t mention the main way that I deploy Kubernetes, which is through the Ansible scripts from the kubernetes/contrib repository. The post does point to another Ansible-based option, though, and I wondered whether this one, called Kubespray (nee Kargo) would work with Atomic Hosts.

I installed kubespray:

$ sudo pip2 install kubespray

I generated an inventory for a baremetal (actually VMs) cluster with one etcd host / kube master and two nodes:

$ kubespray prepare --nodes node2[ansible_ssh_host=cah-2.osas.lab] node3[ansible_ssh_host=cah-3.osas.lab] --etcds node1[ansible_ssh_host=cah-1.osas.lab] --masters node1[ansible_ssh_host=cah-1.osas.lab]

I deployed the cluster, providing the argument -u root because my ansible host was already set up to access my test VMs as root via ssh key:

$ kubespray deploy -u root

The Ansible output zoomed by, eventually ending with:

PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
node1 : ok=393 changed=95 unreachable=0 failed=0
node2 : ok=333 changed=76 unreachable=0 failed=0
node3 : ok=303 changed=65 unreachable=0 failed=0

Kubernetes deployed successfuly

I tested the cluster by deploying the guestbook go sample app, as is my custom, and sure enough, everything seemed to be working.

The biggest difference between this installation route and the one I usually take is the source of the containers. Where I typically run CentOS Atomic with Kubernetes rpms from the CentOS project or with containers based on those rpms, and the same with Fedora Atomic and Fedora-based content, the Kubespray installer set me up with container images mostly from CoreOS:

[root@cah-1 ~]# atomic containers list
19d6514ceb1a /hyperkube controlle 2017-08-11 18:54 running docker docker
47bb6f63af38 /pause 2017-08-11 18:54 running docker docker
2102af0a5915 /hyperkube scheduler 2017-08-11 18:54 running docker docker
8af0c87bcfbd /pause 2017-08-11 18:54 running docker docker
c91bf4d9c687 /hyperkube apiserver 2017-08-11 18:54 running docker docker
96bc198022ac /pause 2017-08-11 18:54 running docker docker
e5cedfe5145e calico/node:v1.1.3 start_runit 2017-08-11 18:53 running docker docker
a31b6a04be23 /hyperkube proxy --v 2017-08-11 18:52 running docker docker
877aa10ab6a4 /pause 2017-08-11 18:52 running docker docker
b9f64835b7e5 ./hyperkube kubelet 2017-08-11 18:52 running docker docker
1bab52292b2d /usr/local/bin/etcd 2017-08-11 18:48 running docker docker

It’s not a big deal swapping out one container source for another, however. Fedora and CentOS aren’t providing a hyperkube container, which is what kubespray (and kubeadm, for that matter) look to use, but we could create one for Fedora and CentOS based on the upstream Dockerfile.

testing system-containerized kube and friends

A month or so ago I jotted down some notes on using ansible to set up a kubernetes cluster on atomic hosts with kubernetes running in regular docker containers and flannel and etcd running in system containers.

I’ve been working on turning my kube containers into system containers. Three reasons jump to mind:

  • I want to run my kube containers via systemd, and system containers come with systemd unit files rolled in and deployed automatically when you run atomic install --system foo, as opposed to storing the unit files somewhere separate from the containers and copying them into place.
  • I’m using flannel and etcd system containers, in part because flannel needs to modify docker’s configs to do its thing, and etcd needs to be running for flannel to run, so there’s a bit of a chicken-and-egg situation that we avoid by running flannel and etcd outside of docker. I can save on a bit of storage by having flannel, etcd and kubernetes all share the same image in the ostree-based storage that system containers use.
  • I’ve been wanting to learn more about system containers for a little while now, and Yu Qi (Jerry) Zhang just wrote this system container howto.

I’ve been testing on a trio of fedora atomic hosts like this:

$ git clone
$ cd contrib
$ git checkout system-containers
$ cd ansible
$ vi inventory/inventory




$ cd scripts
$ ./

Substitute those hostnames above with ones that match your own test machines. Alternatively, you should be able to use the Vagrantfile in the vagrant directory of that repo, though I haven’t tested that yet.
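As a rough sketch, the inventory for these scripts groups hosts into masters, etcd and nodes sections, along these lines (the hostnames here are placeholders):

[masters]
kube-master.example.com

[etcd]
kube-master.example.com

[nodes]
kube-node-1.example.com
kube-node-2.example.com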

This involves a bunch of changes to run commands like atomic install --system --name etcd {{ container_registry }}/{{ container_namespace }}/etcd:{{ container_label }}, which install flannel, etcd, and the kubernetes master and node components, as desired and specified in the inventory/group_vars/all.yml file.

In that same config file, I’ve temporarily turned off some of the newish encrypted flannel stuff, because I need to tweak the flannel container to make it work.

If you run the script as laid out above, you’ll get etcd, flannel and kube containers from my namespace in the docker hub, because the current upstream fedora containers, in the case of etcd and flannel, need a couple of changes, and in the case of kube, the upstream fedora containers (that I maintain) aren’t yet modified to run as system containers.

Speaking of which, another cool thing about system containers is that they can be run as regular docker containers. To test whether my new system containers would run as regular docker containers, I ran through the steps I mentioned in my previous post, with a different branch of ansible modded to run kube in regular docker containers, but in the all.yml conf file, I set container_registry: and container_namespace: jasonbrooks and container_label: fc25 to grab the system container versions of everything that I’ve been talking about in this post. It worked.

So, yay. I have a couple items to work through still. There’s the flannel bit I mentioned above (I think I just need to mount another dir in the flannel system container’s config.json.template). Also, I’ve been needing to restart the kubelet service again in my nodes before the kubedns pod would work, so I need to track down where in the ansible that needs to happen to make it automatic.

getting stuff done with a local openshift origin instance

A few of the projects I work with use static websites based on middleman, which you can run locally to see how your edits, or those of others, will look on the live site when they’re merged.

Each of these sites defaults to port 4567 when running locally, so if I’m running more than one of them at a time, they complain that their favored port is already taken. It’s easy enough to fire up middleman on a different port, but I thought I’d try and run a couple of these in containers, using a local instance of OpenShift Origin, a Kubernetes-based container application platform.

It’s pretty easy to get up and running with an OpenShift Origin instance using the command oc cluster up. The oc client is available for Linux, Windows and Mac OS. Since containers (pretty much) are Linux, you’ll need a Linux VM on Mac or Windows, but the oc client can use docker machine to take care of that for you. I haven’t tested that, though, because I use Linux already.

On Fedora, I followed these instructions, with the exception of installing the oc client from the Fedora repos (dnf install -y origin-clients), rather than downloading the binary from GitHub.

I wanted my origin install to persist across restarts, so I created a folder in my home directory to store persistent data, and started up my instance with:

$ sudo oc cluster up --host-data-dir=/home/jbrooks/origin-data --use-existing-config

sudo was necessary because I haven’t set up my regular user account to run docker without it — not a big deal, but some config files for logging in to my origin instance as admin ended up in my /root directory instead of my home directory, so I copied those over:

$ sudo cp -r /root/.kube ~/.
$ sudo chown -R jbrooks:jbrooks ~/.kube

I logged into the OpenShift web console using the URL and the developer:developer user name and password output by the oc cluster up command, clicked “Add to Project”, and then, under the “Languages” heading, chose “Ruby,” and then “Ruby 2.3”, because middleman is a ruby affair.

I filled in a name, pasted in the git repository URL for the ovirt middleman site, and hit “Create.”

I headed to the “Overview” page, saw that my build was running, clicked “View Log,” and saw that a familiar-looking build process was chugging along.

When the build finished, OpenShift kicked off a deployment of my image, which, as I could see from the deployment log linked from the overview page, was erroring out.

After some poking around, I fixed the issue by heading to the deployments section of the web console and, after first pausing the deployment, hitting the edit YAML button. I used the YAML editor to add a command right in between the image and ports sections of the configuration.
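For illustration, the stanza in question sits between the image: and ports: keys of the container spec and looks something like this, inferred from the compose file later in this post (the script path is a placeholder):

command:
  - scl
  - enable
  - rh-ruby23
  - /opt/app-root/src/<site run script>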

I also changed the containerPort from a default of 8080 to the middleman default of 4567. I expected this change to filter down to the service and route that were automatically created for me, but they didn’t — it wasn’t tough to edit those via the web console, however.

I added GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL environment variables to my deployment, from an “Environment” tab in the deployments area of the console. As I eventually learned, git got grumpy about running as a random UID (as is OpenShift’s security-conscious custom) rather than as a “real” user with an entry in /etc/passwd, but adding those ENV variables calmed git down.

Once I had a pod up and running, I was able to view the development site in my web browser via the URL provided in the routes section of the console.

Next, I headed to my terminal to log into my running pod with OpenShift’s oc rsh command, and fetch and check out a pending pull request on the ovirt site:

$ oc rsh ovirt-site-2-4-50eao

$ git fetch origin pull/877/head:pr-ovirt-gluster-411

$ git checkout pr-ovirt-gluster-411

The middleman development server handles live reloading, so once I checked out the new branch, it refreshed, and I could see my awaiting-merge blog post:

This works, but I’ll probably hone the process some more from here. I experimented a bit with using kompose to put together a simple docker compose-formatted manifest for my app that could either pull from an openshift-built or a built-elsewhere docker container. Like this:

version: "2"
services:
  site:
    # service name and image are placeholders; the image can be an
    # openshift-built one or one built elsewhere
    image: <image>
    ports:
      - "4567"
    environment:
      - GIT_COMMITTER_NAME="Jason Brooks"
    command:
      - scl
      - enable
      - rh-ruby23
      - /opt/app-root/src/
    labels:
      kompose.service.type: NodePort

I think that approach would then work for a regular kube cluster or, with some tweaking, probably for docker or docker swarm as well.

Installing Kubernetes on CentOS Atomic Host with kubeadm

Version 1.4 of Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications, included an awesome new tool for bootstrapping clusters: kubeadm.

Using kubeadm is as simple as installing the tool on a set of servers, running kubeadm init to initialize a master for the cluster, and running kubeadm join on some nodes to join them to the cluster. With kubeadm, the kubelet is installed as a regular software package, and the rest of the components run as docker containers.

The tool is available in packaged form for CentOS and for Ubuntu hosts, so I figured I’d avail myself of the package-layering capabilities of CentOS Atomic Host Continuous to install the kubeadm rpm on a few of these hosts to get up and running with an up-to-date and mostly-containerized kubernetes cluster.

However, I hit an issue trying to install one of the dependencies for kubeadm, kubernetes-cni:

# rpm-ostree pkg-add kubelet kubeadm kubectl kubernetes-cni
Checking out tree 060b08b... done

Downloading metadata: [=================================================] 100%
Resolving dependencies... done
Will download: 5 packages (41.1 MB)

Downloading from base: [==============================================] 100%

Downloading from kubernetes: [========================================] 100%

Importing: [============================================================] 100%
Overlaying... error: Unpacking kubernetes-cni- openat: No such file or directory

It turns out that kubernetes-cni installs files to /opt, and rpm-ostree, the hybrid image/package system that underpins atomic hosts, doesn’t allow for this. I managed to work around the issue by rolling my own copy of the kubeadm packages that included a kubernetes-cni package that installed its binaries to /usr/lib/opt, but I found that not only do kubernetes’ network plugins expect to find the cni binaries in /opt, but they place their own binaries in there as needed, too. In an rpm-ostree system, /usr is read-only, so even if I modified the plugins to use /usr/lib/opt, they wouldn’t be able to write to that location.

I worked around this second issue by further modding my kubernetes-cni package to use tmpfiles.d, a service for managing temporary files and runtime directories for daemons, to create symlinks from each of the cni binaries stored in /usr/lib/opt/cni/bin to locations in /opt/cni/bin. You can see the changes I made to the spec file here and find a package based on these changes in this copr.
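tmpfiles.d handles symlink creation with its L entry type, so the added config amounts to one line per plugin binary, roughly like this (the plugin names shown are illustrative):

L /opt/cni/bin/bridge   - - - - /usr/lib/opt/cni/bin/bridge
L /opt/cni/bin/flannel  - - - - /usr/lib/opt/cni/bin/flannel
L /opt/cni/bin/loopback - - - - /usr/lib/opt/cni/bin/loopback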

I’m not positive that this is the best way to work around the problem, but it allowed me to get up and running with kubeadm on CentOS Atomic Host Continuous. Here’s how to do that:

This first step may or may not be crucial; I added it to my mix on the suggestion of this kubeadm doc page while I was puzzling over why the weave network plugin wasn’t working.

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

This set of steps adds my copr repo, overlays the needed packages, and kicks off a reboot for the overlay to take effect.

# cat <<EOF > /etc/yum.repos.d/jasonbrooks-kube-release-epel-7.repo
[jasonbrooks-kube-release]
name=Copr repo for kube-release owned by jasonbrooks
# baseurl, gpgkey and enabled settings as generated by copr for this repo
EOF

# rpm-ostree pkg-add --reboot kubelet kubeadm kubectl kubernetes-cni

These steps start the kubelet service, put SELinux into permissive mode (which, according to this doc page, should soon not be necessary), and initialize the cluster.

# systemctl enable kubelet.service --now

# setenforce 0

# kubeadm init --use-kubernetes-version "v1.4.3"

These steps assign the master node to also serve as a worker and then deploy the weave network plugin on the cluster. To add additional workers, use the kubeadm join command provided when the cluster init operation completed.

# kubectl taint nodes --all dedicated-

# kubectl apply -f
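The kubeadm getting-started docs of this era point to weave’s shortlink for that manifest, so, assuming that source, the command looks like this:

# kubectl apply -f https://git.io/weave-kube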

When the command kubectl get pods --all-namespaces shows that all of your pods are up and running, the cluster is ready for action.

The kubeadm tool is considered an “alpha” right now, but moving forward, this looks like it could be a great way to come up with an up-to-date kube cluster on atomic hosts. I’ll need to figure out whether my workaround to get kubernetes-cni working is sane enough before building a more official centos or fedora package for this, and I want to figure out how to swap out the project-built, debian-based kubernetes component containers with containers provided by centos or fedora, a topic I’ve written a bit about recently.

update: While the hyperkube containers that I’ve written about in the past were based on debian, the containers that kubeadm downloads appear to be built on busybox.

running kubernetes in containers on atomic

The atomic hosts from CentOS and Fedora earn their “atomic” namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.

This “system” vs “application” division isn’t set in stone, however. There’s room for system components to move across from the somewhat rigid world of ostree commits to the freer-flowing container side.

In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.

Suraj Deshmukh wrote a post recently about running kubernetes in containers. He wanted to test kubernetes 1.3, for which Fedora packages aren’t yet available, so he turned to the upstream kubernetes-on-docker.

Suraj ran into trouble with flannel and etcd, so he ran those from installed rpms. Flannel can be tricky to run as a docker container, because docker’s own configs must be modified to use flannel, so there’s a bit of a chicken-and-egg situation.

One solution is system containers for atomic, which can be run independently from the docker daemon. Giuseppe Scrivano has built example containers for flannel and for etcd, and in this post, I’m describing how to use these system containers alongside a containerized kubernetes on an atomic host.

setting up flannel and etcd

You need a very recent version of the atomic command. I used a pair of CentOS Atomic Hosts running the “continuous” stream.

The master host needs etcd and flannel:

# atomic pull gscrivano/etcd

# atomic pull gscrivano/flannel

# atomic install --system gscrivano/etcd

With etcd running, we can use it to configure flannel:


# runc exec gscrivano-etcd etcdctl set / '{"Network":""}'
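flanneld reads its configuration from an etcd key, conventionally /atomic.io/network/config on Fedora and CentOS builds (upstream flannel defaults to /coreos.com/network/config), so with that key and a 10.40.0.0/16 overlay chosen as an example, the command expands to something like:

# runc exec gscrivano-etcd etcdctl set /atomic.io/network/config '{"Network":"10.40.0.0/16"}'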

# atomic install --name=flannel --set FLANNELD_ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

The worker node needs flannel as well:


# atomic pull gscrivano/flannel

# atomic install --name=flannel --set ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

On both the master and the worker, we need to make docker use flannel:

# echo "/usr/libexec/flannel/ -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker" | runc exec flannel bash

Also on both hosts, we need this docker tweak (because of this):

# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

# sed -i s/MountFlags=slave/MountFlags=/g /etc/systemd/system/docker.service

# systemctl daemon-reload

# systemctl restart docker

On both hosts, some context tweaks to make SELinux happy:

# mkdir -p /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/docker/

setting up kube

With flannel and etcd running in system containers, and with docker configured properly, we can start up kubernetes in containers. I’ve pulled the following docker run commands from the docker-multinode scripts in the kubernetes project’s kube-deploy repository.

On the master:

# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
$(curl -sSL "") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns= \
--cluster-domain=cluster.local \
--hostname-override=${MASTER_IP} \

On the worker:


# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
$(curl -sSL "") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://${MASTER_IP}:8080 \
--cluster-dns= \
--cluster-domain=cluster.local \
--hostname-override=${WORKER_IP} \

# docker run -d \
--net=host \
--privileged \
--name kube_proxy_$(date | md5sum | cut -c-5) \
--restart="unless-stopped" \$(curl -sSL "") \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \

get current kubectl

I usually test things out from the master node, so I’ll download the newest stable kubectl binary to there:

# curl -sSL$(curl -sSL "")/bin/linux/amd64/kubectl > /usr/local/bin/kubectl

# chmod +x /usr/local/bin/kubectl
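The stable kubectl binary is published in the kubernetes-release bucket on storage.googleapis.com, so, assuming that layout, the curl command above expands to:

# curl -sSL https://storage.googleapis.com/kubernetes-release/release/$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt")/bin/linux/amd64/kubectl > /usr/local/bin/kubectl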

test it

It takes a few minutes for all the containers to get up and running. Once they are, you can start running kubernetes apps. I typically test with the guestbookgo atomicapp:

# atomic run projectatomic/guestbookgo-atomicapp

Wait a few minutes, until kubectl get pods tells you that your guestbook and redis pods are running, and then:

# kubectl describe service guestbook | grep NodePort

Visiting the NodePort returned above at either my master or worker IP (these kube scripts configure both to serve as workers) brings up the guestbook app.