Using kubeadm is as simple as installing the tool on a set of servers, running
kubeadm init to initialize a master for the cluster, and running
kubeadm join on some nodes to join them to the cluster. With kubeadm, the kubelet is installed as a regular software package, and the rest of the components run as Docker containers.
The tool is available in packaged form for CentOS and for Ubuntu hosts, so I figured I’d avail myself of the package-layering capabilities of CentOS Atomic Host Continuous to install the kubeadm rpm on a few of these hosts and get up and running with an up-to-date, mostly containerized Kubernetes cluster.
However, I hit an issue trying to install one of the dependencies for kubeadm, kubernetes-cni:
# rpm-ostree pkg-add kubelet kubeadm kubectl kubernetes-cni
Checking out tree 060b08b... done
Downloading metadata: [=================================================] 100%
Resolving dependencies... done
Will download: 5 packages (41.1 MB)
Downloading from base: [==============================================] 100%
Downloading from kubernetes: [========================================] 100%
Importing: [============================================================] 100%
Overlaying... error: Unpacking kubernetes-cni-0.3.0.1-0.07a8a2.x86_64: openat: No such file or directory
It turns out that kubernetes-cni installs files to /opt, and rpm-ostree, the hybrid image/package system that underpins atomic hosts, doesn’t allow for this. I managed to work around the issue by rolling my own copy of the kubeadm packages that included a kubernetes-cni package that installed its binaries to /usr/lib/opt, but I found that not only do Kubernetes’ network plugins expect to find the CNI binaries in /opt, they also place their own binaries there as needed. In an rpm-ostree system, /usr is read-only, so even if I modified the plugins to use /usr/lib/opt, they wouldn’t be able to write to that location.
I worked around this second issue by further modding my kubernetes-cni package to use tmpfiles.d, a service for managing temporary files and runtime directories for daemons, to create symlinks from each of the CNI binaries stored in /usr/lib/opt/cni/bin to locations in /opt/cni/bin. You can see the changes I made to the spec file here, and find a package based on these changes in this copr.
I’m not positive that this is the best way to work around the problem, but it allowed me to get up and running with kubeadm on CentOS Atomic Host Continuous. Here’s how to do that:
This first step may or may not be crucial; I added it to my mix on the suggestion of this kubeadm doc page while I was puzzling over why the weave network plugin wasn’t working.
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
This set of steps adds my copr repo, overlays the needed packages, and kicks off a reboot for the overlay to take effect.
# cat <<EOF > /etc/yum.repos.d/jasonbrooks-kube-release-epel-7.repo
[jasonbrooks-kube-release]
name=Copr repo for kube-release owned by jasonbrooks
baseurl=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/epel-7-x86_64/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF

# rpm-ostree pkg-add --reboot kubelet kubeadm kubectl kubernetes-cni
These steps start the kubelet service, put SELinux into permissive mode (which, according to this doc page, should soon not be necessary), and initialize the cluster.
# systemctl enable kubelet.service --now
# setenforce 0
# kubeadm init --use-kubernetes-version "v1.4.3"
This step allows the master node to also serve as a worker, and then deploys the weave network plugin on the cluster. To add additional workers, use the kubeadm join command provided when the cluster init operation completes.
# kubectl taint nodes --all dedicated-
# kubectl apply -f https://git.io/weave-kube
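For reference, the join command that kubeadm init prints takes roughly this form in the 1.4-era tool. The token and address here are placeholders, not real values; use the exact command from your own init output.

```
# kubeadm join --token=<token> <master-ip>
```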
When the command kubectl get pods --all-namespaces shows that all of your pods are up and running, the cluster is ready for action.
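Rather than eyeballing that output, you can count the not-yet-ready pods with a bit of awk. This is a sketch: pods_pending is my own helper name, and it assumes the default column layout of kubectl get pods --all-namespaces --no-headers, where STATUS is the fourth column.

```shell
# pods_pending: read `kubectl get pods --all-namespaces --no-headers`
# output on stdin and print how many pods are not yet Running or Completed.
pods_pending() {
  awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n+0 }'
}

# Example: poll every few seconds until everything is up.
# while [ "$(kubectl get pods --all-namespaces --no-headers | pods_pending)" != "0" ]; do
#   sleep 5
# done
```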
The kubeadm tool is considered “alpha” right now, but moving forward, this looks like it could be a great way to bring up an up-to-date kube cluster on atomic hosts. I’ll need to figure out whether my workaround to get kubernetes-cni working is sane enough before building a more official CentOS or Fedora package for this, and I want to figure out how to swap out the project-built, Debian-based Kubernetes component containers for containers provided by CentOS or Fedora, a topic I’ve written a bit about recently.
Update: While the hyperkube containers that I’ve written about in the past were based on Debian, the containers that kubeadm downloads appear to be built on BusyBox.