
Paying for the News

I’ve been paying extra attention to the news these days, because of the election, so I’ve been having lots of interactions with the Washington Post’s “You Have X Free Articles Left This Month” subscription nag screens, and the similar ones from the New York Times. Sometimes, I ridiculously pause before clicking on a link, wondering whether I have free articles left and whether I should click.

When I find myself clicking on links to my hometown San Francisco Chronicle, it’s usually for Giants or Warriors beat reporting, but the Chronicle doesn’t offer any free articles at all.

I agree with the idea of paying for the news, and I’ve considered subscribing to the Washington Post a few times during the election season, but I always ask myself, “Why should I subscribe to some East Coast newspaper, when I want to support and consume local news?”

The trouble is, I subscribed to the digital edition of the San Francisco Chronicle for several months last year, and I didn’t like it. I found the local reporting thin, and the rest of it substandard. I liked the sports reporting well enough, but my overall takeaway was: I don’t like this product and I don’t want to pay for it anymore. So I stopped paying for it.

What I’d like is a way to subscribe to a service that’d give me access to multiple newspapers. The service could track which ones I read the most and divvy up the funds appropriately. That way, the pubs with more engaging content would end up with more of my dollars.
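
A back-of-the-envelope sketch of that divvying-up, with invented numbers (the paper names, read counts and $30 fee below are all placeholders), might look like:

```shell
# Toy sketch of the proportional split: each paper's share of a $30
# monthly fee is its fraction of the articles I read.
cat <<'EOF' > reads.txt
chronicle 12
washpost 6
nytimes 2
EOF
awk -v fee=30 '{reads[$1] = $2; total += $2}
END {for (p in reads) printf "%s %.2f\n", p, fee * reads[p] / total}' reads.txt | sort
# chronicle gets $18.00, washpost $9.00, nytimes $3.00
```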

One problem with a service like this might be that there’s too little money to go around as it is, and each subscriber would probably end up sending less money to each publication. The key would be bringing in lots of new people, like me, who don’t already subscribe.

I just looked up the annual cost for subscriptions to these five newspapers in which I have some level of interest.

Newspaper         Annual Subscription
SF Chronicle      $99
Washington Post   $99
NY Times          $195
LA Times          $103
Mercury News      $130

These newspapers are each asking around $10 a month for their digital subscriptions. I imagine I’d be willing to pay around three times that for a meta-subscription — give or take, depending on the participating pubs.

Update: I ended up buying one-year subscriptions to the SF Chronicle and to the Washington Post.


WordPress is not delighting me, followup

Followup to my post yesterday about WordPress, me, and insufficient delight.

I mentioned that my editor fonts look crappy. I noticed that as of version 4.6, the dashboard is supposed to take “advantage of the fonts you already have, making it load faster and letting you feel more at home on whatever device you use.” It may be doing that for fonts outside of the HTML editor tab, but for that tab, it isn’t using my chosen monospace font. I mentioned that I could probably fix this with Stylish, and I just did, and life’s a lot better now.

I mentioned that my markdown text is getting converted to HTML, which I really dislike. The WordPress.com account on Twitter kindly replied to tell me that this is a feature, not a bug.

I couldn’t find a mention of this change in any of the past years’ changelogs, and the doc page for WordPress markdown disagrees, but maybe it’s just in need of an update.

I mentioned that the media manager wasn’t surfaced in the new UI, and that that’s how I’d been uploading images to include in my posts. WordPress.com pointed out on Twitter that I can use the Add Media button in the editor…

But, there’s no Add Media button in the HTML tab of the editor, which is where I edit my markdown, which, I guess, will automatically convert at some future point to HTML anyway, so…

However, I realized that I can use the Set Featured Image button in the sidebar to upload an image, copy its URL, uncheck the featured image checkbox, cancel out of the dialog, and then paste that URL into my post, and that works.

Anyway, I let my annual premium subscription auto-renew about a month and a half ago, so I’m out of the refund window, so I’ll probably stick around, although this markdown to HTML autoconvert misfeature is pretty distressing. Worst case scenario, I’m supporting open source software, so there’s that.

WordPress is not delighting me

I’ve switched blog engines from WordPress to Middleman (a static website engine) and back to WordPress, with various other static engine experiments in between.

I switched back to WordPress, on a premium subscription, because WordPress started supporting markdown, which I like, and because WordPress is open source software (with open source comments support), which I also like. What’s more, paying for hosting through Automattic means not having to mess with WordPress updates myself, and means helping to support a legit open source software company, and I’m into both of those, big time.

BUT. I’m not totally delighted with WordPress. It has to do, mostly, with editor issues.

First, fonts in the editor look crappy, and I can’t figure out how to change that. It’s this low-contrast bullshit that you see everywhere these days. I mean, I’m sure I could use something like Stylish to mod the way the fonts in the editor look so that I can actually use it comfortably, but… that shouldn’t be necessary, right?

Fonts are nicer-looking in the “visual” tab, but as I mentioned, I’m writing in markdown. I avoid writing in HTML unless I can’t avoid it. And even clicking into the visual tab tempts the specter of…

Markdown Reverts to HTML.

UGH! I freaking hate when this happens. Here and there, for reasons I don’t understand, and I’ve been too annoyed about the whole thing to patiently test it out, some of the posts I’ve written in markdown transform into HTML:

That’s a screenshot of the revisions feature, which I use to convince myself that I’m not crazy and that I really did write my post in markdown. I can use this feature to revert to before WordPress crappified my text into HTML, but the revisions feature is only surfaced in the “old” UI, and that old UI is full of nags about using the “new” UI instead.

The new UI is cleaner-looking, but is missing a bunch of stuff, like links to media management, which I need to upload pictures that I want to include in my posts, pictures that I can store in the storage space that I’m paying for as part of my premium subscription. Which brings to mind…

Upsell messages about the business-class service tier. The preview feature used to include little buttons for toggling between web, tablet and mobile site previews, but that screen now includes a fourth button, labelled SEO, and clicking it brings up an ad for the $25 per month business tier of WordPress service.

And of course, customization is a bit of a PITA. WordPress themes are legion, and there isn’t a nice way to filter searches for these by things like theme features supported, so there’s a ton of trial and error involved in finding a theme that’ll work for you.

My big issue has been finding themes with proper support for the “link” post type, and by proper I mean that if I include a link post, I expect the headline to link straight through to the final source, not to a stupid little stub page (I hate it when sites do this), and I want my rss entry to link straight through, too.

The theme I’m using now works this way, but I wish a couple of little things with the site were a bit different, and I know that much is doable via custom CSS, but…

I’m disheartened to Google for WordPress solutions, only to find sad five-year-old posts asking similar questions, often unanswered. When there are answers, they very often involve installing plugins, which you can’t do with hosted WordPress, and which are probably buggy and out of date anyway.

BUT, whatever. If markdown worked well, and it really ought to, and if the editor got some more love, which… I don’t know, maybe it has gotten love, just not from anyone who actually uses the editor, at least not along with markdown. If these little editing bits were working better, I’d probably be pretty happy, and I imagine I’ll someday beat my CSS demons to figure out the rest.

Installing Kubernetes on CentOS Atomic Host with kubeadm

Version 1.4 of Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications, included an awesome new tool for bootstrapping clusters: kubeadm.

Using kubeadm is as simple as installing the tool on a set of servers, running kubeadm init to initialize a master for the cluster, and running kubeadm join on some nodes to join them to the cluster. With kubeadm, the kubelet is installed as a regular software package, and the rest of the components run as docker containers.

The tool is available in packaged form for CentOS and for Ubuntu hosts, so I figured I’d avail myself of the package-layering capabilities of CentOS Atomic Host Continuous to install the kubeadm rpm on a few of these hosts to get up and running with an up-to-date and mostly-containerized kubernetes cluster.

However, I hit an issue trying to install one of the dependencies for kubeadm, kubernetes-cni:

# rpm-ostree pkg-add kubelet kubeadm kubectl kubernetes-cni
Checking out tree 060b08b... done

Downloading metadata: [=================================================] 100%
Resolving dependencies... done
Will download: 5 packages (41.1 MB)

Downloading from base: [==============================================] 100%

Downloading from kubernetes: [========================================] 100%

Importing: [============================================================] 100%
Overlaying... error: Unpacking kubernetes-cni-0.3.0.1-0.07a8a2.x86_64: openat: No such file or directory

It turns out that kubernetes-cni installs files to /opt, and rpm-ostree, the hybrid image/package system that underpins atomic hosts, doesn’t allow for this. I managed to work around the issue by rolling my own copy of the kubeadm packages that included a kubernetes-cni package that installed its binaries to /usr/lib/opt, but I found that not only do kubernetes’ network plugins expect to find the cni binaries in /opt, but they place their own binaries in there as needed, too. In an rpm-ostree system, /usr is read-only, so even if I modified the plugins to use /usr/lib/opt, they wouldn’t be able to write to that location.

I worked around this second issue by further modding my kubernetes-cni package to use tmpfiles.d, a service for managing temporary files and runtime directories for daemons, to create symlinks from each of the cni binaries stored in /usr/lib/opt/cni/bin to locations in /opt/cni/bin. You can see the changes I made to the spec file here and find a package based on these changes in this copr.
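
For illustration, the tmpfiles.d entries look roughly like this. The binary names below are examples, not the actual manifest of the package, and on a real host the file would live in /etc/tmpfiles.d/ rather than the working directory:

```shell
# Hypothetical tmpfiles.d entries: "L" lines tell systemd-tmpfiles to
# create symlinks at the given path pointing at the last field.
cat <<'EOF' > kubernetes-cni.conf
L /opt/cni/bin/bridge  - - - - /usr/lib/opt/cni/bin/bridge
L /opt/cni/bin/flannel - - - - /usr/lib/opt/cni/bin/flannel
EOF
# On a host: systemd-tmpfiles --create kubernetes-cni.conf
grep -c '^L' kubernetes-cni.conf
```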

I’m not positive that this is the best way to work around the problem, but it allowed me to get up and running with kubeadm on CentOS Atomic Host Continuous. Here’s how to do that:

This first step may or may not be crucial; I added it to my mix on the suggestion of this kubeadm doc page while I was puzzling over why the weave network plugin wasn’t working.

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

This set of steps adds my copr repo, overlays the needed packages, and kicks off a reboot for the overlay to take effect.

# cat <<EOF > /etc/yum.repos.d/jasonbrooks-kube-release-epel-7.repo
[jasonbrooks-kube-release]
name=Copr repo for kube-release owned by jasonbrooks
baseurl=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/epel-7-x86_64/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/jasonbrooks/kube-release/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
EOF

# rpm-ostree pkg-add --reboot kubelet kubeadm kubectl kubernetes-cni

These steps start the kubelet service, put selinux into permissive mode, which, according to this doc page, should soon not be necessary, and initialize the cluster.

# systemctl enable kubelet.service --now

# setenforce 0

# kubeadm init --use-kubernetes-version "v1.4.3"

These steps assign the master node to also serve as a worker, and then deploy the weave network plugin on the cluster. To add additional workers, use the kubeadm join command provided when the cluster init operation completes.

# kubectl taint nodes --all dedicated-

# kubectl apply -f https://git.io/weave-kube

When the command kubectl get pods --all-namespaces shows that all of your pods are up and running, the cluster is ready for action.
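
When watching the cluster come up by hand, a filter like the following flags any pod that isn’t Running yet. STATUS is the fourth column of the --no-headers output; sample output is inlined here (made-up pod names) so the filter itself can be exercised:

```shell
# Sample `kubectl get pods --all-namespaces --no-headers` output:
cat <<'EOF' > pods.sample
kube-system   kube-dns-v19-abc12   3/3   Running             0   2m
kube-system   weave-net-xyz89      2/2   ContainerCreating   0   1m
EOF
# Print the names of pods whose STATUS isn't "Running":
awk '$4 != "Running" {print $2}' pods.sample
```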

The kubeadm tool is considered an “alpha” right now, but moving forward, this looks like it could be a great way to come up with an up-to-date kube cluster on atomic hosts. I’ll need to figure out whether my workaround to get kubernetes-cni working is sane enough before building a more official centos or fedora package for this, and I want to figure out how to swap out the project-built, debian-based kubernetes component containers with containers provided by centos or fedora, a topic I’ve written a bit about recently.

update: While the hyperkube containers that I’ve written about in the past were based on debian, the containers that kubeadm downloads appear to be built on busybox.

upstream hyperkube, rpm edition

I’ve written recently about running kubernetes in containers on an atomic host. There are a few different ways to do it, but the simplest method involves fetching and running the Debian-based container provided by the upstream kubernetes project.

Debian is awesome, but I’m team RPM — when I run containerized apps, I tend to base them on CentOS or Fedora. If I can run kubernetes itself from an image based on one of those distros, I can save myself some storage and network transfer up front, and set myself up better to understand what’s going on inside the kubernetes containers.

As it turns out, it was pretty easy to mod the Makefile and Dockerfile that generate the containers. I swapped the Debian apt-get specific bits for yum ones, changed the default baseimage to centos:centos7, and removed the gcloud-specific push command.

The script expects to find, on your local system, a freshly-built copy of the all-in-one hyperkube binary that wraps together all of the kubernetes components. I modded the Makefile to grab a pre-built (by the kubernetes project) copy of this binary if it doesn’t exist on your machine.
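
The fallback amounts to something like this. The URL is the standard kubernetes release bucket; the actual Makefile rule may differ in its details, and a stand-in file is created here so the sketch runs without network access:

```shell
VERSION=v1.3.6
# An existing local build short-circuits the download; create a stand-in
# here so this sketch doesn't actually hit the network.
: > hyperkube
if [ -f hyperkube ]; then
  echo "using existing hyperkube binary"
else
  curl -sSL -o hyperkube \
    "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/hyperkube"
fi
```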

Here’s how to make your own CentOS or Fedora-based kubernetes container, which you can then run using the directions under the heading “Containers from Upstream” from this post:

$ git clone https://github.com/jasonbrooks/kubernetes.git

$ cd kubernetes

$ git checkout hyperkube-rpm

$ cd cluster/images/hyperkube

$ make VERSION=v1.3.6

That command would build a CentOS-based hyperkube container, targeting the 1.3.6 release. To build and push to your docker registry, you could use the command:

$ make push VERSION=v1.3.6 REGISTRY="YOUR-DOCKER-REGISTRY"

To build and push a Fedora-based container with the very latest kube beta container, you can bump the VERSION and add a BASEIMAGE argument:

$ make push VERSION=v1.4.0-beta.8 REGISTRY="YOUR-DOCKER-REGISTRY" BASEIMAGE=fedora:24

I tested out v1.3.7 and v1.4.0-beta.8 on CentOS Atomic, but hit this issue with cAdvisor and cgroups. I dialed the version back to v1.3.6, and that worked.

New CentOS Atomic Host with Package Layering Support

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (tree version 7.20160818), featuring support for rpm-ostree package layering.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. Check out the CentOS wiki for download links and installation instructions, or read on to learn more about what’s new in this release.

http://www.projectatomic.io/blog/2016/08/new-centos-atomic-host-with-package-layering-support/

Up and Running with oVirt 4 and Gluster Storage

In June, the oVirt Project shipped version 4.0 of its open source virtualization management system. With a new release comes an update to this howto for running oVirt together with Gluster storage using a trio of servers to provide for the system’s virtualization and storage needs, in a configuration that allows you to take one of the three hosts down at a time without disrupting your running VMs.

One of the biggest new elements in this version of the howto is the introduction of gdeploy, an Ansible-based deployment tool that was initially written to install GlusterFS clusters, but that’s grown to take on a bunch of complementary tasks. For this process, it’ll save us a bunch of typing and speed things up significantly.

Read more on the oVirt blog

running kubernetes in containers on atomic

The atomic hosts from CentOS and Fedora earn their “atomic” namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.

This “system” vs “application” division isn’t set in stone, however. There’s room for system components to move across from the somewhat rigid world of ostree commits to the freer-flowing container side.

In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.

Suraj Deshmukh wrote a post recently about running kubernetes in containers. He wanted to test kubernetes 1.3, for which Fedora packages aren’t yet available, so he turned to the upstream kubernetes-on-docker.

Suraj ran into trouble with flannel and etcd, so he ran those from installed rpms. Flannel can be tricky to run as a docker container, because docker’s own configs must be modified to use flannel, so there’s a bit of a chicken-and-egg situation.

One solution is system containers for atomic, which can be run independently from the docker daemon. Giuseppe Scrivano has built example containers for flannel and for etcd, and in this post, I’m describing how to use these system containers alongside a containerized kubernetes on an atomic host.

setting up flannel and etcd

You need a very recent version of the atomic command. I used a pair of CentOS Atomic Hosts running the “continuous” stream.

The master host needs etcd and flannel:

# atomic pull gscrivano/etcd

# atomic pull gscrivano/flannel

# atomic install --system gscrivano/etcd

With etcd running, we can use it to configure flannel:

# export MASTER_IP=YOUR-MASTER-IP

# runc exec gscrivano-etcd etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

# atomic install --name=flannel --set FLANNELD_ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

The worker node needs flannel as well:

# export MASTER_IP=YOUR-MASTER-IP

# atomic pull gscrivano/flannel

# atomic install --name=flannel --set ETCD_ENDPOINTS=http://$MASTER_IP:2379 --system gscrivano/flannel

On both the master and the worker, we need to make docker use flannel:

# echo "/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker" | runc exec flannel bash

Also on both hosts, we need this docker tweak (because of this):

# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

# sed -i s/MountFlags=slave/MountFlags=/g /etc/systemd/system/docker.service

# systemctl daemon-reload

# systemctl restart docker
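
The sed edit above simply empties the MountFlags value. Demonstrated here on a sample unit fragment rather than the real unit file:

```shell
# A two-line stand-in for the docker unit file:
printf 'ExecStart=/usr/bin/dockerd\nMountFlags=slave\n' > docker.service.sample
# Same substitution as above: "MountFlags=slave" becomes "MountFlags=".
sed -i s/MountFlags=slave/MountFlags=/g docker.service.sample
grep MountFlags docker.service.sample
# prints: MountFlags=
```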

On both hosts, some context tweaks to make SELinux happy:

# mkdir -p /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/kubelet/

# chcon -R -t svirt_sandbox_file_t /var/lib/docker/

setting up kube

With flannel and etcd running in system containers, and with docker configured properly, we can start up kubernetes in containers. I’ve pulled the following docker run commands from the docker-multinode scripts in the kubernetes project’s kube-deploy repository.

On the master:

# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--hostname-override=${MASTER_IP} \
--v=2

On the worker:

# export WORKER_IP=YOUR-WORKER-IP

# docker run -d \
--net=host \
--pid=host \
--privileged \
--restart="unless-stopped" \
--name kube_kubelet_$(date | md5sum | cut -c-5) \
-v /sys:/sys:rw \
-v /var/run:/var/run:rw \
-v /run:/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/kubelet:/var/lib/kubelet:shared \
-v /var/log/containers:/var/log/containers:rw \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube kubelet \
--allow-privileged \
--api-servers=http://${MASTER_IP}:8080 \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--hostname-override=${WORKER_IP} \
--v=2

# docker run -d \
--net=host \
--privileged \
--name kube_proxy_$(date | md5sum | cut -c-5) \
--restart="unless-stopped" \
gcr.io/google_containers/hyperkube-amd64:$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt") \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
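
The container names in the runs above get a unique-ish suffix: the first five characters of an md5sum of the current timestamp, so repeated runs don’t collide on container names. That suffix generation, in isolation:

```shell
# Five hex characters derived from the current time; a new value each second.
date | md5sum | cut -c-5
```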

get current kubectl

I usually test things out from the master node, so I’ll download the newest stable kubectl binary to there:

# curl -sSL https://storage.googleapis.com/kubernetes-release/release/$(curl -sSL "https://storage.googleapis.com/kubernetes-release/release/stable.txt")/bin/linux/amd64/kubectl > /usr/local/bin/kubectl

# chmod +x /usr/local/bin/kubectl

test it

It takes a few minutes for all the containers to get up and running. Once they are, you can start running kubernetes apps. I typically test with the guestbookgo atomicapp:

# atomic run projectatomic/guestbookgo-atomicapp

Wait a few minutes, until kubectl get pods tells you that your guestbook and redis pods are running, and then:

# kubectl describe service guestbook | grep NodePort

Visiting the NodePort returned above at either my master or worker IP (these kube scripts configure both to serve as workers) brings up the running guestbook app.

Download and Get Involved with Fedora Atomic 24

This week, the Fedora Project released updated images for its Fedora 24-based Atomic Host. Fedora Atomic Host is a leading edge operating system designed around Kubernetes and Docker containers.

Fedora Atomic Host images are updated roughly every two weeks, rather than on the main six-month Fedora cadence. Because development is moving quickly, only the latest major Fedora release is supported.

Read more on the Project Atomic blog