When an update for an application becomes available, it usually takes a while before the average user installs it. This happens simply because not every user constantly checks whether all the applications they run have an update available.
OpenAppStack wants to take this responsibility away from the user by automatically testing updates as soon as they become available and subsequently making sure all clusters running OpenAppStack update themselves to the newest version.
This process would look something like this:
- An application gets updated upstream
- We upgrade the application in our test cluster and test if everything still works (including integration with other applications)
- After the tests succeed, the updated version is made available to OpenAppStack users
- Every OpenAppStack cluster automatically downloads the update and applies it without any intervention by the cluster administrator.
Many software products already exist that could help OpenAppStack achieve this goal. This post compares those auto-update mechanisms and explains our choice in the matter.
Although we also want to provide automatic updating of the rest of the cluster (Kubernetes, maybe even the base operating system) in the future, this post focuses solely on updating applications running within Kubernetes.
We searched for tools that fit our requirements (more about those in a minute). This article is primarily based on information from each project’s README, website and documentation, and on people’s opinions in GitHub issues and on Reddit (thank you, /r/kubernetes).
We have written a list of requirements that we think are important for OpenAppStack.
First, a few hard requirements; all of the projects named in this post meet these:
- Open source
- Made for use with Kubernetes
Next, we have some soft requirements. Preferably, we find a tool that ticks all of these boxes.
Well documented

How well a project is documented is always a big issue. Poor documentation can make it cost too much time to figure out how something works, or whether something is a supported feature at all. In the table below, we consider a project well documented if it has documentation that goes beyond a simple README: for example on the project website, on readthedocs.com, or in a docs folder on GitHub.
Automatically update applications
This is the primary problem we are trying to solve. It can be done by automatically updating the Docker container running the application. However, we would also like to be able to make changes to the Helm chart or the Kubernetes components that Helm generates, and have those applied to running clusters automatically. Therefore we split this requirement into three: auto update Docker containers, auto update Kubernetes components and auto update Helm charts.
Pull based

Because we don’t want to require our developers to have direct access to all Kubernetes clusters (people should be able to run OAS on their own hardware), we want updates to be pulled from our servers rather than pushed. To achieve this, the update mechanism preferably runs inside the cluster, but it could also run as a cron job on a Kubernetes host.
Installed with Helm
If the project runs in the cluster, we prefer to install it with Helm, like the rest of our software.
Backup & rollback
Especially with an automated system, it is important to be able to revert to an older version of the cluster. Most git-based solutions seem to provide this, because it is possible to go back to an old cluster state by applying a `git revert` commit on that repository.
Keep in mind, however, that applications that are updated by the update mechanism also need to support rollbacks in order for this feature to have the desired effect.
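To make the rollback idea concrete, here is a self-contained sketch using a throwaway local git repository. The manifest file name and image tags are invented for the demonstration, and no cluster is involved; a GitOps operator watching such a repository would re-apply the reverted state for you.

```shell
#!/bin/sh
# Sketch of GitOps-style rollback: cluster state lives in git, so reverting
# a commit restores the previous desired state. Everything below is local.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.org"   # throwaway identity for the demo
git config user.name "demo"

echo "image: nextcloud:16.0.1" > deployment.yaml    # known-good state
git add deployment.yaml
git commit -qm "nextcloud 16.0.1"

echo "image: nextcloud:16.0.2" > deployment.yaml    # a faulty upgrade lands
git commit -qam "upgrade to nextcloud 16.0.2"

# Roll back by reverting the upgrade commit; the working tree (and thus
# the desired cluster state) is back at the known-good version.
git revert --no-edit HEAD > /dev/null
cat deployment.yaml    # prints: image: nextcloud:16.0.1
```

Note that a `git revert` creates a new commit rather than rewriting history, which is exactly what a pull-based operator needs in order to notice the change.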
Contains a web UI
Some update mechanisms include a web UI that displays information about the updates. Some show logs of the update process; others only show when updates were applied and how long they took.
One reason to value a web UI is that when updates are pulled by the cluster, the CI process usually does not have this information, so it needs to be visible somewhere else.
Exposes a Prometheus endpoint
Alternatively, the update mechanism can expose a Prometheus endpoint, so we can show the update information in our Grafana dashboard.
This section lists the tools we found with a non-exhaustive internet search. As said before, we only list tools that are open source and work with Kubernetes. Each tool listing starts with a quote from the project’s README file on GitHub.
Environment Operator

> The purpose of Environment Operator is to provide a seamless application deployment capability for a given environment within Kubernetes. It can easily hook into your existing CI/CD pipeline capabilities by installing our Environment Operator Jenkins plugin to interface with environment operator and deploy your services.
In contrast to most of the alternatives that follow, Environment Operator exposes an API in your cluster which you can call to update applications. It also exposes a metrics API.

The most logical use case is to call the Environment Operator API endpoints from a CI script, rather than automatically from within the cluster, which is why we did not tick the “automatically update X” boxes for this mechanism. However, you could create Kubernetes Jobs to run upgrades regularly.
Flux

> Flux is a tool that automatically ensures that the state of a cluster matches the config in git. It uses an operator in the cluster to trigger deployments inside Kubernetes, which means you don’t need a separate CD tool. It monitors all relevant image repositories, detects new images, triggers deployments and updates the desired running configuration based on that (and a configurable policy).
Flux developers coined the term GitOps to define what Flux does: all changes to a cluster need to be applied to a git repository. Afterwards, Flux pulls these changes and automatically applies them to the cluster.
Flux runs in the cluster and can be installed with Helm. It includes an optional Helm operator, which enables Flux to update applications installed with Helm when new versions of their charts become available.
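As an illustration, Flux’s (v1) annotation syntax lets a workload opt in to automated image updates. The workload name and tag filter below are invented for the example; the rest of the Deployment spec is omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud          # example workload name, chosen for illustration
  annotations:
    # Ask Flux to automate image updates for this workload...
    flux.weave.works/automated: "true"
    # ...but only to semver-compatible releases of the 16.0 series.
    flux.weave.works/tag.nextcloud: semver:~16.0
spec:
  # (regular Deployment spec: selector, template, containers, ...)
```

Flux commits the image bump back to the git repository, so the repository remains the single source of truth.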
Imago

> imago looks for Kubernetes Deployments and DaemonSets configuration and update them to use the latest image sha256 digest from the docker repository.
Imago automatically updates the Docker containers in DaemonSets and Deployments to the latest container version. The rest of the cluster state remains untouched by the program, so if you want to install a new Helm chart, you have to do that yourself.
If you provide Imago with a Kubernetes config, it can run outside a cluster (for example in a CI pipeline). It is also possible to run it as a Job within a cluster.
Keel

> Keel is a tool for automating Kubernetes deployment updates. Keel is stateless, robust and lightweight.
Keel can manage the versions of containers running in Deployments, StatefulSets and DaemonSets. You configure this by placing annotations on your Kubernetes manifests. Keel runs in the cluster and regularly polls Docker registries to check for new versions. At first glance, Keel looks very well documented; the documentation includes many usage examples.
Keel does not automatically update Kubernetes components or Helm charts based on an external repository. There is an open issue about it, but it seems like the developers have chosen not to go down that road.
Keel exposes a pretty nice-looking web UI, but does not include a Prometheus endpoint.
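For illustration, Keel’s annotation-based configuration looks roughly like this. The deployment name and the chosen policy values are made up for the example; consult Keel’s documentation for the full policy syntax:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud                      # example workload name
  annotations:
    keel.sh/policy: minor              # allow minor and patch version bumps
    keel.sh/trigger: poll              # poll the registry instead of waiting for webhooks
    keel.sh/pollSchedule: "@every 30m" # how often to poll
spec:
  # (regular Deployment spec: selector, template, containers, ...)
```

Because the policy lives on each workload, different applications in the same cluster can follow different update strategies.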
kube-applier

> kube-applier is a service that enables continuous deployment of Kubernetes objects by applying declarative configuration files from a Git repository to a Kubernetes cluster.
Kube-applier takes a rather simple approach to pull-based updating. As they say in their README:
> At a specified interval, kube-applier performs a “full run”, issuing `kubectl apply` commands for all JSON and YAML files within the repo.

Furthermore, it contains a “Status UI” that shows how the run went. It doesn’t do much more than this, but simply running `kubectl apply` can already be very powerful by itself. Lastly, it exposes a Prometheus endpoint which shares how many files were applied and how long each apply command took.
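The “full run” behaviour is easy to approximate. The sketch below (with invented file names) finds every manifest in a directory and prints the `kubectl apply` command it would issue; actually running those commands would of course require a reachable cluster:

```shell
#!/bin/sh
# Rough approximation of kube-applier's "full run": walk a repo checkout
# and issue one `kubectl apply` per manifest. We only echo the commands
# here, because applying them for real requires a cluster.
set -e
repo_dir=$(mktemp -d)

# Two dummy manifests standing in for a real git checkout.
printf 'kind: ConfigMap\n' > "$repo_dir/configmap.yaml"
printf '{"kind": "Service"}\n' > "$repo_dir/service.json"

find "$repo_dir" -name '*.yaml' -o -name '*.yml' -o -name '*.json' | sort |
while read -r manifest; do
    echo "kubectl apply -f $manifest"
done
```

The simplicity is the point: there is no diffing or templating logic, just a periodic re-apply of whatever is in the repository.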
We compare all the features of the contenders in the table below.
| | Environment Operator | Flux | Imago | Keel | kube-applier |
|---|---|---|---|---|---|
| Well documented | ✔ | ✔ | ✘ | ✔ | ✘ 1 |
| Auto update Docker containers | ✘ | ✔ | ✔ | ✔ | ✔ |
| Auto update Kubernetes components | ✘ | ✔ | ✘ | ✘ | ✔ |
| Auto update Helm charts | ✘ | ✔ | ✘ | ✘ 2 | ✘ 3 |
| Pull based | ✔ | ✔ | ✔ 4 | ✔ | ✔ |
| Installed with Helm | ✘ | ✔ | ✘ | ✔ | ✘ |
| Backup & rollback | ✘ | ✔ 5 | ✘ 6 | ✘ | ✘ |
| Web UI | ✘ 7 | ✘ | ✘ | ✔ | ✔ |
| Prometheus endpoint | ✔ 8 | ✔ 9 | ✘ | ✘ | ✔ |
As you can see in the table, most of our requirements are fulfilled by Flux. Our next step is to install Flux and see if we like how it works. Alternatively, it seems like kube-applier’s simple but powerful approach could help us achieve our goals as well.
We will keep you updated on our experiences with Flux!
1. It has to be noted that this seems to be a rather small and simple project, so perhaps the project README is enough documentation. ↩
2. You can let Keel update Helm values when a container is updated, but it does not apply changes to Helm charts or values files. ↩
3. It is possible to “hack” this by running `helm template` and uploading the generated files to a git repository which gets tracked by kube-applier. ↩
4. Possible as a Kubernetes Job. ↩
5. Using Helm rollbacks. ↩
6. Backups & rollbacks aren’t mentioned in the documentation. ↩
8. Prometheus can be enabled in the Helm chart, but it is not 100% clear from the documentation whether anything more than the pod status is exposed to Prometheus. ↩