Automation With k8s.mk
k8s.mk can help to create project automation APIs that work with the tool containers described in k8s-tools.yml, and it includes many general helper targets for working with Kubernetes.
The focus is on simplifying a few categories of frequent challenges:
- Reusable implementations for common cluster automation tasks, like waiting for pods to get ready
- Context-management tasks (like setting the currently active namespace)
- Interactive debugging tasks (like shelling into a new or existing pod inside some namespace)
The full API is here, and the Cluster Lifecycle Demo includes a walk-through of using it from your own project automation.
By combining these tools with compose.mk's flux.* API you can describe workflows, and with the tux.* API you can send tasks, or groups of tasks, into panes on a TUI.
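For instance, a short sequence of k8s.mk targets can be chained into a fail-fast workflow. Here's a minimal sketch, assuming compose.mk's flux.* API offers an "and"-style combinator that runs a comma-separated list of targets and stops at the first failure; the flux.and name and argument layout below are assumptions, so check the compose.mk docs for the exact signature:

# Sketch only: flux.and and its comma-separated argument form are assumed,
# not confirmed, parts of the compose.mk flux.* API.
$ ./k8s.mk flux.and/k8s.namespace.wait/testing,k8s.shell/testing/mypod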
Automation APIs over Tool Containers
What is an automation API over a tool container anyway?
As an example, let's consider the kubectl.get target, which you might use like this:
# Usage: kubectl.get/<namespace>/<kind>/<name>/<filter>
$ KUBECONFIG=.. ./k8s.mk kubectl.get/argo-events/svc/webhook-eventsource-svc/.spec.clusterIP
# roughly equivalent to:
$ kubectl get ${kind} ${name} -n ${namespace} -o json | jq -r "${filter}"
The first command has no host requirements for kubectl or jq, but uses both via docker. Similarly, the helm.install target works as you'd expect but does not require helm (and it's a little more idempotent than using helm directly). Meanwhile k8s.mk k9s/<namespace> works like k9s --namespace does, but doesn't require k9s.
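To make the wrapper pattern concrete, here's roughly what a docker-backed target can look like. This is an illustrative sketch, not the actual k8s.mk source: the service name, the argument parsing, and the assumption that one container bundles both kubectl and jq are all mine.

# Illustrative sketch, not the real k8s.mk implementation. Splits the
# <namespace>/<kind>/<name>/<filter> path out of the target name, then runs
# kubectl and jq inside a tool container so neither is needed on the host.
# Assumes k8s-tools.yml defines a "kubectl" service that also ships jq.
kubectl.get/%:
	@namespace=$(word 1,$(subst /, ,$*)) \
	; kind=$(word 2,$(subst /, ,$*)) \
	; name=$(word 3,$(subst /, ,$*)) \
	; filter=$(word 4,$(subst /, ,$*)) \
	; docker compose -f k8s-tools.yml run --rm kubectl \
	    sh -c "kubectl get $${kind} $${name} -n $${namespace} -o json | jq -r '$${filter}'"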
Many of these targets are fairly simple wrappers, but just declaring them accomplishes several things at once.
The typical k8s.mk entrypoint is:
- CLI friendly, for interactive contexts, as above
- API friendly, for more programmatic use, as part of the prerequisites or the body of other project automation (see the sketch after this list)
- Workflow friendly, either as part of make's native DAG processing, or via flux.*
- Potentially a TUI element, via the embedded TUI and tux.*
- Context-agnostic, generally using tools directly if available and falling back to docker when necessary
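As an illustration of the "API friendly" point, a project Makefile can pull in k8s.mk and treat its targets as ordinary prerequisites, letting make's DAG handle ordering. In the sketch below, only the k8s.mk target signatures are taken from this page; the include path, the smoke-test target, and the service name are hypothetical:

# Hypothetical project Makefile, reusing k8s.mk targets as prerequisites.
include k8s.mk

# make runs the prerequisites left to right, then the recipe body.
smoke-test: k8s.kubens.create/testing k8s.namespace.wait/testing
	./k8s.mk kubectl.get/testing/svc/my-svc/.spec.clusterIP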
Some targets like k8s.shell or kubefwd.[start|stop|restart] are more composite than simple wrappers, and achieve more complex behaviour by orchestrating one or more commands across one or more containers. See also the ansible wrapper, which exposes a subset of ansible without all the overhead of inventories & config.
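For example, dropping into an interactive shell inside a pod is a single target invocation. The namespace and pod names below are illustrative, but the k8s.shell/<namespace>/<pod> signature matches the usage shown later on this page:

# Illustrative: open an interactive shell in the given pod via the tool
# containers, without requiring kubectl on the host.
$ ./k8s.mk k8s.shell/default/my-debug-pod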
If you want, you can always stream arbitrary commands or scripts into these containers more directly, via the Make/Compose bridge, or write your own targets that run inside those containers. But the point of k8s.mk is to ignore more of the low-level details more of the time, and start to compose things. For example, here's a one-liner that creates a namespace, adds a label to it, launches a pod there, and shells into it:
$ pod=`uuidgen` \
  && namespace=testing \
  && ./k8s.mk \
    k8s.kubens.create/${namespace} \
    k8s.namespace.label/${namespace}/mylabel/value \
    k8s.test_harness/${namespace}/${pod} \
    k8s.namespace.wait/${namespace} \
    k8s.shell/${namespace}/${pod}
But Why?
There are many reasons why you might want these capabilities if you're working with cluster-lifecycle automation. People tend to have strong opinions about this topic, and it's kind of a long story. The short version is this:
- Tool versioning, idempotent operations, & deterministic cluster bootstrapping are all hard problems, but not really the problems we want to be working on.
- IDE-plugins and desktop-distros that offer to manage Kubernetes are hard for developers to standardize on, and tend to resist automation.
- Project-local clusters are a much-neglected but increasingly important aspect of project testing and overall developer experience.
- Ansible/Terraform are great, but they have to be versioned themselves, and aren't necessarily a great fit for this type of problem.