Custom Automation APIs
This demo describes one way to quickly expose custom automation APIs using compose.mk. But what does that even mean?
When it comes to automation, combining tools that live in different containers is part of the job, but at the end of the day you want a coherent vocabulary of verbs that operate on your domain.
Those verbs might be the classic build/clean/test, but even then you're often orchestrating across different containers. Your vocabulary might look more like build/deploy, or, if things get complicated enough, it might develop more object-oriented instructions, like queue.stop and cluster.create. That's an automation API in broad strokes, and compose.mk has the tools to build it.
A Concrete Use-Case
For our purposes here, we need a concrete use-case that's pretty small, so we'll talk about hacking Ansible.
The rest of this page is a tutorial about how to do that, but the involvement of ansible specifically isn't that important. If you're more interested in patterns, architectures, and theory, you might prefer to start here instead. If you want to see a similar demo with different base technology, there's a smaller bridge-building project in the justfile demo, and a larger project with jupyter in the notebooking demo.
This demo doesn't have many prerequisites, but it does use jq/jb to parse and create JSON. See the structured IO docs for more background.
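If you haven't seen those tools before, here's the division of labor in a nutshell: jb builds JSON from key=value pairs on the command line, and jq parses it back out. A quick sketch, assuming both tools are on your PATH and jb's default string-typed values:

$ jb timeout=60
{"timeout":"60"}
$ jb timeout=60 | jq -r .timeout
60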
Hijacking Ansible
Ansible is neat for lots of stuff, but it comes with a lot of baggage.
The Problem
Depending on how you're using it, you'll need a playbook, maybe a config file and an inventory file, not to mention a python stack, and you'll need the ansible version itself pinned.
It's a whole stack that has to be laid out in a certain way; command line invocations are long, tedious, and easy to get wrong. Simple stuff can be a lot of drama, especially if you're trying to make sure it's reproducible. And things get worse if you need to run it from different contexts, or if you're trying to keep your work small and self-contained.
As an exercise, let's look at what compose.mk can bring to address some of that.
Map Ansible Modules to Targets
Ansible's adhoc mode already gives some direct access to ansible-modules and lets you skip a playbook. For our first trick, we'll try to expose access to those modules. Here are the basic goals:
Goals
- No ansible-dependency on our host (of course we'll use docker)
- Full access to ansible-modules via the command line
- Full access to ansible-modules from the make world, e.g. as composable target prerequisites (see the sketch after this list)
- Ensure that output is always JSON, in case we want to use it downstream
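To make the third goal concrete before we build anything: once a parametric target like ansible.adhoc/<module> exists, it behaves like any other make target, so project targets can simply list it as a prerequisite. A minimal sketch, where the smoke-test name is made up for illustration (ping and setup are real ansible modules):

smoke-test: ansible.adhoc/ping ansible.adhoc/setup
	@echo "ansible modules ran as ordinary prerequisites"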
Typically, you would begin by sketching out an ansible container inside a docker-compose file, then use compose.import to autogenerate related targets. To keep things simple though, and to demonstrate compose.mk's raw docker support at the same time, we'll embed the ansible container directly.
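For reference, the external-file approach would look roughly like the sketch below. Treat it as a shape rather than copy-paste: the exact compose.import arguments vary between compose.mk versions, so check the current docs.

# Makefile (sketch): generate targets from an external compose file
include compose.mk
$(eval $(call compose.import, docker-compose.yml))

With that alternative noted, here's the fully embedded version: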
#!/usr/bin/env -S make -f
# Building a custom automation API with `compose.mk`.
# Here we build a wrapper around a containerized ansible,
# exposing a new, opinionated interface that is versioned,
# slim, stateless, and defaults to JSON IO.
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/ansible-adhoc.mk
include compose.mk
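# Force ansible's built-in JSON stdout callback: loading callback
# plugins for adhoc mode and selecting the json callback means every
# run below emits machine-readable output instead of ansible's usual
# human-oriented log format.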
export ANSIBLE_LOAD_CALLBACK_PLUGINS=true
export ANSIBLE_STDOUT_CALLBACK=json
# Look, it's a container that has ansible & ansible has support for docker.
# Could do this with an external Dockerfile or docker-compose file,
# but for simplicity it's embedded.
define Dockerfile.Ansible
FROM ${IMG_DEBIAN_BASE:-debian:bookworm-slim}
RUN apt-get update -qq && apt-get install -qq -y jq make bash procps
RUN apt-get install -qq -y ansible python3-pip
RUN pip3 install -q docker --break-system-packages
endef
# Let's build a bridge to expose adhoc ansible[1].
# This declares a top-level target `ansible.adhoc` that takes one parameter,
# which is the name of the ansible module to call. We mention the prereq
# target `Dockerfile.build/Ansible`, which ensures that the container
# described above is ready. This target runs on the host, so we don't
# actually have access to ansible here, and so this target dispatches
# to a private target defined at `self.ansible.adhoc/<module_name>`.
#
# [1] https://docs.ansible.com/ansible/latest/command_guide/intro_adhoc.html
ansible.adhoc/%:
	img=compose.mk:Ansible \
	${make} docker.dispatch/self.ansible.adhoc/${*}
# This target runs inside the ansible base container defined above,
# and calls ansible in a way that ensures JSON output. Optionally,
# you can pass additional ansible arguments in an environment variable.
# Once ansible returns output, we unpack the interesting part of
# the JSON data in the last step before we return it.
self.ansible.adhoc/%:
	ansible all -m${*} -i localhost, -c local $${ansible_args:-} \
	| ${jq} .plays[0].tasks[0].hosts.localhost
# Based on the work above, we've already got a fairly
# complete map from ansible-modules to make-targets.
# For example `make ansible.adhoc/ping` already works
# like you'd expect to call the ping-module [2].
#
# [2] https://docs.ansible.com/ansible/2.8/modules/ping_module.html
# But some modules take arguments. Let's set up a way to pass those in.
# Ansible accepts JSON or `key=val` style data for this using `--args`.
# The `ansible.adhoc` target is somewhat prepared for this, so we just
# need a way to set stuff in the `ansible_args` environment variable,
# and we need that var passed through to docker.
ansible.adhoc.pipe/%:
	export ansible_args="--args `${stream.stdin}`" \
	&& env=ansible_args ${make} ansible.adhoc/${*}
# Putting it all together, here's a simple new target for
# the verb 'list_images'. This returns the currently available
# docker images by using Ansible's `docker_image_info` module[3]
# and calling it with the `timeout=60` flag.
#
# [3] https://docs.ansible.com/ansible/2.8/modules/docker_image_info_module.html
list_images:
	${jb} timeout=60 \
	| ${make} ansible.adhoc.pipe/docker_image_info \
	| ${jq} -r '.images[].RepoTags[0]'
# Main entrypoint.
# Exercise a few possibilities for our new custom automation API.
__main__: Dockerfile.build/Ansible ansible.adhoc/ping list_images
The implementation is small, and the result is powerful. As mentioned above, make ansible.adhoc/ping works as expected to call the ping module, and list_images uses the pipe to call the docker_image_info module with arguments. Output looks like this:
$ ./demos/ansible-adhoc.mk ansible.adhoc/ping
③ ≣ docker.run // img=compose.mk:Ansible
{
"_ansible_no_log": false,
"action": "ping",
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
The argument-passing is somewhat awkward for humans, but since it supports JSON, it's already fairly usable from other automation. For common use-cases, you just add a friendlier alias, as is done with the list_images target above.
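As another illustration of piping arguments directly, something like the following should work against ansible's file module. This is a hypothetical invocation, not part of the demo, and it assumes jb on your PATH; path and state are standard file-module parameters. (Note that since this runs inside the container, the directory lands in the container's filesystem unless it's under the shared working directory.)

$ jb path=/tmp/scratch state=directory \
  | ./demos/ansible-adhoc.mk ansible.adhoc.pipe/file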
Closer Look
The eagle-eyed reader will have noticed that docker_image_info actually returns data, i.e. it is using the host's docker socket. That's because under the hood, docker.dispatch/<target> shares the socket and the working directory automatically. Effectively, the ansible container can orchestrate tasks on the host in many respects, even though the host doesn't have ansible.
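If you're wondering what that dispatch amounts to, it's roughly the docker invocation below. This is a simplified, hypothetical approximation; the real implementation handles more flags and bookkeeping:

# roughly: run the target inside the container, sharing socket and workdir
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":"$(pwd)" -w "$(pwd)" \
  compose.mk:Ansible \
  make -f demos/ansible-adhoc.mk self.ansible.adhoc/docker_image_info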
Discussion
The examples for this demo might feel contrived, but the technique itself is more interesting than the involvement of ansible. Then again, ansible does have lots of random cool utilities, even if the friction involved in actually using them can feel daunting. Our new targets make it easy to pull in a discrete chunk of ansible very quickly, without having to "ansible all the things".
As a more practical example, you can imagine raiding the kubernetes.core.k8s module for useful stuff:
# Set your KUBECONFIG first
k8s: ansible.adhoc.pipe/kubernetes.core.k8s
cluster.purge:
	${jb} namespace=testing kind=Deployment delete_all=true \
	| ${make} k8s
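Along the same lines, here's another hypothetical verb built on the same alias. The deploy.app name and manifest.yml path are placeholders; src, state, and namespace are standard kubernetes.core.k8s parameters:

deploy.app:
	${jb} src=manifest.yml state=present namespace=testing \
	| ${make} k8s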
Something like cluster.purge is what we were aiming at originally: a verby, high-level, domain-specific API, with tasks accessible individually or in groups from the CLI, or available to chain together programmatically as DAGs.
Where this gets really interesting is larger projects. Suppose, for example, that your cluster.create operation uses a terraform container, and you work at a place that mostly writes golang. Whereas adopting ansible previously wasn't a good cultural fit, once you strip away the detail and expose an interface as we've seen here, it doesn't feel like as much of a commitment to a whole ecosystem, and that makes it easier to combine tools.
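To make that concrete, here's a sketch of how such verbs might compose. Every target name besides cluster.purge is hypothetical, and each could dispatch to a different container:

# hypothetical composition: cluster.create might dispatch to a terraform
# container, test.integration to a golang one, and cluster.purge (above)
# already dispatches to ansible -- but the DAG is just make prerequisites.
e2e: cluster.create test.integration cluster.purge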
Although Makefile isn't anyone's favorite programming language, notice how the hypothetical cluster.purge target is very effective at reducing friction for new contributions. Focusing just on the custom automation API that we're exposing and can now compose, we're talking about one line in one file, and it's reasonably accessible to devops, engineers, data-science, etc. If you look at the cluster.purge operation as an action, all you need is the right kubeconfig; otherwise it doesn't care whether you're working on a local cluster, a production one, or running from CI. That's a powerful abstraction, since the list of stacks and topics that are now safe to ignore basically includes yaml, python, pip, docker, and ansible itself.
Combining compose.mk with your own toolkit and building a new vocabulary is where it really starts to shine, because compose.mk is a foundation to build on, and it doesn't make assumptions about your domain.