Container Dispatch
Container dispatch is one of the core features of compose.mk, and involves binding existing tasks (i.e. make-targets or scripts) to existing docker containers. Targets might be custom (i.e. project-local), part of the compose.mk API, or orchestration that combines all of the above. Containers might be existing images, embedded images managed directly by compose.mk, or containers that come from internal or external docker-compose specs.
Later in this page, we'll get into more detail about how and when container dispatch is useful, along with a typical setup for a docker-compose.yml that supports it. For now, let's start with what the idiom looks like and how it's used.
Dispatch Basics
Dispatch comes in several flavors, but we'll start with the support for docker-compose backed tool-containers. (For more background, see the docs for target-scaffolding and using compose.import.)
To illustrate the idea, consider the following project Makefile:
#!/usr/bin/env -S make -f
# demos/container-dispatch.mk:
# Demonstrates the container dispatch idiom.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/container-dispatch.mk
include compose.mk
# Import all the services in the compose file,
# including the "debian" container, into the root namespace
$(call compose.import, file=demos/data/docker-compose.yml)
# Basic dispatch style: Run `self.demo` target in the debian container
__main__: debian.dispatch/self.demo
# Target that's actually used with dispatch. This runs inside the container.
# Using a prefix like `self` or `.` is just convention, but it helps to show
# this is considered "private", and not intended to be used from the top-level.
self.demo:
	source /etc/os-release && printf "$${PRETTY_NAME}\n"
	uname -n -v
Calling the top-level demo target looks something like this:
What just happened?
You could say that what we have so far is a kind of notation where the following are roughly equivalent:
# pithy invocation with compose.mk
$ ./demos/container-dispatch.mk

# the verbose alternative invocation
$ docker compose -f docker-compose.yml \
    run --entrypoint bash debian -c "make self.demo"
Maybe it's not obvious yet, but it's worth noticing that this is already much cleaner and more composable than the pure-bash alternative, where you get lots of disconnected automation scripts littered with docker run ... statements. In fact, this is pretty similar to agent { docker { .. } } in Jenkins Pipelines, but it's significantly more portable, and arguably more readable.
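To make the mechanics concrete, here's a plain-shell sketch of the expansion. The `dispatch_cmd` helper is hypothetical and illustrative only, not compose.mk's real internals; it just shows that a dispatch target name encodes everything needed to reconstruct the verbose command.

```shell
#!/bin/sh
# Illustrative only: build the `docker compose` command that a dispatch
# target like `debian.dispatch/self.demo` roughly stands in for.
dispatch_cmd() {
  # $1 = compose file, $2 = service name, $3 = make target
  printf 'docker compose -f %s run --entrypoint bash %s -c "make %s"\n' "$1" "$2" "$3"
}

dispatch_cmd docker-compose.yml debian self.demo
```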
Namespace-Style Dispatch
The previous example uses the simplest dispatch style, but we want to stay organized even when we're building lots of targets with lots of containers, possibly across several compose files. In such cases name-collisions might be inevitable, and you'll want fine-grained control over namespacing to get something closer to an "absolute path" for the dispatch mechanics. The next example is equivalent to the first, and begins to shed light on how the compose.import macro arguments can be used to create user-defined syntactic sugar.
#!/usr/bin/env -S make -f
# demos/container-dispatch-2.mk:
# Demonstrates the container dispatch idiom using "namespace" style invocation.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/container-dispatch-2.mk
include compose.mk
# Import the whole compose file (including "debian" container)
$(call compose.import, file=demos/data/docker-compose.yml namespace=▰)
# Namespaced dispatch style:
# This uses namespacing syntax that was configured as part
# of the `compose.import` call, and you can think of it as
# a more explicit "absolute path" to dispatch. To avoid
# collisions, you can use this to structure calls to
# multiple containers in multiple files.
__main__: ▰/debian/self.demo
self.demo:
	source /etc/os-release && printf "$${PRETTY_NAME}\n"
	uname -n -v
Similar to "private" targets with a self.* naming convention, using a blocky unicode symbol like ▰ isn't actually required, and of course you can call your namespace anything you want. There are a few reasons this weird-looking style is a good idea though, and the docs and demos will stick to it as a convenient naming convention.
- Sugar for <svc_name>.dispatch/<dispatch_target> should actually shorten it, i.e. never be longer than the syntax it's replacing.
- Sugar should be visually very easy to parse, and blocky unicode symbols qualify, while also making it very clear that this is a specific new kind of idiom.
This convention also has a more subtle property: it's hard to use from the command line, but easy to use from a file that already has several such instances, and sometimes this is a feature. If targets that actually run inside containers are frequently used from the top-level, that's a good indication that they're really project-level verbs, important enough to deserve an aliased entrypoint.
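Under the hood, the namespaced spelling is just string-level sugar over the dispatch form. A rough plain-shell sketch of the rewrite follows; the `desugar` helper is hypothetical, not part of compose.mk.

```shell
#!/bin/sh
# Illustrative only: rewrite the namespaced form `<ns>/<svc>/<target>`
# into the underlying `<svc>.dispatch/<target>` form.
desugar() {
  rest="${1#*/}"                      # drop the leading "<ns>/" prefix
  printf '%s.dispatch/%s\n' "${rest%%/*}" "${rest#*/}"
}

desugar '▰/debian/self.demo'
```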
Toolboxes & Multiple Compose Files
Putting it together, suppose your project involves so many tool containers that it's useful to separate them into different "toolboxes" by using different compose files. By using compose.import.as, we can import each file into its own namespace without cluttering the root.
#!/usr/bin/env -S make -f
# demos/container-dispatch-3.mk:
# Demonstrates the container dispatch idiom using "namespace" style invocation.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/container-dispatch-3.mk
include compose.mk
$(call compose.import.as, namespace=▰ \
file=demos/data/docker-compose.build-tools.yml)
$(call compose.import.as, namespace=🜹 \
file=demos/data/docker-compose.docs-tools.yml)
__main__: build docs
build: ▰/golang/self.code.gen
self.code.gen:
	echo "pretending to do stuff with golang"

docs: 🜹/latex/self.docs.gen
self.docs.gen:
	echo "pretending to do stuff with LaTeX"
This way of arranging things makes it clear that there are a few distinct toolboxes at play, and that while you can always call things like self.docs.gen directly for testing, the syntax is a reminder that your host isn't actually guaranteed to have all the tools involved.
One-to-Many Dispatch
Let's add another target to our project Makefile, demonstrating dispatching one target to multiple containers:
#!/usr/bin/env -S make -f
# demos/double-dispatch.mk:
# Demonstrates the container dispatch idiom using "namespace" style invocation.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/double-dispatch.mk
include compose.mk
$(call compose.import,file=demos/data/docker-compose.yml namespace=▰)
# User-facing top-level default target, with two dependencies
__main__: ▰/debian/self.demo ▰/alpine/self.demo
# Displays platform info to show where target is running.
# Since this target is intended to be private, we
# prefix "self" to indicate it should not run on the host.
self.demo:
	source /etc/os-release && printf "$${PRETTY_NAME}\n"
	uname -n -v
Again, the self prefix and the ▰/ namespace are just conventions; see the discussion in the previous section. The above looks pretty tidy though, and hopefully it helps to illustrate how the target / container / callback association works. Running ./demos/double-dispatch.mk looks like this:
Meanwhile, the equivalent-but-expanded pure-bash version below is getting cluttered. Eagle-eyed readers will note that even the verbose version is actually abbreviated, because it doesn't mention the volume-mount that's necessary for tool containers to get at project files.
# Clean, simple invocation with compose.mk
$ ./demos/double-dispatch.mk

# Verbose, fragile alternative
$ docker compose -f docker-compose.yml \
    run --entrypoint bash debian -c "make self.demo" \
  && docker compose -f docker-compose.yml \
    run --entrypoint bash alpine -c "make self.demo"
Container-Agnostic Dispatch
Sometimes you want a target or a script that always runs in a container, but can be safely called from the container or the host. See the tests below for usage information about the compose.bind.target
macro:
#!/usr/bin/env -S make -f
# Demonstrating idiom for container-agnostic target dispatch,
# where the target always runs from the container, but does not
# care where it is called from.
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# See also: http://robot-wranglers.github.io/compose.mk/container-dispatch
#
# USAGE: ./demos/bind-target-2.mk
include compose.mk
$(call compose.import, file=demos/data/docker-compose.yml)
# Decorator-style idiom.
# This binds together a container, a "public" target,
# and a "private" target. The public-target becomes the main interface;
# the "private" target runs inside the named container, and has the same
# name but uses a "." prefix.
my_target:; $(call compose.bind.target, debian)
self.my_target:
	echo hello container `hostname`
# Test that target still works, no matter where it's called from.
__main__:
	${make} my_target | grep "hello container"
	${make} debian.dispatch/my_target | grep "hello container"
To avoid name-collisions, use a prefix like "self." for the "private" target, as seen in the example above.
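Conceptually, a container-agnostic target has to make exactly one decision: am I already inside the right container? The sketch below simulates that branch in plain shell. The `agnostic_dispatch` helper and its marker-file argument are hypothetical, and compose.mk's actual detection logic may differ.

```shell
#!/bin/sh
# Illustrative only: run the private task in place when already inside
# the container, otherwise re-dispatch into it. The marker path is a
# parameter so both branches are easy to exercise; docker containers
# typically have a /.dockerenv file.
agnostic_dispatch() {
  svc="$1"; target="$2"; marker="${3:-/.dockerenv}"
  if [ -e "$marker" ]; then
    echo "inside container: make $target"
  else
    echo "on host: docker compose run --entrypoint bash $svc -c \"make $target\""
  fi
}

agnostic_dispatch debian self.my_target /nonexistent-marker
```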
Similarly, sometimes you want a script that always runs in a container, but can be safely called from either the container or the host. See the tests below for usage information about the compose.bind.script macro:
#!/usr/bin/env -S make -f
# Demonstrating idioms for container-agnostic script dispatch with
# compose-file backed tool containers. The target script always runs
# from the container, but does not care whether it's called from the host,
# or from inside the container.
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# See also: http://robot-wranglers.github.io/compose.mk/container-dispatch
#
# USAGE: ./demos/bind-script.mk
include compose.mk
$(call compose.import, file=demos/data/docker-compose.yml)
export var1=shared
export var2=data
# Decorator-style idiom.
# This binds together a script, a target, and a container,
# using the given target name as the public interface.
# Typically the define-name and target-name will be the same.
# It also shares a subset of the available environment variables.
script.sh:; $(call compose.bind.script, svc=alpine env='var1 var2' quiet=1)
define script.sh
	echo hello container `hostname`
	echo testing environment variables: ${var1} ${var2}
endef
# Test that target still works, no matter where it's called from.
__main__:
	${make} script.sh
	${make} debian.dispatch/script.sh | grep 'hello container'
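The `env='var1 var2'` argument above implies an env-subsetting step: only whitelisted variables reach the container. Here's a hypothetical plain-shell sketch of what that filtering might look like; the `env_flags` helper is illustrative, not compose.mk's actual implementation.

```shell
#!/bin/sh
# Illustrative only: turn a whitelist of variable names into `-e` flags,
# forwarding only the variables that are actually set in the caller's
# environment.
env_flags() {
  flags=""
  for name in "$@"; do
    # ${name+x} expands non-empty only when the variable is set
    if eval "[ -n \"\${$name+x}\" ]"; then
      flags="$flags -e $name"
    fi
  done
  printf '%s' "$flags"
}

export var1=shared var2=data
echo "docker compose run$(env_flags var1 var2) alpine"
```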
Other Kinds of Dispatch
So far we've only looked at using dispatch with imported tool containers that come from docker-compose files. Script and target dispatch comes in other flavors too, from scripts or from the CLI, and works with:
- External compose files (as seen above) 🍦
- Inlined compose files 🍰
- Stock-images 🍭
- Inlined containers 🍓
- The docker host itself 🍒
Whether you're using compose.mk as a library, a stand-alone tool, or a framework, this simple pattern for dispatching targets in containers is one of the main features, and it's surprisingly powerful. From here, if you just want to get started, you might want to look at the more detailed docs for compose.import arguments, or the docker-compose.yml boilerplate that supports dispatch. If you're still wondering why you should care, check out the commentary at the end of this page. If you're using dispatch idioms a lot and are interested in more advanced topics, you might like to see how CMK lang cleans up the syntax.
Compose File Boilerplate
In practice, the containers you use might already be compatible with the dispatch idiom, but if they are slim or you are starting from scratch, perhaps not. Below you can see a typical example of a compose file that supports dispatch, leaning more towards pedantic than minimal. See the integration docs for more detailed discussion.
##
# demos/data/docker-compose.yml:
# Typical compose file with container-bases that are preconfigured to
# work with target dispatch, default workspaces, volume sharing for
# tools, yaml inheritance, etc.
services:
  debian: &base
    hostname: debian
    entrypoint: bash
    working_dir: /workspace
    volumes:
      # optional, but required for docker-in-docker
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
      # standard, so tools can access project files
      - ${PWD}:/workspace
    build:
      context: .
      dockerfile_inline: |
        FROM ${IMG_DEBIAN_BASE:-debian:bookworm}
        RUN apt-get update -qq && apt-get install -qq -y make procps
  ubuntu:
    <<: *base
    hostname: ubuntu
    build:
      context: .
      dockerfile_inline: |
        FROM ${IMG_UBUNTU_BASE:-ubuntu:noble}
        RUN apt-get update -qq && apt-get install -y -qq make procps
  alpine:
    <<: *base
    hostname: alpine
    build:
      context: .
      dockerfile_inline: |
        FROM ${IMG_ALPINE_BASE:-alpine:3.21.2}
        RUN apk add -q --update --no-cache coreutils build-base bash procps-ng
But Why? 🤔
Having introduced what the dispatch idiom looks like and what it does, let's talk briefly about what it means.
This is probably a jailbreak moment for your automation! Think about it: most of the approaches we use to decouple tasks from their execution environment actually cause our tasks to be tightly coupled to some platform instead. That's a frustrating trade-off and a huge barrier to entry if you're just trying to write and run code. And it's not just CI/CD platforms like Jenkins and GitHub; it also happens with platforms like Airflow, ArgoWF, and notebooks. These platforms are all great at what they do, but we shouldn't need them every time we just want to decouple tasks from runtimes or express DAGs. And if you do want to adopt a platform later down the line, your platform will be as happy to run make as your laptop is.
Another thing we can optionally decouple from is excessive use of external repos and registries for small customizations. It's always a pain to get pulled into a loop like: visit a separate tool-container repository, make a change and ship an update, go back to the original repository, pull the tool down, see that it still needs changes, and repeat.
Besides decoupling, arranging tasks this way tends to promote reusability and composition. As a rule, the more your automation is isolated into weird silos like notebooks and Jenkinsfiles, the more unruly that automation becomes. Is your automation growing wild copy-pasta, turning into an ugly monolithic heap that only runs top-to-bottom and has lots of side-effects? It's much nicer to have discrete tasks that you can still run individually, yet quickly compose into DAGs of tasks, or DAGs of DAGs. When you're freer from platforms and actually iterating on code in a read/write/run cycle more often, changes are easier and testing happens more continuously.