Makeception


This is a pretty extreme example of what compose.mk can do to communicate across matrioshka layers. Consider the following code, which we might call "makeception". It's well-commented, but it takes a minute to wrap your head around, so let's prime the intuition first.

Road Map

In this example, we'll use compose.mk to embed two container definitions, then build and steer them. What's unusual about this example is that even the build process itself is encapsulated inside the same file, and rather than jamming bash into the Dockerfile, it runs as a normal task/target. We'll also see some practical examples of how to communicate across matrioshka layers, using stage-stacks.

The Code


As usual, it's harder to explain what it does than to actually build it. 😵‍💫

#!/usr/bin/env -S make -f
# demos/matrioshka.mk: 
#   To demonstrate matrioshka-language features, we use `compose.mk`
#   to embed a compose file whose services embed a docker container
#   description, and the container build and run stages both defer
#   back to this same Makefile. This also demonstrates passing data
#   between container layers using stage-stacks[1].
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.  
# See the main docs: https://robot-wranglers.github.io/compose.mk/matrioshka
# See also: https://robot-wranglers.github.io/compose.mk/stages
#
# USAGE: ./demos/matrioshka.mk


include compose.mk
export BUILD_TARGET?=none

# Look it's an embedded compose file.  
# This defines services `alice` & `bob`.
define inlined.services
services:
  alice: &base
    hostname: alice
    build:
      context: .
      dockerfile_inline: |
        FROM docker:dind
        RUN apk add -q --update --no-cache \
          coreutils build-base bash procps-ng
        COPY . /app
        RUN cd /app && make -f ${MAKEFILE} ${BUILD_TARGET}
    working_dir: /workspace
    environment:
      DOCKER_HOST_WORKSPACE: ${DOCKER_HOST_WORKSPACE:-${PWD}}
    volumes:
      - ${PWD}:/workspace
      - ${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock
  bob:
    <<: *base
    hostname: bob
endef 

# After the inline exists, we can call `compose.import.string`,
# autogenerating target scaffolding for each service.
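# (Here that means targets like `alice.build` and 
# `alice.dispatch/<target>`, both used later in this file.)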
$(call compose.import.string, def=inlined.services)

# Top-level entrypoint: kick off the build, dispatch, and summary stages in order.
__main__: demo.build demo.dispatch demo.summary

# We'll use the scaffolded `<service_name>.build` targets, which 
# were created by `compose.import.string` above, to trigger building
# each compose service explicitly.  The embedded compose file is 
# structured so that each service is set up by *also* deferring to 
# this Makefile, but in each case the target is decided just in time.
demo.build: flux.stage.enter/BUILD
    BUILD_TARGET=alice.provision ${make} alice.build 
    BUILD_TARGET=bob.provision ${make} bob.build 

# Having different targets setting up alice and bob means that 
# alice and bob can have very different contents.  Again, the 
# `*.provision` targets run during container build time, not on
# the docker host! This allows us to keep `dockerfile_inline` 
# very small and generic.  Just to differentiate alice and bob, 
# we add different tools.
#
# It's nice that we don't have to prefix every provisioning line with
# "RUN ...", but the main benefit to deferring to `make` for setup 
# like this is the ability to do more complex scripting without hassle.  
alice.provision:
    apk add -q figlet

bob.provision:
    apk add -q ack

# That's it for setup.  Now let's test what's been made, just to prove 
# that the provisioning steps worked and the containers are really 
# differentiated.  Here we use the autogenerated scaffolding that 
# was already created by `compose.import.string` to dispatch the 
# same `internal_task` target to run inside alice and bob.
demo.dispatch:  flux.stage.enter/RUN \
    alice.dispatch/internal_task \
    bob.dispatch/internal_task

# The main dispatched target.  Like the provision targets, 
# this also runs inside containers, but it's generic enough 
# to work with alice or bob.  Unlike the provision targets,
# this is part of dispatch, i.e. runs after build is completed.
internal_task:
    echo "Running inside `hostname`"
    echo "Special tools available: `which figlet || which ack`"
    echo "Pushing `hostname` event to RUN stage"
    ${jb} host=`hostname` timestamp="`date`" | ${make} flux.stage.push/RUN

# The summary runs at the end to show a sort of trace.
# We explicitly exit the BUILD/RUN stages that were 
# opened elsewhere, which also displays the stack contents.
demo.summary: flux.stage.exit/BUILD flux.stage.exit/RUN

The Output


Abbreviated output looks something like this:

$ ./demos/matrioshka.mk
╔═════════════════════════════════════════════════════╗
║                         RUN                         ║
╚═════════════════════════════════════════════════════╝
Running inside alice
Special tools available: /usr/bin/figlet
Pushing alice event to RUN stage
...
Running inside bob
Special tools available: /usr/bin/ack
Pushing bob event to RUN stage
...
[
  { "stage.entered": "Thu 27 Feb 2025 06:20:17 PM PST" },
  { "host": "alice", "timestamp": "..." },
  { "host": "bob", "timestamp": "..."}
]

Discussion


The "makeception" demo above is mainly just to demonstrate control, but since we're here.. is this actually practical? Well, maybe. To put this in perspective: It's pretty normal to have a Dockerfile with 100-300 lines of fragile "RUN"-prefixed madness that only runs top-to-bottom and has tons of repetitious incantations because code re-use isn't possible. And of course.. people put up with a level of inlining already with bash-inside-Dockerfile because they don't want a 3 line Dockerfile and an external shell script just so they get can get a (bad) version of "reusable subroutines" in plain bash! Looked at this way-- maybe the "really extreme" example of the matrioshka approach here actually isn't that bad? It's also worth pointing out that this is structured enough to be easily extended, like adding tests for the container that's produced, or adding repository-push tasks if and only if tests finish successfully, etc.

The main trouble with this example is that layer-caching makes it impractical for component-oriented usage (say, as a widget in the TUI). But that isn't necessarily a problem if you're using it to automate container build/test/push workflows.

Closer Look

The eagle-eyed reader will also notice that this uses docker:dind as a base image, which is ultimately how JSON can move via flux.stage.push, even though tools like jq and jb are not actually installed. If you can't use a base with docker in it, then you'll have to explicitly install jq and jb to use the stack.
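To make that concrete: if alice and bob were based on plain alpine instead of docker:dind, the embedded build section might need an extra package or two, along these lines. This is only a sketch; `jq` is available as an apk package, but `jb` would have to be installed by whatever method its own documentation describes:

    build:
      context: .
      dockerfile_inline: |
        FROM alpine:3.19
        RUN apk add -q --no-cache coreutils bash procps-ng jq
        # `jb` is not packaged in apk; install it here following
        # its upstream documentation before using the stack.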

Going further, one might also wonder whether the stage-stack could be used for bidirectional communication during the build phase as well. This is actually possible, but it's left as an exercise for the reader. (As a hint: currently this requires being "outside" of the finished build phase, then faking it with a `docker cp` call that concatenates the internal and external stacks. There's an open issue about allowing bind-mounts during build though, which would remove the special case and give stack access to both the build and run stages for the container.)
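For the curious, a very rough sketch of that workaround follows. It assumes the in-container stack was written to a known file such as /tmp/RUN.stack.json, and that IMG names the built image; both are hypothetical placeholders rather than anything compose.mk guarantees:

# Hypothetical target: copy a stack file out of the built image 
# and merge it with one on the host.  IMG and both stack-file 
# paths are placeholders, not guaranteed by compose.mk.
demo.merge_stacks:
	cid=`docker create $${IMG}` \
	&& docker cp $$cid:/tmp/RUN.stack.json ./internal.stack.json \
	&& docker rm $$cid
	jq -s 'add' ./internal.stack.json ./external.stack.json > merged.stack.json

The `jq -s 'add'` step simply concatenates the two JSON arrays into one, which approximates what a shared stack would have recorded.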