Library Overview
Because `make` has never really had a standard library, there's lots of pretty basic stuff that is conspicuously missing. This includes things like logging and colored output, but also the kind of online help that can list and inspect automation that's already been defined.
This page has a tutorial-style introduction to some of the more interesting capabilities. (If you're looking for something more like reference material, the full target API is generated from source.)
In library-mode, `compose.mk` is used as an include from your project Makefile (see the installation docs for more details). After including `compose.mk`, you'll have access to a library of make-targets and macros. In a few cases, `compose.mk` supports "scaffolded" targets that are dynamic, but that is documented elsewhere. Code style considerations for both the internal library and external extensions are documented somewhat here.
Most of what `compose.mk` offers is in the form of reusable static targets, which you can use directly from the command line, or as part of normal tasks/prereqs inside your project Makefile.
Module Layout
Here's an overview of how the `compose.mk` standard library is organized into modules: you can dive into the module- or target-level documentation anywhere that looks interesting. If you prefer a more guided tour, check out the rest of this page for a few highlights.
- `flux.*` targets: A tiny but powerful workflow/pipelining API, roughly comparable to something like declarative pipelines in Jenkins[^1], providing concurrency, retry operators, etc. You can also think of it as "target algebra" for `make`, similar to how bash provides process algebra.
- `tux.*` targets: A control-surface for a tmux-backed console geometry manager. See the embedded TUI docs for a higher-level overview and demos.
- `stream.*` targets: Support for working with streams, including newline/comma/space-delimited streams, common use cases with JSON, etc. Everything here is used with pipes and reads from stdin. It's not what you'd call "typed", but it reduces error-prone parsing and moves a little closer to structured data. See some examples here.
- `io.*` targets: Misc. utilities for printing, formatting, timers, files, etc. Related topics include file and stream previews.
- `docker.*` targets: A small interface for working with docker. (See also the docs for raw docker support.)
- `mk.*` targets: Meta-tooling for `make` itself. This enables help functions, signals and supervisors, some utilities for reflection, etc. See also the reflection docs, the packaging demo, etc.
- `compose.*` targets: A small interface for working with docker compose.
Structured IO
An annoying lack of structured output options for `make` is something it has in common with lots of other classic unix tools. And if you're working with Makefiles, then you're probably sad about the missing support for keyword-args.
Part of the explanation for this is just that a Makefile might be using any random tools with any kind of output format, and that's sort of inevitable for automation. The other part is just a missing approach to idioms and tools, and `compose.mk` can help with that.
The main benefit of the approach that `compose.mk` takes to structured data comes from exposing tools like `jq`, `jb`, and `yq` without explicitly depending on them. How does that work? Basically `compose.mk` will use local tools if they are available, and fall back to using dockerized versions of them if necessary. Other than that, `compose.mk` tries to stay out of the way of `jq` and `jb` and let them do what they do.
Structured IO: Basics
Before we get into code examples, let's introduce basic principles with the command line.
The `jb`[^3], `jq`[^4], and `yq`[^5] tools are so widely revered that they are wrapped as top-level entrypoints.
# Output JSON using `jb`
$ echo key=val | ./compose.mk jb
{"key":"val"}
# Parse JSON using `jq`
$ echo key=val | ./compose.mk jb | ./compose.mk jq -r .key
val
# Alternate approach, outputting JSON without input-pipes.
# Fine for simple stuff, not suitable for more complicated use cases.
$ ./compose.mk jb key=val foo=bar
{"key":"val"}
Wrapped top-level entrypoints are documented in more detail as part of tool mode.
Inside a project Makefile, we can always use direct calls to `compose.mk` as seen above, or recursive calls to `make`. This is often good enough, but if arguments to these tools get complicated then quoting can get sketchy, determining a path to `compose.mk` can be tricky, etc.
Luckily there are equivalent forms that are more idiomatic, still use local tools if available (falling back to docker if they are missing), and spawn fewer processes:
#!/usr/bin/env -S make -f
# Demonstrates target input/output using JSON.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/structured-io.mk
include compose.mk
# Emit/consume JSON with `compose.mk` macros.
# Uses native tools if they are available, falling back to docker.
emit:
	${jb} key=val

consume:
	${stream.stdin} | ${jq} .key

# Exercise the pipeline
__main__:
	${make} emit | ${make} consume
Now `make emit | make consume` is doing basic IO with JSON, and we haven't added any new dependencies. To get assignments for input variables in a way that's more automatic, see also the parsing-arguments section, which builds on this example.
Besides using JSON for structured output at the end of a pipeline, you can use it internally for target input as well. In terms of style, it is certainly unusual for targets to use pipes like this, but it's definitely worthwhile for, well, structure. (The alternative is environment variables, and then you have to worry about variable scopes, so statelessness here is a major advantage.)
Since you could also say that this goes part of the way towards emulating "keyword arguments", it also has the benefit of generalizing targets into something closer to "functions".
From here you'll probably want to provide user-feedback and status messages without wrecking pipe-safety, so see also the section on logging facilities. For a look at structured IO in action, you also might like to check out the workflow demos.
Structured IO: Stream Types & Pipes
The `compose.mk::stream` module, aka the `stream.*` target namespace, is full of targets that can help with common tasks on streams.
Such targets work fine in stand-alone mode. Inside a project Makefile, we can always use direct calls to `compose.mk`, or recursive calls to `make`, but most targets are also available as macros. Using macros looks tidy and is easier to type, but the main benefit is as an optimization, because it removes the need for extra processes.
As a really simple example, targets that have to deal with streams will begin with `cat /dev/stdin` to capture input and start a pipeline, but this can be rewritten as simply `${stream.stdin}` for readability.
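For instance, a target that previously captured input explicitly could be rewritten with the macros instead. (A minimal sketch; the `my.consumer` target name is hypothetical, and it reuses the `${stream.stdin}` and `${jq}` macros from the earlier demo.)

# Before: capture stdin explicitly to start the pipeline
my.consumer:
	cat /dev/stdin | ${jq} .key

# After: same thing, using the macro for readability
my.consumer:
	${stream.stdin} | ${jq} .key

The demo below exercises several more of the stream macros: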
#!/usr/bin/env -S make -f
# Demonstrating a few macros for stream io. Most of these are also available as targets.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/stdlib-stream-io.mk
include compose.mk
__main__:
	$(call log.test, preview results mid-pipe on stderr leaving stdout uninterrupted)
	ls README.md | ${stream.peek} > .tmp.ls.out && rm .tmp.ls.out
	ls README.md | ${stream.as.log}
	$(call log.test, preview markdown with glow )
	cat README.md | ${stream.markdown}
	cat README.md | ${stream.glow}
	$(call log.test, preview an img with chafa )
	cat docs/img/icon.png | ${stream.img}
	cat docs/img/icon.png | ${stream.chafa}
	$(call log.test, convert spaces to new-lines)
	echo foo bar | ${stream.space.to.nl}
	$(call log.test, convert new-lines to spaces)
	printf 'foo bar' | ${stream.nl.to.space}
	$(call log.test, convert new-lines to commas)
	printf 'foo\nbar' | ${stream.nl.to.comma}
	$(call log.test, add line numbers to input)
	printf 'foo\nbar' | ${stream.nl.enum}
	$(call log.test, syntax highlight yaml)
	printf 'foo: bar' | ${stream.yaml.pygmentize}
	$(call log.test, syntax highlight csv)
	printf 'foo, bar' | ${stream.csv.pygmentize}
	$(call log.test, fold long lines)
	ls docs/*.md.j2 | width=45 ${stream.fold}
	$(call log.test, chaining streams works the way you'd expect )
	cat README.md | ${stream.glow} | ${stream.indent}
The above isn't exhaustive; see the full stream API for other helpers.
Argument Parsing
Various approaches to argument-parsing are supported by `compose.mk`, including some support for keyword-arguments. This section is not necessarily related to CLI parsing (see the signals and supervisors docs for that). Argument-parsing here refers mainly to structured IO between targets, i.e. targets-as-functions.
There are 3 ways to pass data to targets, and `compose.mk` provides helpers for each:
- Use parametric targets
- Pass JSON on stdin/stdout
- Set the environment prior to the call, and read it during the invocation
Bind from JSON
Keyword-args for use with targets are easy to get with a combination of stdin/stdout plus `jq` and `jb`, as we've seen previously.
Pairing that with `bind.args.from_json`, you can unpack the input automatically, making it available to the rest of the target-body.
#!/usr/bin/env -S make -f
# Building on demos/structured-io.mk to demonstrate parsing structured arguments.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/kwarg-parsing.mk
include compose.mk
emit:
	@# Emit JSON with jb
	${jb} shape=triangle color=red

consume:
	@# Parse data from JSON input with `bind.args.from_json`.
	@# This binds a subset of JSON key/vals as bash variables,
	@# optionally providing defaults when keys are missing.
	$(call bind.args.from_json, shape color=blue name=default) \
	&& printf "shape=$${shape} color=$${color} name=$${name}\n"

# Equivalent to `make emit | make consume`
__main__: flux.pipeline/emit,consume
Above, `bind.args.from_json` takes a bind specification and uses it to import a subset of the JSON key/vals as shell variables. The bind-specification just describes that subset, optionally providing defaults when keys are missing. Anything without a default is considered "required". You can think of this idiom as a decorator, and in CMK-lang it actually looks like one.
$ ./demos/kwarg-parsing.mk
shape=triangle color=red name=default
Note that this sets shell variables, not make-variables, so names must be referenced with `$${double_dollars}`.
The fine print: Usage assumes simple keyword-style dictionaries, and this isn't really designed to handle quoted stuff, multilines, etc. If names already exist in the environment, this overwrites the values! This also isn't optimized for speed of execution, but it's easy to read/write and eliminates lots of boilerplate.
Bind from Environment
Using `bind.args.from_env`, you can set/load defaults without anticipating an input pipe at all.
#!/usr/bin/env -S make -f
# Building on demos/structured-io.mk to demonstrate parsing structured arguments.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/kwarg-parsing-2.mk
include compose.mk
# Or pass it from the command line..
export shape?=circle
__main__:
	$(call bind.args.from_env, shape=default color=blue) \
	&& printf "shape=$${shape} color=$${color}\n"
$ ./demos/kwarg-parsing-2.mk
shape=circle color=blue
Note that this sets shell variables, not make-variables, so names must be referenced with `$${double_dollars}`. Note also that this is a useful way to set defaults regardless of whether you expected the data to be available in the environment.
The fine print: The same caveats exist here as with `bind.args.from_json` in terms of shadowing existing variables, so you should pay attention to scope, and you probably want to pipeline your entire target-body using `.. && ..`.
Bind from Parameters
Also known as `bind.posargs`, this can only be used with parametric make-targets. By default, it sets convenience-names for dealing with comma-separated positional arguments, but arbitrary delimiters are supported too.
Note that this sets shell variables, not make-variables, so names must be referenced with `$${double_dollars}`. All of the following variables will be defined, but might be empty:
| Variable | Description |
|---|---|
| `_1st` | 1st positional argument |
| `_2nd` | 2nd positional argument |
| `_3rd` | 3rd positional argument |
| `_4th` | 4th positional argument |
| `_head` | Head of pos-args list |
| `_tail` | Tail of pos-args list |
#!/usr/bin/env -S make -f
# Demonstrating parsing positional arguments in parametric targets.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/parsing-parameters.mk
include compose.mk
__main__: \
	testing.comma_delimited/one,two,three \
	testing.slash_delimited/one/two/three

testing.slash_delimited/%:
	$(call bind.args.from_params, /) \
	&& printf "\n1st=$${_1st} 2nd=$${_2nd} 3rd=$${_3rd} 4th=$${_4th}\n" \
	&& printf "\nhead=$${_head} tail=$${_tail}\n"

testing.comma_delimited/%:
	$(call bind.args.from_params) \
	&& printf "\n1st=$${_1st} 2nd=$${_2nd} 3rd=$${_3rd} 4th=$${_4th}\n" \
	&& printf "\nhead=$${_head} tail=$${_tail}\n"
$ ./demos/parsing-parameters.mk
1st=one 2nd=two 3rd=three 4th=
head=one tail=two,three
1st=one 2nd=two 3rd=three 4th=
head=one tail=two/three
Functions with Keyword Args
Advanced Topic
This is an advanced topic and safe for new users to ignore!
You might have noticed that internal functions for stuff like import statements also support named arguments. Building your own functions that support kwargs looks like this:
#!/usr/bin/env -S make -f
# Demonstrating using `mk.unpack.kwargs`.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/kwarg-parsing-3.mk
include compose.mk
# Define `target_factory`, which builds new targets on demand.
# Supports keyword-arg style input using `mk.unpack.kwargs`,
# where `shape` is required, while `color` and `quoted` are optional
target_factory=$(eval $(call target_factory.src, ${1}))
define target_factory.src
$(call mk.unpack.kwargs, ${1}, shape)
$(call mk.unpack.kwargs, ${1}, color, default)
$(call mk.unpack.kwargs, ${1}, quoted, default data)
${kwargs_shape}:
echo "A ${kwargs_color} ${kwargs_shape} /${kwargs_quoted}/"
endef
# Create targets for several shapes
$(call target_factory, shape=triangle)
$(call target_factory, shape=square color=black)
$(call target_factory, shape=circle color=yellow quoted='single quotes only')
# Exercise the dynamically generated targets
__main__: triangle circle square
$ ./demos/kwarg-parsing-3.mk
A default triangle
A yellow circle
A black square
Assertion Support
Without real typing or great tools for static-analysis, `make` can benefit from a few ways to write more defensive code. Nothing here is very fancy, but it might save you from reimplementing stuff.
Environment Assertions
We've already met one method for asserting environment variables are set as part of the argument-parsing support, because anything that's mentioned without a default is considered required.
See also:
- mk.assert.env/<v1,v2..>, available as a target or a macro.
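For instance, a minimal sketch of guarding a target this way might look like the following. Here `mk.assert.env` is used as a prerequisite per the signature above, and the `deploy` target plus the `BUCKET`/`REGION` variable names are hypothetical:

#!/usr/bin/env -S make -f
include compose.mk

# Fails early if either variable is unset in the environment.
deploy: mk.assert.env/BUCKET,REGION
	echo "deploying to $${BUCKET} in $${REGION}"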
Filesystem Assertions
See also:
- `mk.require.tool/<tool-name>`, basically a wrapper on `which`, and available as a target or a macro.
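A similar sketch for tool checks, again with a hypothetical `release` target and `mk.require.tool` used as a prerequisite per the signature above:

#!/usr/bin/env -S make -f
include compose.mk

# Aborts with a clear error if `docker` is not on the PATH.
release: mk.require.tool/docker
	echo "docker is available, continuing"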
Logging Facilities
One goal for `compose.mk` is to enable program output that is clean, curated, and human-friendly on stderr, and machine-friendly on stdout. There are a few features available in support of this. In general `compose.mk` prefers make-targets over make-functions and macros, because targets are more reusable[^2]. Logging is a special case though, and might use all 3.
Logging Basics
- Logging always goes to stderr so that targets remain pipe-safe, including the `log.json` output.
- Log-messages have a prefix based on the call-depth, which can help visually parse complex traces.
See the `compose.mk` source for other information about formatting colors and glyphs. Tools for working with JSON are also discussed in more detail in the section on Structured IO.
Log Functions
Let's look at the `log.*` functions that are suitable for use from your project Makefile.
The following examples demonstrate basic logging, logging with formatting, and trace-logging.
#!/usr/bin/env -S make -f
# demos/logging.mk:
# Shows some of the compose.mk logging facilities.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/logging.mk
include compose.mk
# Runs all the examples
__main__: \
	logging_example.command logging_example.files \
	logging_example.basic logging_example.json \
	logging_example.formatting

logging_example.command:
	@# Send stdout for any command to the logging stream
	echo hello logging | ${stream.as.log}

logging_example.files:
	@# Since `io.preview.file` writes to stderr, this is technically logging
	${make} io.preview.file/demos/logging.mk
	$(call log.preview.file, demos/logging.mk)

logging_example.basic:
	@# Basic example, just write a message to log.
	$(call log, unquoted message that should go to log)
	$(call log.io, msg from io module)
	$(call log.flux, msg from flux module)
	$(call log.docker, msg from docker module)

logging_example.formatting:
	@# Formatting example, using some of the available ANSI color constants.
	$(call log, ${red}unquoted message ${sep} \
		${no_ansi}that should ${dim}go to ${bold}log)

logging_example.trace:
	@# Trace-logging example: this shows output only when the
	@# variables `trace` or `TRACE` are set to 1 in the environment
	$(call log.trace, unquoted message that should go to log)

logging_example.json:
	@# JSON-logging: decodes input with `jb`,
	@# then pretty-prints corresponding JSON to stderr.
	@# This is indented by default, but also expanded.
	@# Use `log.json.min` for minified version.
	$(call log.json.min, stage=Building)
	$(call log.json, stage=Building more=info anything=you_want)
File and Stream Previews
Whether you're working on build processes, data-pipelines, or infrastructure deployments, it's convenient to be able to "preview" things like files, streams, or intermediate representations. The previous section covers some simple cases.
The docs for compose.mk's built-in tool wrappers are also relevant here, so here's a look at some of the tests that exercise that code:
#!/usr/bin/env -S bash -x -euo pipefail
# tests/tool-wrappers.sh:
# Exercise some of the tool-wrappers that are part of stand-alone mode.
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: bash -x tests/tool-wrappers.sh
# Use `stream.chafa` to preview an image on the console with chafa
cat docs/img/icon.png | ./compose.mk stream.chafa
# Use `stream.img.preview` to preview an image (alternate to `stream.chafa`)
cat docs/img/icon.png | ./compose.mk stream.img.preview
# Preview image without streams
./compose.mk io.preview.img/docs/img/icon.png
# Preview multiple images
find docs/img/icon.png | ./compose.mk flux.each/io.preview.img
# Use `stream.glow` to preview markdown
cat README.md | ./compose.mk stream.glow
# Use `stream.markdown` to preview markdown (alternate to `stream.glow`)
cat README.md | ./compose.mk stream.markdown
# Use `stream.pygmentize` to syntax-highlight code
# This uses pygments, default style, and a best-guess lexer
cat Makefile | ./compose.mk stream.pygmentize
# Use `stream.pygmentize` to syntax-highlight code
# This uses pygments with an explicit lexer and default style
cat Makefile | lexer=Makefile ./compose.mk stream.pygmentize
# Use `stream.pygmentize` to syntax-highlight code
# This uses pygments with an explicit style and lexer
cat Makefile | style=monokai lexer=Makefile ./compose.mk stream.pygmentize
# Use `stream.json.pygmentize` to preview JSON (minified)
./compose.mk jb key=val | ./compose.mk stream.json.pygmentize
# Use `stream.json.pygmentize` to preview JSON (expanded)
./compose.mk jb key=val | ./compose.mk jq . | ./compose.mk stream.json.pygmentize
# Use `stream.peek` to preview data.
# Put this somewhere in the middle of a pipe
./compose.mk jb key=val | ./compose.mk stream.peek | ./compose.mk jq .
# Pull data from yaml with yq
echo 'one: two' | ./compose.mk yq .one
Visual Dividers
Note
If you want to make extensive use of these kinds of banners, you can organize around stages to get them implicitly.
Visually separating stages with some kind of output divider is nice to have for otherwise messy output. Here's an example of several different ways of using `io.print.banner` to get labeled or timestamped dividers shown between clean/build/test stages.
#!/usr/bin/env -S make -f
# demos/section-dividers.mk:
# Shows some of the compose.mk logging capabilities.
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
#
# USAGE: ./demos/section-dividers.mk clean build test
include compose.mk
__main__: clean build test
# Use `io.print.banner` implicitly as a prereq => Timestamped divider
clean: io.print.banner
	echo Cleaning stuff

# Call `io.print.banner` explicitly => Full control over divider label
build:
	label="Build Stage" ${make} io.print.banner
	echo Building stuff

# Use `io.print.banner` as a macro => Automatically set label as the current target's name
test:
	${io.print.banner}
	echo Testing stuff
	label="divider-using-gum" ${make} io.draw.banner
	label=test1 ${make} io.figlet
	label=test2 ${make} io.with.color/dim,io.figlet
	label=test3 ${make} io.with.color/cyan,io.figlet
	label=test4 ${make} io.with.color/red,io.figlet
In the `build` task above, you can see one way to call `io.print.banner` explicitly, which uses `${make}` as shorthand for "the right way to call `make` recursively". As an equivalent, you can always use an invocation like `./compose.mk io.print.banner` instead, and this works from anywhere.
Use Gum for Banners
Even though io.print.banner/<label> detects terminal width, it isn't fancy otherwise. A related target is io.draw.banner/<label>, which prints dividers with charmbracelet/gum.
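Since `io.draw.banner/<label>` is parametric, it can be invoked directly like the other banner targets; for example (the label text here is arbitrary):

$ ./compose.mk io.draw.banner/build-stage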
Banners with Figlet
You can also use a (dockerized) version of the `figlet` tool, like this:
# or, ./compose.mk io.figlet/banner
$ label=banner ./compose.mk io.figlet
██
░██
░██ ██████ ███████ ███████ █████ ██████
░██████ ░░░░░░██ ░░██░░░██░░██░░░██ ██░░░██░░██░░█
░██░░░██ ███████ ░██ ░██ ░██ ░██░███████ ░██ ░
░██ ░██ ██░░░░██ ░██ ░██ ░██ ░██░██░░░░ ░██
░██████ ░░████████ ███ ░██ ███ ░██░░██████░███
░░░░░ ░░░░░░░░ ░░░ ░░ ░░░ ░░ ░░░░░░ ░░░
To add color, you can use a context manager and any of the colors that `compose.mk` natively understands.
./compose.mk io.with.color/red,io.figlet/banner
For other TUI-components like this that fall short of a full-blown interface, see these docs for stand-alone mode. See also the full documentation for all `io.*` targets.
Workflow Support
The flux module, aka the `flux.*` target namespace, provides a tiny but powerful workflow/pipelining API, roughly comparable to something like declarative pipelines in Jenkins[^1], providing concurrency, retry operators, etc. You can also think of it as "target algebra" for `make`, similar to how bash provides process algebra.
What `flux.*` targets add is basic flow-control constructs and higher-level join / loop / map instructions over other make targets, taking inspiration from functional programming and threading libraries.
Alternatively: you can think of flux as a programming language where all primitives are the objects that `make` understands (e.g. targets, defines, and variables). Since every target in `make` is a DAG, you might say that task-DAGs are also primitives. Since `compose.import` maps containers onto targets, containers are primitives too. And since tux targets map targets onto TUI panes, UI elements are effectively primitives as well.
See also
For more background, see the style docs and specifically the targets-as-functions docs.
Workflow Rationale 🤔
Internally, the flux module was started in support of the embedded TUI, but it's also useful in general.
This section starts by introducing concepts from the CLI. Besides using workflows in your existing automation, you can also refactor existing bash using flux primitives without any project integration.
As an example of this, here are a few quick one-liners:
# Terminal friendly preview for all the project images
$ ls docs/img/*png | ./compose.mk flux.each/io.preview.img
# Preview for project source, with syntax highlighting
$ ls demos/*mk | ./compose.mk flux.each/io.preview.file
# Validate all the demo source code
$ ls demos/*mk | ./compose.mk flux.each/mk.validate
Above, `flux.each/<target>` takes a newline-separated stream as input, then runs the named (unary) target on each word.
One-liners aren't really the point though, and the main use-case for `flux` is scripting on top of it in your own projects. But why bother? Well, by lifting flow-control and iteration out of bash/make primitives and up into `flux`, you eliminate a major source of errors, readability issues, and downright brutal syntax, ending up with an elegant alternative that actually encourages good design.
To illustrate, consider this snippet of the mini-ETL demo that uses flux:
# Declare an ETL pipeline, using `flux.pipeline` to bind tasks together.
# Roughly equivalent to => `make extract | make transform | make load`
etl: flux.pipeline/extract,transform,load
# Declare a "safe" version of the pipeline that handles failure/cleanup
etl.safe: flux.try.except.finally/etl,etl.failed,etl.cleanup
See the demo for the full context, but notice how clean and declarative this is. Individual targets like extract/transform/load cannot be perfectly flat like this of course, and at some point you have to dip into bash. But when targets inevitably become complex, the sheer simplicity of `flux` syntax over make/bash idioms provides a healthy incentive to break things up, which in turn not only improves readability but also improves composability. Meanwhile, targets like `flux.each` and even `flux.apply` show a bias towards functional style, which tends to encourage clean loops, heavy usage of streams instead of temporary files or variables, etc.
Workflow Basics
Before we get into code examples, let's introduce basic principles with the command line.
# Introducing the `flux.ok` target:
$ ./compose.mk flux.ok
flux.ok // succeeding as requested!
# Introducing the `flux.fail` target:
$ ./compose.mk flux.fail
flux.fail // failing as requested!
make: *** [compose.mk:..: flux.fail] Error 1
Since using `compose.mk` is the same as using `make`, creating simple task DAGs works the way you expect:
# The 2nd target never runs, and the command fails.
$ ./compose.mk flux.fail flux.ok
flux.fail // failing as requested!
make: *** [compose.mk:1677: flux.fail] Error 1
Above, `flux.ok` and `flux.fail` are just make-targets that exit with success and failure respectively. Like `true` and `false` in shell, they take no arguments (i.e. they are nullary). You can think of `flux.ok` and `flux.fail` as placeholder targets for ones that you'll provide yourself.
Another target that's useful for testing is `flux.echo`, which accepts one argument.
# Introducing the `flux.echo` target:
$ ./compose.mk flux.echo/BANG
BANG
Dispatch
For a more advanced example, let's use `flux.ok` with scaffolded targets for container dispatch, and use loadf to avoid a project Makefile. The following command loads the "debian" container from `demos/data/docker-compose.yml`, then runs the `flux.ok` target inside that tool container.
$ ./compose.mk \
loadf demos/data/docker-compose.yml \
debian.dispatch/flux.ok
loadf // demos/data/docker-compose.yml // debian.dispatch/flux.ok
loadf // demos/data/docker-compose.yml // Validating services .. ok
loadf // demos/data/docker-compose.yml // Validated 3 services total
loadf // demos/data/docker-compose.yml // Starting downstream targets
flux.ok // succeeding as requested!
Flow Control
As seen in the scaffolded `<service_name>.dispatch` target from the last example, most targets in the `flux` module accept other targets as arguments.
Let's introduce `flux.try.except.finally` as a simple idiom for flow-control. This target accepts 3 comma-separated arguments: a `<try_target>`, a `<fail_handler>`, and an `<always_handler>`. Since we have `flux.ok` and `flux.fail` in hand, we'll fill in the blanks with those.
# Demonstrate `flux.try.except.finally`,
# passing in 3 comma-separated target-names as arguments.
$ ./compose.mk flux.try.except.finally/flux.fail,flux.ok,flux.ok
flux.try.except.finally // flux.fail,flux.ok,flux.ok
flux.fail // failing as requested!
make[1]: *** [compose.mk:..: flux.fail] Error 1
flux.ok // succeeding as requested!
flux.ok // succeeding as requested!
The semantics are what you'd expect from a try/except/finally construct, so the exit status is zero for success.
$ echo $?
0
You can perform similar tests for other flow-control, like `./compose.mk flux.if.then/flux.ok,flux.ok`.
Iteration, Pipelining, & Looping
Introducing `flux.each/<target>` using `flux.echo`:
$ printf 'one\ntwo' | ./compose.mk flux.each/flux.echo
one
two
Introducing flux.and/<target_list> and flux.or/<target_list>:
$ ./compose.mk flux.and/flux.ok,flux.ok
flux.ok // succeeding as requested!
flux.ok // succeeding as requested!
$ ./compose.mk flux.or/flux.fail,flux.ok
flux.fail // failing as requested!
make[1]: *** [..: flux.fail] Error 1
flux.ok // succeeding as requested!
Introducing flux.loop.until/<test_target>:
# Runs forever
$ ./compose.mk flux.loop.until/flux.fail
flux.loop.until // flux.fail (until success)
flux.loop.until // flux.fail (until success)
...
Workflow Demos
What's covered on this page isn't exhaustive; see also the full API, and the next section, which focuses on functional style. For examples of other idioms, see flux.loop/arg, flux.retry/arg, flux.timer/arg, flux.mux/arg, flux.timeout/arg, etc.
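As a quick taste, `flux.timer/<target>` wraps any other target with timing, in the same way it's used with `clean` in the dynamic-includes example later on:

# Time an arbitrary target; here the no-op `flux.ok`
$ ./compose.mk flux.timer/flux.ok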
Related Links
For more fleshed-out examples using workflows, see also:
Functional Style
We've already met several concepts borrowed from functional programming. Sometimes aliases are in place to make this clearer: for example, `flux.and` and `flux.or` are just different names for `flux.all` and `flux.any`. A version of `flux.each` that works directly with args instead of streaming input is called `flux.for.each`, aka `flux.map/<args>`.
Apply, Map, Starmap
Functional friends like apply, starmap, etc. are available:
$ ./compose.mk flux.map/flux.echo,hello,world
hello
world
$ ./compose.mk flux.apply/flux.echo,hello-world
hello-world
The `flux.starmap/<fxn,generator>` target takes 2 arguments, then applies the function to each value produced by the generator. Demonstrating this with only other builtins isn't very practical, but one of the things you can do is unpack `flux.map`.
# Itertools-ish or generators
$ ./compose.mk flux.starmap/flux.echo,flux.map/flux.echo,hello,world
hello
world
An analog to function composition can be achieved with flux.pipeline/<targets>, but the functions involved downstream (in our case stream.echo) need to accept pipes.
# Pipelines are function composition.
#
$ ./compose.mk flux.pipeline/flux.echo/hello,stream.echo
hello
Partial Functions
Partials are not very difficult to get with vanilla `make`, and that approach is generally better.
However, `__flux.partial__` adds another kind of support, focused on quick partials for binary functions. See the snippet below:
#!/usr/bin/env -S make -f
# demos/partials.mk:
# Demonstrates implementing something close to partial functions with compose.mk.
#
# Part of the `compose.mk` repo. This file runs as part of the test-suite.
# USAGE: ./demos/partials.mk
include compose.mk
adder/%:
	@# Unpacks the comma-separated arguments and adds them together.
	expr $(call mk.unpack.arg, 1) + $(call mk.unpack.arg, 2)

add7/%:
	@# Defining a partial function explicitly, nothing fancy
	${make} adder/7,${*}

# Dynamic target creation with __flux.partial__.
# This creates `add2` target dynamically
$(call __flux.partial__, add2, adder, 2)

__main__: adder/1,3 add7/3 add2/10
Other Operators
Sorry, reduce and fold operators are deliberately omitted as they don't often find use-cases in automation. If this fills you with sadness, there is the consolation prize that signals in compose.mk are somewhat like call/cc =P
Topics related to currying are explored a bit in the discussion of style for lack of a better place.
Reflection Support
Reflective programming is the ability of a process to examine, introspect, and modify its own structure and behavior.
Features like CLI help and automatically generated API docs require that `compose.mk` has at least some ability to parse its own contents.
And if we need that kind of ability, why not allow for parsing of other Makefiles? As another applied example, see the zero-config TUI documentation, which makes use of `mk.include/<fpath>`.
Most of the meta-tooling that's available is somewhere in the `mk.*` target namespace.
Makefile Metadata
For metadata parsing, you can use the `mk.parse/<fname>` target, and get back targets, target chains, target docs, etc.
| Related API |
|---|
| `mk.parse.block/<arg>` |
| `mk.parse.local` |
| `mk.parse.module.docs/<arg>` |
| `mk.parse.shallow/<arg>` |
| `mk.parse.targets` |
| `mk.parse.targets/<arg>` |
| `mk.parse/<arg>` |
For testing, there is a super lightweight `demos/no-include.mk` Makefile included in the repository for situations like this. As the name suggests, it doesn't actually include or use `compose.mk` in any way.
$ ./compose.mk mk.parse/demos/no-include.mk
{
"clean": {
"file": "demos/no-include.mk",
"lineno": 13,
"chain": [],
"type": "file",
"docs": [""],
"prereqs": []
},
"build": {
"file": "demos/no-include.mk",
"lineno": 14,
"chain": [],
"type": "file",
"docs": [""],
"prereqs": []
},
"test": {
"file": "demos/no-include.mk",
"lineno": 15,
"chain": [],
"type": "file",
"docs": [""],
"prereqs": []
}
}
Under the hood, the heavy lifting for Makefile-parsing uses a dockerized version of the `pynchon` tool.
Validation and Planning
Validation of external Makefiles is used as part of code-generation and compilation, but it's also exposed for general use.
$ ./compose.mk mk.validate/Makefile mk.validate/compose.mk
✱ mk.validate // Makefile .. ok
✱ mk.validate // compose.mk .. ok
Reading Defines
Reading `define ... endef` blocks from makefiles is important for several different `compose.mk` features that rely on embedded data. And crucially, the contents of those blocks must not be mangled, escaped, interpolated, etc.
$ ./compose.mk mk.def.read/Dockerfile.stream.pygmentize
FROM ${IMG_ALPINE_BASE:-alpine:3.21.2}
RUN apk add -q --update py3-pygments
This feature is the foundation that all sorts of other stuff is built on, including support for polyglots and the embedded TUI, as well as inlined Dockerfiles, etc.
Grabbing Symbols
Reading defines is the most common need, but you can also get the expanded value of arbitrary symbols using `mk.get`. This is sometimes useful for previewing what macros actually do, for example to show the implementation of `${stream.nl.to.comma}`:
$ ./compose.mk mk.get/stream.nl.to.comma
( cat /dev/stdin | awk 'BEGIN{ORS=","} {print}' | sed 's/,$//' )
For a more practical example, see this section of the compiler docs.
Pattern-Matching on Targets
There are various ways to enumerate available targets for the current runtime, either restricting it to targets defined in the local file, or adding targets from `include` as well.
One nice thing about this is the ability to easily dispatch all targets matching a given pattern.
#!/usr/bin/env -S make -f
# Demonstrates pattern matching on target-names.
# Part of the `compose.mk` repo, runs as part of the test-suite.
# USAGE: ./demos/flux.star.mk
include compose.mk
__main__: flux.star/test
test.1:; echo 1
test.2:; echo 2
test.3:; echo 3
no.match:; echo never called
See also the API docs for flux.star/<pattern>.
Dynamic Includes and Targets
To a certain extent, `compose.mk` can do runtime modification of `make` itself (or at least simulate it). Under the hood, this works by using code-generation to create a new Makefile just in time, and then turning execution over to it. Also somewhat related are the interpreter and the CMK compiler.
Create a new target dynamically. Here's an example that creates a new target on the fly, assigning it to an existing target that simply returns success. After the new target is defined, it can be used on the same command line.
$ ./compose.mk mk.let/foo:flux.ok foo
✱ mk.let // foo:flux.ok // Generating code .. ./.tmp.5xdPP3008
Φ flux.ok // succeeding as requested!
Dynamic includes. Suppose you want to script with `compose.mk`, but the project makefile does not include `compose.mk`. You can mix together automation APIs like this:
$ ./compose.mk mk.include/demos/no-include.mk flux.timer/clean
cleaning
Φ flux.timer // clean // done in 0.000000000s
What just happened? Pulling apart this command, note that `compose.mk` offers no `clean` target, and `demos/no-include.mk` has no `flux.timer` target.
The command above is similar to `make -f .. -f ..` but retains the compose.mk API and signal-handling. Or to put it another way, it executes the following Makefile just in time, then proxies commands to it:
include compose.mk
include demos/no-include.mk
Forks, Guests, and Payloads
Note
For bundling a lot of files and/or additional data, see also the packaging demo, which provides similar capabilities via self-extracting archives. The approach described here is more lightweight and tactical.
For a few specific use-cases, `compose.mk` has the ability to fork its own source code. Helpers related to this are in the `mk.fork` namespace. All unary targets require a filename, and nullary ones assume that input is on stdin.
| Related API |
|---|
| `mk.fork.guest` |
| `mk.fork.guest/<arg>` |
| `mk.fork.payload` |
| `mk.fork.payload/<arg>` |
| `mk.fork.services` |
| `mk.fork.services/<arg>` |
| `mk.fork/<arg>` |
In each case, targets return the source code that results from merging the given file-contents with the current source-context. Merges happen in specific sections, and each section has different semantics.
- Guests are expected to be Makefile source code. This is appended after the main body of source, similar to dynamic includes.
- Payloads are expected to be raw data, and this input is always wrapped with `define PAYLOAD .. endef`. This could be JSON, polyglot source code, or anything else.
- Services are expected to be docker-compose data. This input is always wrapped with `define SERVICES .. endef`, and afterwards `compose.import.string` is used, bringing all that service scaffolding into the root namespace.
You can use overrides in these sections together to bundle projects as tools, or to bundle extensions + core libraries + related containers into a single script.
The average use-case looks like this:
# Output new executable script to "./tool.mk"
$ bin=./tool.mk ./compose.mk mk.fork/Makefile,docker-compose.yml
# Result is executable.
# Also has access to project automation library,
# and all the tool containers have target scaffolding
$ ./tool.mk debian.ps
$ ./tool.mk debian.shell
$ ./tool.mk debian.dispatch/...
..
Above, `mk.fork` is just a helper that uses the `mk.fork.*` primitives, but you can also use them piecewise:
# Starting with `compose.mk` as a base, create
# a "forked" script that wraps the project makefile
$ cat Makefile | ./compose.mk mk.fork.guest > fork1.mk
$ chmod +x fork1.mk
# Use the fork to generate the next fork,
# this time setting default services from the docker-compose file
$ cat docker-compose.yml | ./fork1.mk mk.fork.services > fork2.mk
# Cleanup, rename result
$ rm fork1.mk
$ chmod +x fork2.mk
$ mv fork2.mk tool.mk
..
See the related demos/payload.mk tests for more details.
User Input / Interactivity
The stand-alone tool docs discuss usage of mk.select/<path_to_makefile> and compose.select/<path_to_compose_file> for interactively choosing targets or containers to run.
This section covers lower-level usage of the API and some idioms for capturing user data. See also the Advanced TUI docs, where we build an interface that composes these primitives.
Note: Making user-input primitives for `compose.mk` in a way that is portable (i.e. using `gum` directly if available and falling back to docker otherwise) and that handles ttys/subshells correctly is hard. It gets even more complicated because we want primitives to work reliably from the docker host, inside other docker containers, and to be compatible with things like `tmux`. Another wrinkle is that official gum images are distroless, so there are edge-cases where `gum spin` cannot use `sleep`, etc.
Despite all the challenges, related functionality in `compose.mk` works pretty well, but you can probably find ways to break it, especially if your IDE has a wonky terminal emulator. Anyway, another consideration is that blocking on user-input from automation is usually rude! So if you're building workflows around this, choose wisely, and please report bugs.
User Input Basics
One of the simplest ways you can get started scripting with user-input is to use a target from the `flux.select.*` group. Consider the example below:
#!/usr/bin/env -S make -f
# demos/tui/user-input.mk:
# Demonstrates user-input with a file selector
# USAGE: ./demos/tui/user-input.mk
include compose.mk
__main__:
	pattern='*.mk' dir=demos \
	${make} flux.select.file/mk.select
What does this do?
- First we set up the file-selector, telling it we want a file in a directory that matches a pattern.
- The selector collects user input, then passes it to the downstream target, in this case mk.select/<Makefile>.
Since `mk.select` accepts Makefile-paths as input and then opens a target chooser, the example above shows layered user-input being used as a runner, i.e. generating and running a call like `make -f <chosen_Makefile> <chosen_target>`. In other words, `make demo` becomes a demo chooser/runner in one line.
You can of course replace `mk.select` with any other unary target, creating your own if necessary.
Quick links to the related API are found below.
User Input Idioms
If you want to select something besides a file, a make target, etc., it is usually best to use io.selector/<choice_generator>,<choice_handler>, then define your own simple targets for the generator and handler.
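A minimal sketch of that pattern follows. The `fruit.*` target names are hypothetical, and this assumes the choice-generator emits candidate options on stdout while the choice-handler is a unary target that receives the selection, mirroring the file-selector example above:

#!/usr/bin/env -S make -f
include compose.mk

# Choice generator: emits the available options
fruit.options:
	@printf 'apple\nbanana\ncherry\n'

# Choice handler: unary target that receives whatever the user picked
fruit.chosen/%:
	@echo "you picked ${*}"

__main__: io.selector/fruit.options,fruit.chosen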
Using a dispatch'y approach like the ones mentioned above is tidy and usually good enough, but lower-level access is also possible. Another useful idiom is the choices..choice..chosen pattern with the `io.get.choice` macro. Consider the following example:
#!/usr/bin/env -S make -f
# demos/tui/user-input-2.mk:
# Demonstrates user-input with a file selector
# USAGE: ./demos/tui/user-input-2.mk
include compose.mk
__main__:
	header="Choose a moirai" \
	&& choices="clotho lachesis atropos" \
	&& ${io.get.choice} \
	&& echo "chosen=$${chosen}"
Above, `${io.get.choice}` expects `choices` (and optionally `header`) to be set, and it emits `chosen` for later use. In the middle is `gum choose`, running whether or not the local tool is available.
References
[^1]: See also the Jenkins official docs for pipeline syntax.
[^2]: Make-targets can be used as task-prerequisites as well as command-line entrypoints.
[^3]: https://github.com/h4l/json.bash
[^4]: https://github.com/jqlang/jq
[^5]: https://github.com/mikefarah/yq
[^6]: Yes, technically JSON is only "semi-structured" without additional schema commitments, etc.
Terminology
"Structured" is intentionally vague here, and there's a few ways it's applicable to basic features in
compose.mk
. Usually structured just means means JSON 6. There is also support for a concept of stream types, which isn't really typed, but at least lets you tell markdown content from image content, newline-delimited vs space-delimited, etc. And in terms of datastructures, there is basic support for stacks and stages, and basic support for keyword-args For things like sets datastructures, see also the (unrelated) GMSL library.