Motivation & Design Philosophy


Intro


Most of the documentation deals with how, but if you want to know more about why, then the rest of this page is for you.

This is basically a collection of essays rather than an FAQ, and every section could probably be expanded into its own blog post, but it might address what you're interested in. Much of the content here is thoughts on the landscape of automation challenges in general. The rest, anticipating objections, leans toward evangelism and advocacy.

You can take or leave any of the major components, but the most controversial design choices for compose.mk are probably the use of make itself, followed by docker compose and the use of DIND and DooD, aka docker-in-docker and docker-out-of-docker. Much ink (and maybe some blood) has been spilled both in advocacy for and in protest against these components. You can probably start a fight today over coffee in the break room or on the orange site, and you'll get the usual references.1

Is this even for real?


Some people might be skeptical about getting involved with the biggest, baddest, and most highly-powered mutant Makefile the world has ever seen, so let's be super clear about a few things:

The source code for compose.mk weighs in at an eye-popping 2500+ lines of pure Makefile, and many features use on-the-fly code-generation. There are other eyebrow raising implementation details, including use (abuse?) of POSIX signals and lots of white-knuckled, wild-eyed metaprogramming. One might even say that the whole implementation is a very long list of terrible hacks that can only be redeemed by a but look what you can do with this! moment.

Well, pushing the boundaries of a simple toolchain is part of the goal. At the same time, this isn't some esolang flex like a fractran compiler or regex-chess, or at least it isn't only that.

Originally, compose.mk started as an experiment in basic docker-support for make, then became an experiment in dependency minimalism in general, but it just never hit a wall, and it seems like it ended up in a very interesting place.

Part of what compose.mk adds is various primitives and idioms that make make feel like a Real Language™. And just in case the library offerings don't help with that.. you can always opt in to using a transpiler and even redefine the syntax for your use-case.

You might think that minimalism can't be compatible with aggressively reusing available tools. But in the limit, scripting with compose.mk feels like a genuinely new way to write glue code, one that pulls in foreign tools and code as first-class objects in a way that's organized rather than ad hoc.

Speaking of "real languages", compose.mk can actually help you write make-targets in foreign languages too.. but that's just another example of the kind of thing that might make people nervous! Opinions will differ about whether this kind of sorcery is really a good idea. Maybe if you see something that offends your sensibilities let's agree to call it a prototyping workflow? ;)

Cue the obligatory disclaimer

With great power comes great responsibility, and like most things you'll have to exercise some judgement. Some of the demos are pretty crazy, and perhaps no one will thank you for going wild with ill-conceived applications of mad science techniques inside real projects.

At least in part, compose.mk is an experiment and an art project5. But sometimes a gruesome hack should be weighed against the utility it provides, and evaluated in terms of simplicity, stability, and portability rather than purist or traditional aesthetics. Just don't be antisocial by embedding a gnarly and frequently-changing 300-line Dockerfile into your team's Makefile, and you'll probably be fine.

With that stern warning out of the way.. Yup, compose.mk is for real. At this point it really is esolang-inspired in some ways, but it also solves real problems, real fast. And beyond using it to solve traditional problems in better ways, there's a good chance you'll want to use it for completely different kinds of use-cases.

A related question is: should compose.mk really be implemented in make? Well, the standard library kind of needs to be, or else it's not a standard library. But this isn't strictly necessary for matrioshka automata, polyglots, and CMK-lang. One way to look at this stuff is that it's a proof of concept that's also an executable specification: it's lightweight, portable by default, and actually useful. A rewrite of core features in another language is tempting.. but it does raise the question of whether using a docker API would be more or less stable, more or fewer lines of code, and more readable or writable than just leaning on the CLI interface.

Convincing the Anxious


How about some redeeming qualities to wave away those eyebrow-raising details?

Line Counts:
The compose.mk line count is scary.. but code-to-comment ratios throughout are ~ 1:1 without even counting the main project documentation. Makefile as a language is often scary too.. but compose.mk deliberately avoids advanced features whenever possible. Working with compose.mk is very different from working on it, and should be pretty friendly for the novice as well as the wizard.
Less Shell Scripting:
On a related note, the compose.mk backend necessarily uses lots of shell-scripting, but it does that in order to build up enough critical mass to avoid shell-scripting afterwards. The general effect of using workflows with flux is to push whatever bash you still need to write away from awkward procedural stuff with conditionals and loops, and towards a more declarative style of just doing things with tools.
Self Hosting: (docs)
The docs, demos, and test-suites are pretty extensive, and as a project, compose.mk is mature enough to be self-hosting, handling generation of its own docs and running its own test-suite. The suite itself also confirms that docker-in-docker usage of tools won't make CI/CD lose its mind.
Self Hosting: (code)
As a tool/library, compose.mk is largely written in compose.mk rather than what you'd really call Makefile or shell. For example, the workflow support powers the TUI generator. (It's not possible to get coverage data from the test suite, but dog-fooding like this should be a confidence builder.)
Change Freeze:
The API explicitly aims at frozen status: once compose.mk is both internally consistent and general enough to extend, the project is finished. Bugfixes if necessary, but no "forever" development, and tracking upstream is optional.

A Problem Statement


So, automation, right? How should we even do it? People tend to have strong opinions about this topic, but here are some observations that probably aren't too controversial:

Orchestration between or across tool containers is usually awkward.
This is a challenge that needs some structure imposed. You can get that structure in lots of ways, but it's always frustrating to see your work locked into esoteric JenkinsFile / GitHubAction blobs where things get complicated to describe, run, or read. Project automation ideally needs to run smoothly both inside and outside of CI/CD.
If running commands with different containers is easy,
Then there is less need to try and get everything into one tool container. The omnibus approach is usually time-consuming, and can be pretty hard if very different base images are involved.
Tool containers are most useful when you can easily dispatch commands to them,
Especially without a long, fragile CLI invocation. A compose file specifying volumes and such helps a lot, but no one wants docker run or docker compose run littered all over their scripts for builds and orchestration. Avoiding friction here seems like a small thing, but it matters a lot.
Tool containers don't necessarily change that often,
But there are maybe a lot of them, and if you need a separate git repository to describe / build / ship each of them to a registry before you can use them elsewhere, it's painful. Prototyping and debugging get awkward, and bootstrapping new projects is especially painful when starting anything is blocked on having new git repos, new docker-registry repos, and new CI/CD for the same.
Plain shell scripts won't take you far.
Everyone knows that readability and maintainability aren't great, but that's just the start. To name a few related issues: option/argument parsing, multiple entrypoints, code-reuse, partial execution for partial updates, dry-runs, parallelism, and other features you're going to need just aren't simple to get.
CM tools like Ansible can fix some things, but bring their own problems.
A few examples of those problems are: Significant setup, significant dependencies, ongoing upstream changes, and the fact that many people cannot read or write it. CM tools aren't easy to onboard to or simple to extend, i.e. they are most useful only if you're already using them, and if they already support exactly what you're trying to do.
The general need for "glue" code and glue languages is getting larger all the time,
But the niche is still woefully underserved. This isn't just about devops either, because things like notebooking and data-pipelines stand to benefit.

The Happy Medium


Much more controversially: make is the happy medium here, despite the haters, the purists, and the skeptics who argue that make is not a task-runner, and despite the often-misunderstood recursive-make-harmful paper.

That's because make is just too good to ignore, and there are several major benefits:

  1. It is old but it is everywhere, it's expressive but has relatively few core concepts, and it's fast.
  2. It's the lingua franca for engineers, devops, and data-science, probably because easy things stay easy and advanced things are still possible.
  3. It's the lingua franca for javascript, python, or golang enthusiasts who need to be able to somehow work together.
  4. Most importantly: make is probably the least likely thing in your toolkit to ever be affected by externalities like pypi or npm breakage, package updates, or changing operating systems completely; make won't care if you run it from your laptop, or Github Actions, or Airflow.

If you need something outside of docker that you want stability & ubiquity from, it's hard to find a better choice. As a bonus, most likely tab-completion for make-targets already works out of the box with your OS and shell, and to a certain extent, make can even support plan/apply workflows (via --dry-run) and parallel execution (via --jobs).

And if you're that person who hates tabs, you'll be excited to learn that you can use .RECIPEPREFIX, and probably surprised that support for using spaces instead has been available for ~15 years now. See this discussion for more details.
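
For the skeptical, here's a minimal sketch of what that looks like in vanilla GNU make (3.82+), with no compose.mk involved; the target names are made up for illustration:

    # GNU make 3.82+ lets you pick your own recipe prefix instead of tabs.
    .RECIPEPREFIX := >

    .PHONY: plan build
    plan:
    > echo "planning"

    build: plan
    > echo "building"

    # `make --dry-run build` previews recipes without executing them, and
    # `make --jobs 4` runs independent prerequisites in parallel.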

In short, the language agnostic all purpose incredible makefile6 was pretty great before, and with compose.mk it's almost unstoppable =P

The biggest single problem for many modern use-cases is just that Makefiles have nothing like native support for running tasks in containers, but this is exactly what compose.mk fixes. Makefiles are already pretty good at describing task execution, but describing the containers themselves is far outside of that domain. Meanwhile, docker compose is exactly the opposite: it easily handles runtime specs while it struggles with tasks. Since both are de facto standards for the mutually exclusive stuff they specialize in, make/compose is a perfect combination.
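
To make the division of labor concrete, here is the raw pairing written by hand, without any compose.mk sugar. The `tooling` service, the file paths, and the test command are hypothetical; the point is that the compose file owns the runtime spec, the Makefile owns the tasks, and compose.mk's target dispatch exists to generate exactly this kind of glue so you don't have to write it yourself:

    # docker-compose.yml (illustrative):
    #   services:
    #     tooling: { build: ., volumes: ["./:/workspace"], working_dir: /workspace }

    .RECIPEPREFIX := >
    COMPOSE := docker compose -f docker-compose.yml

    .PHONY: test
    test:
    > $(COMPOSE) run --rm tooling sh -c "pytest -q"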

Returning to the question of what make really is..

It's true that it's not a task runner, but it's also not a build tool. It's a metalanguage. And compose.mk organizes and generalizes that potential to the extent that, while it is backwards compatible if you're into that, it is best thought of as a different kind of language altogether. In other words if you love make you'll love compose.mk, but even if you truly despise make, there's some reason to believe that you might like compose.mk anyway.

As an extension to vanilla Makefile, CMK-lang is worth considering, although adopting it is not required. In the CMK-lang overview, make is compared to a gadget: besides being ubiquitous, it's actually the unique minimal gadget that supports macros and DAGs. Put in this light, there's really no equivalent alternative available, and staying close to shell/containers for offloading complexity to other tools is just a bonus.

Obligatory Compare/Contrast


Yes.. there are things such as remake, taskfile, and just, to give a few examples, but if you've glanced at the rest of the documentation, you'll probably realize the goals and feature sets are significantly different. Even if compose.mk were actually aimed at being a task-runner, any sufficiently advanced runner could host the other ones, so it's not really that kind of contest! Projects that are large enough may even have room for multiple task runners, each doing whatever it's best at. But this is the part where one is supposed to say the competition sucks, so here it is :)

With big projects, just using just might mean asking like 100 people to install something, which only covers laptops and not other contexts / environments. Idiomatic compose.mk is different from make, and the usual quirks people complain about4 don't really come up.

Again, feature sets differ, so the comparison is pointless.. but in terms of back-end implementation, the just source weighs in at ~100 files with ~20k lines of code, which compared to compose.mk is ~100x and ~10x, respectively. Makes perfect sense, right? just has to rebuild make, whereas compose.mk just uses it. The most frequent reasons cited for choosing just over make are probably things like polyglots, task listing & online help, "private" targets, etc. All of which are provided by compose.mk more or less, with bonus native support for docker, TUIs, datastructures, etc.

Bugs & Dependencies

Without a doubt the more advanced features of compose.mk have more bugs and/or limitations compared to just, but compose.mk aims to do more, and ultimately it's about as stable as the things that it builds on (e.g. docker, bash, tmux, and make).

In terms of minimalism and dependencies, just just requires just, and is available from package managers, whereas compose.mk does require make and most use-cases will want docker.

For that matter, we have to use things like cut, sed and awk too, plus you'll want git or curl to install it! Still, a direct comparison in terms of "minimalism" isn't really possible. If you want to use tools like jq from just, then you need those too, whereas compose.mk wraps things like jq implicitly and provides easy access to everything else as long as it's available (or constructable) with docker.

In the end, the main differentiator for compose.mk is that it's designed first and foremost to be extended. It moonlights as a tool, but in many ways is much closer to an extensible scripting language.

The Top of the Stack


Tools like ansible, cloudformation, docker, docker-compose, and eksctl are just a few examples where it's normal to have extremely long & complex command-line invocations, awkward to type and hard to remember.

Those invocations possibly depend on environment variables and other config-context for correct behaviour. You can't get around this with bash-aliases, because developers won't have those in sync, and problems with plain bash scripts have already been discussed. Sourcing .env files or loading bash functions as part of developer-shells all tends to create issues, because people lose track of the env state in one tabbed terminal vs another, or the state is unavailable to their IDEs, etc. Complicating the matter further, some of these tools need access to the same config data, and some operations require multiple tools, or data-flow between tools.

Having a well-defined "top" of your stack that sets some context and provides aliased entrypoints for cumbersome-but-common stuff becomes really important. Just as important, that context needs to be project-based, and shouldn't leak out into your shell in general, or the IDE in general, or your system in general.

Makefiles are the obvious choice here, because they enable everything and require nothing, allowing for a pretty seamless mixture of config, overrides, entrypoint aliases, context management, task orchestration, and new automation layers that connect and recombine existing ones.
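
As a sketch of what that "top" can look like, here's a hypothetical project Makefile wrapping terraform and kubectl; the profile, paths, and selectors are invented for illustration:

    .RECIPEPREFIX := >
    # Project-scoped context lives here, not in anyone's shell profile or IDE.
    export AWS_PROFILE := staging
    export AWS_REGION  := us-east-1

    .PHONY: plan deploy logs
    plan:
    > terraform -chdir=infra plan -var-file=staging.tfvars

    deploy: plan
    > terraform -chdir=infra apply -var-file=staging.tfvars

    logs:
    > kubectl --context staging -n app logs -l app=web --tail=100

Nothing about this needs compose.mk yet, but it shows the shape: short, memorable entrypoints with the context pinned in one versioned place.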

No Golden Version for Frameworks


If you're using large frameworks like Terraform/Ansible at all, then there's a good chance you'll eventually need multiple versions of that framework, at least temporarily. You can even see this at the level of tools like awscli and the split for v1/v2.

Basically your options at a time like this are to:

  1. Start building a "do everything" golden container or VM, put both versions in the same place.
  2. Start messing around with tools like virtualenv, tox, terragrunt, or asdf for sandboxing different versions.
  3. Start replacing lots of foo-tool invocations with awkward-but-versioned docker run foo/foo-tool:VERSION ... commands.
  4. Rely completely on CI/CD like Jenkins/Github or workflow-engines like Atlantis or Argo for mapping your tasks onto versioned containers.

Choice 1 can be labor-intensive and fragile; choice 2 can be fragile, the tooling may be controversial, or teammates may not opt in. Choice 3 is ugly, fragile, and particularly hard to maintain. Choice 4 is maybe fine once it's actually working, but it's also basically punting on all local development forever, so it can be painful to change or debug. In the worst case, choice 4 also has the downside that you're accepting platform lock-in as well as betting everything on a single point of failure.

Alternatively.. you could manage your tool-containers with docker compose, then launch any target in any container with compose.mk's approach to target dispatch. You can still call make from CI/CD platforms. In fact, while you added smooth workflows for local development, you also just fixed lots of bad coupling, because now you can easily switch your whole CI/CD backend from any of these choices to any of the others.
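
For instance, a compose file can pin both framework versions as tool containers, and a couple of targets give each one a short entrypoint. The image tags, paths, and service names below are illustrative, and the dispatch is shown hand-rolled rather than via compose.mk's own idioms:

    # docker-compose.tools.yml (illustrative):
    #   services:
    #     tf_old: { image: "hashicorp/terraform:0.13.7", volumes: ["./infra:/workspace"], working_dir: /workspace }
    #     tf_new: { image: "hashicorp/terraform:1.5.7",  volumes: ["./infra:/workspace"], working_dir: /workspace }

    .RECIPEPREFIX := >
    COMPOSE := docker compose -f docker-compose.tools.yml

    .PHONY: plan.old plan.new
    plan.old:
    > $(COMPOSE) run --rm tf_old plan
    plan.new:
    > $(COMPOSE) run --rm tf_new plan

CI/CD can still just call `make plan.new`, which is part of what keeps the backend swappable.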

DAGs Rule Everything Around Me


You probably already know that directed acyclic graphs aren't just for Airflow, and these come up practically any time you're thinking about dependency trees, scheduling, and lots of other stuff. DAGs are pretty much what make does, and it's good at it.

For lots of automation work, and especially lifecycle automation, DAGs of tasks/prerequisites are the most natural way to model things. Engines for resolving "desired state" like Terraform/Ansible are great at what they do, but they are not really built for describing DAGs. If you're spending lots of time messing around with hooks and plugins, it might be a sign that you're struggling to turn your desired-state-management tools into DAG processors.
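
As a reminder of how cheaply make expresses a DAG (target names invented for illustration):

    .RECIPEPREFIX := >
    .PHONY: deps build test package

    deps:
    > echo "fetching dependencies"
    build: deps
    > echo "building"
    test: deps
    > echo "testing"
    package: build test
    > echo "packaging"

    # `make --jobs 2 package` runs build and test concurrently once deps is done.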

Glue Code, Notebooking, DataScience


Opinions will differ about whether it is good or pleasant, but glue code is increasingly everywhere.

Different projects describing notebooks/pipelines are sometimes tidy, and sometimes admonish you to complete 12 steps of copy-paste to get started. Most people have probably also seen wildly disorganized notebooks shoved into production directly as pipelines or pipeline-components at one time or another. You can try to stop that with culture and code reviews, but it's worth asking: what can be done with tooling so that the "good" way is the same as the "easy" way?


Believe it or not, related topics in this neighborhood actually include things like the reproducibility problem in science, the idea of executable papers in CS, and the fact that no one cares about your project repo. Because let's be real: no one wants to look at your project, because in the best case it's still a mess of docs/examples/tests/support data, and people have to try to understand how it's all organized before they can even try to use it. Usually, even after a project is understood, it's not like you can use it directly; it must be adapted or edited or installed or configured, and now we're down the rabbit hole of dependencies.

So, part of what we're really talking about is the ability to export a project as a tool with minimum extra effort, and that won't be easy without some structure imposed. But it would be great if that structure doesn't always strictly require some kind of proprietary format that guarantees you can never even prototype flows locally. And it would also be nice if it didn't make it harder to just edit code. Notebook-driven development has obvious benefits and many problems that are perhaps less obvious.

To point the microscope at data-engineering instead of data-science.. most people have probably also seen the phenomenon where some team is struggling with enterprise java frameworks and/or cloud-based services for 3 months, just to deliver a CSV -> JSON conversion tool. Maybe that's a necessary problem to be solving and maybe not.


So what does any of this have to do with compose.mk?

Well.. none of this is to suggest that mini ETL demos mean you can throw away platforms like Airflow, Databricks, AWS Glue / Step Functions, etc. (What's more realistic is that maybe your compose.mk-backed DAG is just a node in a bigger Airflow-DAG, or that Airflow/ECS/Fargate/Databricks calls your tasks piece-wise.)

And obviously local-first development approaches cannot completely replace services like AWS lambda, sagemaker, or glue. But.. having a parallel implementation of a simpler pipeline that is "close enough" is often a good idea for prototyping, development, and testing.

Equally obvious, something like compose.mk's self-extracting archives will not cure the reproducibility crisis either. And weird self-contained polyglots are not always a great idea!

What is a good idea though is making structured prototyping simpler and more accessible, while ensuring that those prototypes can have versioned components, fully specified runtime requirements, and are actually a positive step towards shipping.

For a closer look at what compose.mk can offer in terms of projects-as-tools and environments-as-tools, see the notebooking demo, the GUI demo, and the RAG demo.

Just Use Tasks in Containers


Consider a typical workflow for devops:

  1. You want new software available on your Kubernetes.
  2. You go read the installation instructions, which tell you to helm repo add .. then helm install .. or to kubectl something.
  3. Great, you already use ArgoCD, but there's something going on with the chart. Maybe helm inputs need a load-balancer endpoint from terraform output and it's not available ahead of time, or maybe there's a helm version mismatch.
  4. You dutifully begin to translate helm install instructions into another preferred ecosystem's tool wrappers. (Maybe it's a terraform helm provider or an ansible helm module or a jenkins container-step).
  5. That's not a ton of work by itself, but soon you find you're deep into fiddling with the mediating abstraction.
  6. Looks like you need another version of your core tooling (yay, Terraform upgrade)
  7. Or maybe another version of the plugin (yay, sifting Jenkins ecosystem abandonware; yay, fighting with custom-tool manifests for Argo)
  8. Or maybe you have to pull in lots of config you don't need to access some config that you do need (yay, our Ansible needs a submodule full of irrelevant plugins/tasks/inventory just to get started)
  9. You realize your mission was to deploy something, not change the way deployments work. Oops, any upgrades or changes at the "top" or the "outside" of your tool chain like this might fix one thing and break something else. Now changes are hard, or scary, or both.

After you get something working, you realize you're now relying on a remote execution environment that might itself depend on multiple layers of event-triggering happening correctly, and.. if it ever breaks then no one can ever actually run anything locally. Yikes.

Simply using tool containers as directly as possible is often a better way, and it won't rule out your GitOps in general or your specific platform of choice.

Unlike the "helm bolted on to terraform/ansible/jenkins" approach, using compose.mk makes it easy to get closer to your tools, and directly use multiple versions of the actual tools you care about, and without affecting existing work. If a tool breaks, you can debug that issue directly, without re-running ansible you don't care about, without digging in terraform core dumps, without trying to connect to weird remote runtimes via ssh or kubectl, and without looking up that one weird environment variable to enable plugin traces. If you need a tool then you just use it.

Forever Development Considered Harmful


While compose.mk is committed to bugfixes, and will continue to change in cases where it feels incompatible, incomplete, or internally inconsistent.. the goal is to freeze development as much as possible after it's mature enough to be extensible. It will never change simply to accommodate "one more" domain-specific thing, e.g. no increase in surface area for use cases like documentation builds or cluster lifecycles, because those things are easy to express as extensions.

As another example, support for something like podman won't ever bloat the code-base until/unless it's very clearly more ubiquitous than docker is. In some ways, compose.mk is a proof of concept and a reference, so that usage of docker or even make itself is an implementation detail that could be swapped out in a fork. Hate to see an admittedly gruesome transpiler hackjob written in awk? Rewrite it by all means.. but if it's more elegant at the expense of becoming larger or less portable, it probably can't be merged. See also the fork-and-forget advice for installation, and remarks re: contributing.


Why fork-and-forget advice for compose.mk in an age of "pull the latest" and like-and-subscribe? Well, you know how it is out there. Lately we can't even trust our web-browsers to keep copy/paste working, and much has been written elsewhere about things like the front end treadmill.

In some software ecosystems maybe over a third of all development effort is spent fighting with dependencies and upgrading just for the sake of it. Maybe devs deserve this for living near the cutting edge.. but users are automatically opted-in to updates almost everywhere, and now we just live with the threat that the one critical workflow that you trusted your SaaS/browser/phone for might break at any time. Change is fine when it's necessary. Completely useless churn that creates busy-work for millions of people right up until a project is killed and a community abandoned is bad.

Obviously there is another way. Most operating systems do better, even with changes, thanks to curmudgeons like Linus and hard-line "no regressions" policies. Many AWS core services are also great examples, and show decades of miraculous uninterrupted backwards compatibility while they continue to improve. And thanks to good design, unix CLI tools are still relevant with almost no changes after 50 years. For projects closer to the top of the stack though, one also wonders.. should any project that's more than a decade old really have a core that's still under active development? Wouldn't good design mean growing a plugin or extension system that could effectively host ongoing development instead?

Polyglots Considered Pretty Reasonable


Anyone who has ever wanted to leave their ORM behind in favor of writing naked SQL knows the appeal. So does anyone using FFI-style programming, in any number of languages and across any number of languages. (See also the julia polyglot demo if you're into that sort of thing.)

Some will find it controversial, but the biggest and best example of "polyglots considered reasonable" is something you'll find in distributed application design, next door to topics in microservices, heterogenous queue workers, serverless FaaS, etc. The advantages and disadvantages of this type of thing have been much discussed, and it's definitely true that deciding good service-boundaries is hard, and it's also hard to even make good general statements about simple stuff like min/max values for service line-count.

Meanwhile though, the polyglot design approach for non-distributed applications has basically never been tried, unless we're counting edge-cases like notebooks, shell-scripting, build-processes, etc. Those edge-cases are notoriously ad hoc though, usually lacking useful structure, versioned dependencies, strategies for code re-use, etc. No wonder these kinds of undistributed polyglots are basically considered a necessary evil even when they are used.

Although mere philosophy usually doesn't help to change minds, perhaps better tooling could. One of the core features for compose.mk is providing lots of ways to incorporate foreign tools/languages, organize them, and orchestrate across them. And compared with microservice guidelines, maybe rules of thumb for application-level polyglot designs are also easier to come by. Here's a hot take:

What's a maximum length for foreign code?
Well, ~20-30 LOC is probably a reasonable goal for any discrete chunk of foreign code, because this is the typical advice for maximum length of methods/functions.
How many distinct foreign code objects?
Probably at least one, because if not then you might be writing very complex shell script and/or obscure and tricky awk, make, etc, which is exactly what compose.mk should help you to avoid! If you need more than 3 foreign code blocks though and they are all in the same language, maybe you should just be working directly in that language?
How many distinct foreign languages?
As many as it takes, but not too many, so probably just ask your friends first and try to be cool about it. Stay away from brainfuck in your build scripts but otherwise.. you know, have some fun and get weird with it as long as you're building something discrete, small, and useful, and getting something new that's actually worth any docker-pull that's required.

A big part of the benefit of polyglots is that you can responsibly leverage a language for gremlins for that one thing that it's perfect at, or perform amazing feats of matrix golf with APL, or import a little antigravity into your .net shop. In some cases, you'll find that judicious and tactical usage of different languages not only makes the impossible possible, it makes the impossible easy.
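
For a concrete picture of the kind of small foreign block these rules have in mind, here is a sketch that keeps ~5 lines of python in a make variable and streams it into a pinned interpreter container; the image tag and the report logic are made up, and compose.mk's own polyglot idioms wrap this pattern in nicer syntax:

    .RECIPEPREFIX := >

    # The foreign code lives in a make variable; the runtime is pinned by the image tag.
    define REPORT_PY
    import json, datetime
    summary = {"generated": datetime.datetime.now().isoformat(), "status": "ok"}
    print(json.dumps(summary, indent=2))
    endef
    export REPORT_PY

    .PHONY: report
    report:
    > echo "$$REPORT_PY" | docker run --rm -i python:3.12-slim python3 -

The moving parts are always the same: a small block of foreign code, a pinned runtime, and a target name that the rest of the automation can depend on.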

Compose Considered Pretty Reasonable


People have long memories about the tech that has burned them in the past, and docker compose is a great example of that.

For docker compose, the early days had much crashing and confusion stemming from a dependency on python/pip, and from this came much instability. For a long time there was also churn in the configuration schema. But compose has been a part of docker core for a while now, and the schema is stable to the point that even the version: key is obsolete. Other sins of the past might include YOLOing a compose file onto EC2 to try and get by without a kubernetes cluster, but managed clusters are easy to get these days. A compose-file is not a "production ready" way to manage services, but the situation is completely different if you're using it exclusively for managing tool containers. (This does leave open the question of whether pipelines should be considered as services or as DAGs of tools though, and the answer is that it depends..)

DIND Considered Pretty Reasonable


People have long memories about the tech that has burned them in the past, and DIND is also a great example of that.

DIND historically was a great way to cause all kinds of chaos, including layer corruption, performance problems, etc. But the first search result for "docker in docker" as of right now is probably still about problems that are now 10 years old, and times have changed. Meanwhile, rootless DIND is now about 5 years old. Another thing that confuses the issue is real docker-in-docker (i.e. running a sandboxed docker daemon) vs what you might call docker-out-of-docker, aka DooD, aka bind-mounting the host docker socket.
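
The distinction is easy to see in a sketch. DooD means a containerized tool drives the host daemon through a bind-mounted socket, while true DIND runs its own sandboxed daemon (e.g. the docker:dind image, which needs --privileged). A minimal DooD check, with an illustrative target name:

    .RECIPEPREFIX := >
    .PHONY: dood.check
    dood.check:
    > docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
    # The containerized CLI talks to the *host* daemon, so this lists the host's containers.
    # True DIND would instead start a nested daemon: docker run --privileged -d docker:dind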

So what does any of this have to do with compose.mk? Well, the embedded TUI is itself dockerized, and uses containers for UI elements. Technically, with the right setup, container dispatch can work in a scenario where "private targets" in one container are still able to call private targets in another (although good organization will tend to avoid this anyway). Things like that are possible via DooD, but they also work from e.g. GithubActions because of true DIND. And as discussed re: docker compose in the last section.. deciding how crazy this is depends a lot on whether we're talking about services or tool-containers, and evaluating whether DIND or DooD is a major performance problem or a huge security risk depends on your use-case.


For CI/CD platforms, finding DIND support available out of the box is pretty normal. For platforms like Argo / Atlantis / Airflow / Databricks, of course there's a way to dispatch a container as a step. But what about lightweight usage of a tool container from inside a step? It turns out that supporting this isn't usually that hard, because there's almost always a way to bring-your-own-container for the platform-base itself. Or in the absolute worst case, platforms that don't allow you to bring your own image will somehow allow bind-mounting tricks, remote docker-hosts, or a side-car.

This suggests that for the type of scenario described in Just Use Tasks in Containers, you could go through the effort N times to add support for N custom tool versions. But if you're at that level of customization anyway, why not just support another layer of docker and do it once? Obviously if additional deep platform-setup must be involved, DIND/DooD does remain a subtle issue. But at the same time: DIND/DooD is almost always feasible, and it's not automatically bad!

References



  1. Recursive make Considered Harmful 

  2. Non-recursive Make Considered Harmful 

  3. Unreasonable effectiveness of Makefiles 

  4. make vs just 

  5. Contrary to popular belief, art doesn't need to be beautiful, especially if it's helping the viewer to examine their own sense of what beauty is. Sometimes art is also about demonstrating mastery of materials. 

  6. The language agnostic all purpose incredible makefile