Known Limitations and Issues


KUBECONFIG should already be set


We fake it for builds/tests, but note that KUBECONFIG generally must be set for things to work! Sadly, lots of tools will fail even simple invocations like helm version or even --help if it is undefined or set incorrectly, and they often crash with errors that aren't very clear.

Dealing with per-project KUBECONFIGs is best: set one inside your project Makefile and you can be assured of no interference with your main environment. Alternatively, use your bashrc if you really want globals, or provide it directly as an environment variable per invocation, using something like KUBECONFIG=.. make <target_name>.
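
For example, a project Makefile might pin it to a file inside the repo. A minimal sketch, where the config file name is just illustrative:

export KUBECONFIG:=$(shell pwd)/local.cluster.yml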

Working Directories


By default, the compose file shares the working directory with the containers it runs, as a volume. This means the files you're working with should be inside or somewhere below the working directory! The compose file itself can be anywhere though, so instead of keeping it in your project's source-tree you can decide to deploy it separately to /opt or ~/.config.
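
In compose terms, that sharing looks roughly like the sketch below; the service name, image, and mount point are illustrative, not copied from k8s-tools.yml:

services:
  k8s:
    image: alpine  # stand-in for whatever tool image the service actually uses
    working_dir: /workspace
    volumes:
      - ${PWD}:/workspace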

Docker and File Permissions


This is the usual problem of a root user inside containers vs. a normal user on the host, and the file permissions that result. The alpine base is a container running as root, as are many other things. And there is a long-standing known bug in the compose spec that makes fixing this from the compose file hard.

Invoking compose exclusively from a Makefile actually helps with this, though. By default with compose.mk, the DOCKER_UID, DOCKER_GID, and DOCKER_UGNAME variables are set and available for use in k8s-tools.yml. This works slightly differently on Linux and MacOS, based on what messes things up the least, but YMMV. On Linux, it looks something like this:

export DOCKER_UID?=$(shell id -u)
# prefer the docker group's id, falling back to the user's primary group
export DOCKER_GID?=$(shell getent group docker | cut -d: -f3 || id -g)
export DOCKER_UGNAME?=user
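
On the compose side, these variables can be consumed so that containers run as the host user instead of root. A minimal sketch of what such a stanza might look like; the actual wiring in k8s-tools.yml may differ:

services:
  helm:
    image: alpine/helm  # illustrative image choice
    user: ${DOCKER_UID}:${DOCKER_GID}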

If you're not working with Makefiles at all, you can export appropriate values in your .bashrc or in .env files you use. If none of this is appealing, and you mix host-local and dockerized usage of things like helm, then you may end up with weird file ownership. If it comes up, you can fix it using sudo chown -R $USER:$USER ..
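
Outside of Make, the equivalent shell exports would be something like this (written to keep the same docker-group fallback):

export DOCKER_UID=$(id -u)
export DOCKER_GID=$(getent group docker | cut -d: -f3)
export DOCKER_GID=${DOCKER_GID:-$(id -g)}  # fall back to the primary group if there's no docker group
export DOCKER_UGNAME=user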

MacOS, Docker Sockets, and DinD


As long as docker is working, any kind of setup (Docker Desktop, Rancher Desktop, Colima) can work with compose.mk for container-dispatch. But for working with k8s-tools.yml containers specifically, the docker-socket sharing must also be working. If you're having problems that might be related to this, first make sure that your setup can correctly run this command:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker ps

If the volume mount is working correctly, the result here should look the same as docker ps from your host. If your docker socket is in a different place (like ~/.rd/docker.sock for Rancher Desktop), you may need to symlink the file.
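
For example, with Rancher Desktop something like this might work, though details depend on your setup:

$ sudo ln -sf ~/.rd/docker.sock /var/run/docker.sock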

Docker Desktop on MacOS can be especially annoying here, and the same seems likely to be true on Windows. YMMV, but as of 2024, sharing the socket may require changes in the UI preferences, and/or enabling or disabling virtualization backends. Another way the problem can manifest is an error like this:

You can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.
See https://docs.docker.com/desktop/mac for more info.

If you want better parity with docker on Linux, you might like to check out Colima/Rancher.

Pipes & Temp Files


Working with streaming pipes involves generating temporary files with mktemp and removing them with a trap when the process exits. Pure streams would be better. Also, in many cases the temp files need to be in the working directory, since otherwise they can't be shared by docker volumes; moving to a temp-dir based approach would be better.
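
Roughly, the pattern currently in use looks like this sketch (names are illustrative):

tmpf=$(mktemp ./stream.XXXXXX)  # created under the working directory so docker volumes can see it
trap 'rm -f "$tmpf"' EXIT       # remove the temp file when the process exits
cat > "$tmpf"                   # buffer the incoming stream to disk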