Docker

Container runtime for isolating and reproducing project environments.

Goals

Package the podcast pipeline — its Go binary (axe), Python TTS stack (KittenTTS, phonemizer, espeak-ng), ffmpeg, and Bun runtime — into a reproducible, self-contained image that runs identically regardless of host state.
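A minimal multi-stage sketch of what such an image could look like. Stage names, file paths, package names, and the `./cmd/axe` build target are all illustrative assumptions, not the actual build:

```dockerfile
# Build stage: compile the axe binary (go.mod requires Go >= 1.25.0)
FROM golang:1.25-bookworm AS build
WORKDIR /src
COPY . .
RUN go build -o /out/axe ./cmd/axe    # package path assumed

# Runtime stage: Python TTS stack, espeak-ng, ffmpeg, Bun
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip espeak-ng ffmpeg curl unzip ca-certificates \
    && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://bun.sh/install | bash    # needs unzip present
COPY --from=build /out/axe /usr/local/bin/axe
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

Only the compiled binary crosses stage boundaries; the Go toolchain never reaches the runtime image, which is what keeps it lean.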

Effectiveness

Works. Once the image builds correctly, the pipeline runs cleanly inside it, and the isolation from host Python/Node versions is worth it.

What made it effective

The multi-stage build keeps the final image lean.

Bonus utility

docker logs <container> --tail N is the primary pipeline visibility tool during long runs — more reliable than trying to keep a TTY open.

Friction / pain points / surprises

Local path dependencies in package.json don't resolve at build time. bun install inside the Dockerfile fails when a dependency like @regular/otel-run points to a local workspace path ("../otel-run") that isn't copied into the image layer. Fix: remove bun install from the Dockerfile entirely and run it in entrypoint.sh after the workspace volume is mounted.
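A sketch of that entrypoint arrangement. The mount point and script layout are assumptions:

```shell
#!/usr/bin/env sh
set -e
# The workspace (including the sibling ../otel-run directory) is volume-mounted
# at container start, so the "../otel-run" path in package.json now resolves.
# Installing here, rather than at image build time, is what makes that work.
cd /workspace/podcast    # mount point assumed
bun install
exec "$@"
```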

unzip must be installed explicitly for Bun's installer. curl -fsSL https://bun.sh/install | bash silently fails if unzip is missing: the install script extracts a zip archive, and unzip isn't included in the default Debian slim image. Add it to the apt-get install list.
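In Dockerfile terms (the package list beyond unzip itself is illustrative):

```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl unzip ca-certificates \
    && curl -fsSL https://bun.sh/install | bash
```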

Go version pinning in go.mod causes build failures. The axe fork requires Go ≥ 1.25.0. golang:1.24-bookworm fails with a clear version mismatch error, but only during go build — the image layers up until then without issue. Pin the build stage to a version that satisfies go.mod.
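Concretely, the directive and the matching build-stage pin (image tag assumed):

```dockerfile
# go.mod declares:  go 1.25.0
# so the build stage must satisfy it:
FROM golang:1.25-bookworm AS build
```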

The Docker socket isn't available by default inside a container. Running docker build or docker run from inside an agent container requires /var/run/docker.sock to be mounted and the container user to have permissions on it. Without --group-add $(stat -c %g /var/run/docker.sock), every docker command fails with a permission-denied error. This is host-level container configuration, not something the agent can fix itself.
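The host-side invocation looks roughly like this (the image name is a placeholder; the flags are standard docker run options):

```shell
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c %g /var/run/docker.sock)" \
  agent-image    # placeholder image name
```

The --group-add adds the socket's owning group (usually docker) to the container user's supplementary groups, which is what grants write access to the mounted socket.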

Stale image layers serve old binaries. docker build caches aggressively. If a dependency version changes (e.g., the axe branch is updated), --no-cache is required to force a fresh pull. Without it the image builds from cache and silently runs the old binary.
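The rebuild after a dependency bump, then, looks like (image tag is a placeholder):

```shell
# Skip all cached layers so the updated axe branch is actually re-fetched
docker build --no-cache -t podcast-pipeline .
```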

wrangler skips unchanged files in dist/, causing stale deploys. When the pipeline runs inside Docker, dist/ is populated inside the container filesystem. On a subsequent run the container may have an outdated dist/ that wrangler considers unchanged relative to the Pages deployment, and uploads nothing. Fix: either always clean dist/ before building, or run the deploy step from outside the container where dist/ is known fresh.
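One way to guarantee a fresh dist/ on every run, as a sketch; the build command itself is a placeholder:

```shell
#!/usr/bin/env sh
set -e
# Remove any dist/ left over from a previous container run, then recreate
# it empty, so every file the build emits looks new to wrangler.
rm -rf dist
mkdir -p dist
# bun run build    # placeholder: this step repopulates dist/
```

Cleaning before the build (rather than after) also means a failed build leaves an obviously empty dist/ instead of a stale one.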