Last verified April 2026 · 9 min read
Docker build optimisation for CI: the recipes that pay back
Docker builds are often the slowest job in a CI pipeline. An 8-minute Docker build on every PR, at 80 PRs/day, is 640 minutes/day of billed time. With layer caching properly configured, that drops to under 2 minutes. These are the recipes that pay back.
Recipe 1: Multi-stage builds
Multi-stage builds separate the build environment from the runtime image. The final image is smaller, the build cache is more granular, and sensitive build tools do not ship to production.
# Multi-stage: builder + runtime
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
# Full install: the build step below needs devDependencies
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so the runtime stage copies production deps only
RUN npm prune --omit=dev

FROM node:22-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
Recipe 2: GHA layer cache
The GitHub Actions cache backend stores Docker layers between runs. On a cache hit, unchanged layers are restored instead of rebuilt. Typical result: 8-minute build drops to 90 seconds.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    push: ${{ github.ref == 'refs/heads/main' }}
    tags: ghcr.io/${{ github.repository }}:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
Note: GHA cache counts toward your 10 GB repository cache limit. For large images, use a registry cache (GHCR, ECR, R2) instead.
Recipe 3: buildx multi-arch (ARM + x86)
Combine multi-arch builds with caching to support both ARM and x86 deployments from a single pipeline. See the ARM runners piece for the full cost case.
- uses: docker/setup-qemu-action@v3
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
  with:
    platforms: linux/amd64,linux/arm64
    cache-from: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache
    cache-to: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache,mode=max
    push: true
    tags: ghcr.io/${{ github.repository }}:latest
Recipe 4: RUN --mount=type=cache
BuildKit cache mounts persist package manager caches across builds without storing them in the image layer. One line, massive saving on dependency-heavy builds.
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# Cache npm store between builds (does NOT add to image layers)
# Full install: the build step below needs devDependencies
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN --mount=type=cache,target=/root/.npm \
    npm run build
Recipe 5: Distroless final images
Distroless images contain only your application and its runtime dependencies, not a full OS. Smaller images: faster pulls on deploy, smaller attack surface, faster container startup in serverless/Kubernetes environments.
FROM node:22-alpine AS builder
# ... build steps ...

# Distroless Node.js runtime
FROM gcr.io/distroless/nodejs22-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["dist/server.js"]
Docker CI anti-patterns
COPY . . before COPY package.json
Copying all source files before the package install layer means every code change invalidates the npm install cache. Copy the lockfile first, run the install, then copy the source.
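A minimal sketch of the fix, reusing the Node setup from the recipes above:

```dockerfile
FROM node:22-alpine
WORKDIR /app

# BAD: any source edit invalidates the install layer
# COPY . .
# RUN npm ci

# GOOD: lockfile first, so the install layer only rebuilds
# when dependencies actually change
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
```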
No .dockerignore
Without .dockerignore, the build context includes node_modules, .git, and build artifacts. The context can exceed 500 MB. This slows every build, regardless of layer caching.
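A starter .dockerignore for a Node project (entries are typical, not exhaustive; adjust to your tree):

```
node_modules
dist
coverage
.git
.github
*.log
.env
Dockerfile
```

Excluding the Dockerfile itself is safe with BuildKit: it is read directly, not from the build context.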
Single-stage 2 GB images
A 2 GB production image takes 60-90 seconds to pull on every deploy. Multi-stage builds with distroless finals typically produce images under 200 MB.
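To find out which layers are bloating an image before reaching for multi-stage, docker history sorts the story out (the image tag here is a placeholder):

```
# List layers by size, largest first
docker history --format '{{.Size}}\t{{.CreatedBy}}' myapp:latest | sort -hr | head
```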
Pulling base images fresh every build with --no-cache
Explicitly disabling the BuildKit cache defeats all the optimisations above. Only use --no-cache when debugging cache correctness issues.
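If you want a cache-busting escape hatch for debugging, one pattern is to gate the build-push-action no-cache input behind a manual trigger instead of hard-coding it. The no_cache input name here is an assumption, not part of the action:

```yaml
on:
  workflow_dispatch:
    inputs:
      no_cache:
        description: Rebuild without cache (debugging only)
        default: "false"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: docker/build-push-action@v6
        with:
          context: .
          # Bypasses the layer cache only when explicitly requested
          no-cache: ${{ inputs.no_cache == 'true' }}
```

Day-to-day builds keep the cache; a one-off manual run can bust it without touching the workflow file.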
Cost worked example
Team running 80 PRs/day with Docker builds