Docker provider

The Docker provider runs jobs inside containers during loom run --local. It gives each job a clean, reproducible environment defined by a container image.

When Loom uses Docker

Loom selects the Docker provider when the resolved job has a non-empty image: value (or an image.build block):

test:
  stage: ci
  target: linux
  image: alpine:3.20
  script:
    - echo "running in container"

If image: is omitted, Loom uses the Host provider instead.
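
For contrast, a minimal sketch of a job that falls back to the Host provider (the job name lint is illustrative):

```yaml
lint:
  stage: ci
  target: linux
  # no image: key, so Loom runs this job with the Host provider
  script:
    - echo "running on the host"
```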

Prerequisites

  • Docker installed on the local machine.
  • Docker daemon running and reachable (docker info should succeed).
  • The job's image must be pullable from a registry or already present locally.
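
A quick preflight sketch that checks the second prerequisite before running jobs:

```shell
# Preflight check: confirm the Docker daemon is reachable before loom run --local
if docker info >/dev/null 2>&1; then
  echo "docker daemon reachable"
else
  echo "docker daemon unreachable (is Docker running?)"
fi
```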

Backend

The Docker provider communicates with the Docker daemon exclusively through the Docker Engine SDK (Go client library). There is no CLI shell-out backend — Loom talks directly to the Docker Engine API.

No backend selection flag is required. If Docker is reachable, the provider works.

Workspace mount modes

When the Docker provider creates a container, it mounts the isolated workspace into the container at /workspace. Loom supports two mount strategies:

  • bind_mount: bind-mounts the host workspace directory directly into the container. Changes inside the container are visible on the host immediately.
  • ephemeral_volume (the default): creates a Docker volume, seeds it with a copy of the workspace, and mounts that volume into the container. After execution the volume is removed; no workspace changes are synced back to the host.

Configuration

Set the mount mode with either:

  • CLI flag: --docker-workspace-mount <bind_mount|ephemeral_volume>
  • Environment variable: LOOM_DOCKER_WORKSPACE_MOUNT=<bind_mount|ephemeral_volume>

Precedence: flag > environment variable > default (ephemeral_volume).

# Default: ephemeral_volume
loom run --local --workflow .loom/workflow.yml

# Explicit bind_mount
loom run --local --docker-workspace-mount bind_mount --workflow .loom/workflow.yml

# Environment variable (flag overrides if both set)
LOOM_DOCKER_WORKSPACE_MOUNT=bind_mount loom run --local --workflow .loom/workflow.yml
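
The precedence rule can be sketched as a small shell function (illustrative only, not Loom's actual code):

```shell
# Resolve the workspace mount mode: flag > environment variable > default
resolve_mount_mode() {
  # $1: value of --docker-workspace-mount, empty if the flag was not given
  if [ -n "$1" ]; then
    echo "$1"
  elif [ -n "$LOOM_DOCKER_WORKSPACE_MOUNT" ]; then
    echo "$LOOM_DOCKER_WORKSPACE_MOUNT"
  else
    echo "ephemeral_volume"
  fi
}
```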

When to choose each mode

  • Standard local development and fast iteration: bind_mount (lowest latency; changes are immediately visible on the host).
  • Jobs that modify many workspace files, with a risk of permission or ownership issues: ephemeral_volume (isolates container writes from the host filesystem).
  • Large workspaces where copy overhead is acceptable in exchange for stronger isolation: ephemeral_volume.
  • CI-like fidelity with volume-based isolation: ephemeral_volume.

How ephemeral_volume works

  1. A Docker volume named loom-job-workspace-<job-id> is created.
  2. A helper container (busybox:1.36.1) copies the host workspace into the volume.
  3. The job container mounts the volume at /workspace.
  4. After the job completes, the volume is removed.

If any step in this process fails (volume creation, image pull for the helper, or seed), the volume is cleaned up and the job fails with a descriptive error.
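
The four steps are roughly equivalent to the following docker CLI sequence (Loom itself uses the Go SDK, and names the volume loom-job-workspace-<job-id>; "demo" stands in for the job id here):

```shell
# Skip gracefully when no Docker daemon is available
if ! docker info >/dev/null 2>&1; then
  echo "docker unavailable; skipping demo"
  exit 0
fi
vol=loom-job-workspace-demo
docker volume create "$vol"                                   # 1. create the volume
docker run --rm -v "$PWD:/src:ro" -v "$vol:/workspace" \
  busybox:1.36.1 sh -c 'cp -a /src/. /workspace/'             # 2. seed it from the workspace
docker run --rm -v "$vol:/workspace" -w /workspace \
  alpine:3.20 ls /workspace                                   # 3. job container mounts it
docker volume rm "$vol"                                       # 4. remove it afterwards
```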

Runtime behavior

The Docker provider:

  1. Pulls (or verifies) the image — checks if the image exists locally; pulls from the registry if missing.
  2. Mounts the workspace — mounts the snapshotted workspace into the container at /workspace using the configured mount mode.
  3. Sets working directory — the container's working directory is /workspace.
  4. Injects variables — passes job variables: as container environment variables. Loom also sets LOOM_PROVIDER=docker automatically.
  5. Executes scripts — runs sh -lc "<instrumented script>" inside the container (or sh -x -c when debug tracing is enabled via LOOM_DEBUG_TRACE).
  6. Captures output — records stdout, stderr, and exit code into structured runtime events.
  7. Extracts artifacts — if the job defines artifacts, matching files are copied from the workspace to .loom/.runtime/logs/<run_id>/jobs/<job_id>/artifacts/ after execution.
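
Because LOOM_PROVIDER is injected automatically (step 4), a job script can branch on it; a minimal sketch:

```yaml
script:
  - if [ "$LOOM_PROVIDER" = "docker" ]; then echo "in a container"; else echo "on the host"; fi
```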

Image builds

Jobs can build their own image by setting image as a mapping with name and a build block:

build-and-run:
  stage: ci
  target: linux
  image:
    name: my-org/custom-ci
    build:
      context: .
      dockerfile: Dockerfile.ci
  script:
    - echo "running in the freshly built image"

Build block fields:

  • context (required): path to the build context, relative to the workflow file.
  • dockerfile (required): path to the Dockerfile; must be within the build context.
  • output (optional): a Docker BuildKit --output spec, e.g. type=docker,dest=/tmp/image.tar.
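
Putting the optional output field together with the required fields (all values are illustrative):

```yaml
image:
  name: my-org/custom-ci
  build:
    context: .
    dockerfile: Dockerfile.ci
    # export the built image as a tarball instead of loading it into the daemon
    output: type=docker,dest=/tmp/image.tar
```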

If image.build is present and the script is empty, the SDK backend builds the image and skips container execution.

Sidecar services

When a job defines services, the Docker provider runs sidecar containers alongside the main job container.

Lifecycle

  1. Network creation — a dedicated Docker network named loom-job-net-<job-id> is created.
  2. Service start — each service container is created and started on that network. If the service includes an alias, it is registered as a network alias so the main container can reach it by hostname.
  3. Readiness wait — the runtime polls each service container for up to 35 seconds (100 ms intervals). If the container defines a Docker health check, the runtime waits for healthy status; otherwise it waits until the container is running.
  4. Main container execution — the job's main container runs on the same network. It can reach services by image name or alias.
  5. Cleanup — after the main container exits, all service containers are force-removed and the job network is deleted, regardless of job outcome.

Example

integration:
  stage: ci
  target: linux
  image: node:20-alpine
  services:
    - name: postgres:16
      alias: db
      variables:
        POSTGRES_DB: testdb
        POSTGRES_USER: runner
        POSTGRES_PASSWORD: secret
  script:
    - npm run test:integration

Service limitations

  • No health-check enforcement for images without HEALTHCHECK. If a service needs startup time, add readiness polling in your script (e.g. wait for a TCP port).
  • No workspace mounts on services. Only the main job container receives the workspace mount.
  • Schema-recognized but unsupported subkeys: the docker, kubernetes, and pull_policy service subkeys are recognized by the schema but not yet implemented; using them produces a validation error.
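
For the first limitation, a TCP readiness poll can go at the top of the job script. A sketch, assuming the postgres example above (alias db, port 5432) and that nc is available in the job image:

```shell
# Wait for host:port to accept TCP connections; tries * 0.1s is the timeout budget
wait_for_tcp() {
  host="$1"; port="$2"; tries="${3:-300}"
  while [ "$tries" -gt 0 ]; do
    nc -z "$host" "$port" 2>/dev/null && return 0
    tries=$((tries - 1))
    sleep 0.1
  done
  echo "timed out waiting for $host:$port" >&2
  return 1
}
# In the integration job above: wait_for_tcp db 5432 || exit 1
```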

Job artifacts

Jobs can declare artifacts to extract specific files from the workspace after execution. Extracted files are copied to the run's structured log directory alongside other runtime artifacts.

build:
  stage: ci
  target: linux
  image: node:20-alpine
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    exclude:
      - dist/**/*.map
    when: on_success

Extracted artifacts are written to:

.loom/.runtime/logs/<run_id>/jobs/<job_id>/artifacts/

When at least one file is extracted, an archive is also produced at jobs/<job_id>/artifacts/artifacts.tar.gz.

For the full artifacts schema, see Workflow syntax → artifacts.

Workspace mount and common gotchas

Mount path

The workspace is mounted at /workspace. All job scripts execute from that directory.

File ownership

Containers typically run as root. Files created inside the container will be owned by root on the host. If host tooling needs to modify those files afterward, you may need to fix permissions:

script:
  - make build
  - chown -R "$(stat -c '%u:%g' .)" /workspace/dist

Line endings

If you see bash: $'\r': command not found, your scripts have Windows line endings (CRLF). Convert them to LF before running.
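
One way to normalize the endings, assuming GNU sed (on macOS, use sed -i ''):

```shell
# Demo: a script saved with CRLF endings, fixed in place by stripping the trailing \r
printf 'echo "ok"\r\n' > crlf-demo.sh
sed -i 's/\r$//' crlf-demo.sh
sh crlf-demo.sh   # prints "ok"
```

dos2unix crlf-demo.sh does the same thing where that tool is installed.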

Security considerations

  • No implicit host environment inheritance. Docker jobs do not inherit your local shell environment. Put required values in workflow/job variables: so they are explicitly passed into the container.
  • Secret visibility. Any variable passed into a Docker job is potentially visible in logs if echoed. Avoid printing secrets; follow your organization's secret-handling policy.
  • File-based secrets are bind-mounted read-only into the container at /tmp/loom-secret-<name>-<ordinal>, and the corresponding environment variable is rewritten to point to the container path.
  • For safe log-sharing practices, see What to share.

Confirming Docker provider ran

Verify from runtime artifacts, not console output:

  1. Open .loom/.runtime/logs/<run_id>/pipeline/manifest.json — find the job pointer.
  2. Open .loom/.runtime/logs/<run_id>/jobs/<job_id>/manifest.json — find the provider system section.
  3. Check system/provider/events.jsonl for provider selection, image name, and container details.
  4. The job's variables include LOOM_PROVIDER=docker when the Docker provider is active.

For the full log structure, see the Runtime logs contract.

Troubleshooting

  • docker daemon unavailable: the daemon is not running. Start Docker Desktop (or the daemon), then retry.
  • Job ran on the host unexpectedly: image: is missing. Run loom compile and verify the resolved job has an image: value.
  • Image pull fails: wrong image name, an auth issue, or a registry rate limit. Confirm the image name/tag and registry credentials.
  • Permission denied on workspace files after a run: the container created files as root. Fix ownership in the script or use a non-root image.
  • docker workspace volume create failed: volume creation failed in ephemeral_volume mode. Verify the Docker daemon is running and allowed to create volumes.
  • docker workspace volume cleanup failed: volume removal failed during cleanup. Check Docker daemon health and volume permissions.

Limitations

  • Docker provider is selected per job based on the resolved image: value.
  • Local runtime behavior may differ from future remote execution modes.