# Stage 0: workspace quality gate
Stage 0 is Loom's pattern for catching breakage early. This workflow runs locally to validate your workspace before changes reach CI or other consumers. It produces deterministic, pointer-first artifacts so you can jump directly to the failing unit instead of scrolling through logs.
```shell
loom run --local --workflow .loom/workflow.yml
```
This page walks through the actual Stage 0 workflow used in the Loom repository itself.
## Prerequisites
| Requirement | Why |
|---|---|
| Loom installed | Runs the workflow via loom run --local. See CLI → run. |
| Docker available | The workflow builds and runs Docker images. The Docker daemon must be reachable. |
| Linux target | Local execution targets linux in the current release. On macOS or Windows, run inside a Linux environment (VM, container, or CI runner). |
## The workflow

This is the canonical `.loom/workflow.yml` from the Loom repository:
```yaml
version: v1

stages:
  - deps
  - ci

default:
  cache:
    - name: pnpm
      key:
        prefix: loom-cache
        files:
          - pnpm-lock.yaml
      paths:
        - .pnpm-store
      policy: pull-push
      when: always
    - name: go
      key:
        prefix: loom-cache
        files:
          - go.work
          - go.work.sum
          - apps/**/go.sum
          - libs/**/go.sum
      paths:
        - .go
      policy: pull-push
      when: always
  variables:
    PNPM_STORE_DIR: .pnpm-store
    GOPATH: .go
    GOCACHE: $GOPATH/cache
    GOMODCACHE: $GOPATH/pkg/mod
  target: linux

build-nix-image:
  stage: deps
  target: linux
  image:
    name: loom:nix-local
    build:
      context: .
      dockerfile: nix/Dockerfile
  cache: []
  script:
    - echo "built by loom"

test-services:
  stage: deps
  target: linux
  cache: []
  image: postgres:16
  secrets:
    POSTGRES_PASSWORD:
      ref: keepass://beepbeepgo/loom-platform/loom#services/test/db:password
      file: false
  services:
    - name: postgres:16
      variables:
        POSTGRES_DB: testdb
        POSTGRES_USER: runner
        POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - >-
      i=0; until pg_isready -h postgres -U runner -d testdb;
      do i=$((i+1)); [ "$i" -ge 30 ] && echo "postgres not ready after 30s" && exit 1; sleep 1; done
    - psql -h postgres -U runner -d testdb -c "SELECT 1;"

install-deps:
  stage: ci
  image: loom:nix-local
  variables:
    PNPM_STORE_DIR: .pnpm-store
  script:
    - pnpm i --frozen-lockfile
    - pnpm nx run-many -t go-deps-verify

check-task:
  stage: ci
  image: loom:nix-local
  needs:
    - install-deps
  variables:
    PNPM_STORE_DIR: .pnpm-store
  cache:
    - name: pnpm
      policy: pull
    - name: go
      policy: pull
  script:
    - pnpm i --frozen-lockfile
    - task check

check-pnpm:
  stage: ci
  image: loom:nix-local
  needs:
    - install-deps
  variables:
    PNPM_STORE_DIR: .pnpm-store
  cache:
    - name: pnpm
      policy: pull
    - name: go
      policy: pull
  script:
    - pnpm i --frozen-lockfile
    - pnpm nx run loom-platform:check
```
## What each job does

### Stage: deps
| Job | Purpose |
|---|---|
| build-nix-image | Builds the loom:nix-local Docker image containing the workspace's deterministic Nix toolchain. Uses the image.build mapping form to build the image before execution. Disables caching (cache: []) since image builds don't benefit from workspace caching. |
| test-services | Validates sidecar service infrastructure by running a Postgres health check. Uses services to start a Postgres container alongside the job, and secrets to resolve the database password from a KeePass vault at runtime. |
### Stage: ci
| Job | Purpose |
|---|---|
| install-deps | Installs pnpm dependencies and verifies Go module checksums. Runs in the Nix toolchain image. Populates the shared cache that downstream jobs pull from. |
| check-task | Runs task check (the Taskfile quality gate). Depends on install-deps via needs. Pulls cache in read-only mode (policy: pull) to avoid redundant saves. |
| check-pnpm | Runs the Nx check target for the workspace. Same dependency and cache strategy as check-task. |
## Features demonstrated
This workflow uses most of Loom's v1 feature set:
| Feature | How it's used | Learn more |
|---|---|---|
| Stages | deps runs first, then ci | Syntax → stages |
| Default configuration | default sets shared cache, variables, and target for all jobs | Syntax → default |
| Named multi-cache | Two caches (pnpm, go) with independent keys and paths | Cache |
| Cache policy | deps jobs use pull-push; ci jobs use pull only | Syntax → cache:policy |
| Glob patterns in cache keys | `apps/**/go.sum` and `libs/**/go.sum` match Go modules anywhere in the tree | Syntax → cache:key:files |
| Image build | build-nix-image uses the image.build mapping to build before running | Docker provider |
| Services | test-services runs Postgres as a sidecar container | Syntax → services |
| Secrets | test-services resolves a password from KeePass at runtime | Secrets |
| DAG dependencies | check-task and check-pnpm use needs to depend on install-deps | Syntax → needs |
| Variable references | GOCACHE: $GOPATH/cache references another variable | Variables |
| Provider routing | Jobs that declare image run via the Docker provider; a job with no image would run on the host | Providers |
## Caching strategy
The workflow is designed so repeat runs are significantly faster than cold runs:
| Cache | Key derived from | Paths cached | Strategy |
|---|---|---|---|
| pnpm | pnpm-lock.yaml | .pnpm-store | Dependency changes invalidate the key. install-deps populates; the downstream check jobs pull only. |
| go | go.work, go.work.sum, `apps/**/go.sum`, `libs/**/go.sum` | .go | Go module changes invalidate the key. Same populate/pull pattern. |
The ci stage jobs set policy: pull to avoid redundant cache saves — the cache was already populated by install-deps. This reduces I/O on parallel jobs.
If you change inputs that should invalidate the cache (install flags, toolchain versions, additional build outputs), update cache.key.files accordingly.
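For example, keying the go cache on a toolchain version file would make toolchain bumps invalidate cached modules. This is a hypothetical extension — `.tool-versions` is illustrative and not part of the repository:

```yaml
cache:
  - name: go
    key:
      prefix: loom-cache
      files:
        - go.work
        - go.work.sum
        - apps/**/go.sum
        - libs/**/go.sum
        - .tool-versions   # hypothetical extra input: toolchain version pin
```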
## Why this pattern matters
Stage 0 catches breakage before it propagates. Compared to ad-hoc local scripts:
| Ad-hoc scripts | Stage 0 with Loom |
|---|---|
| Different on every developer's machine | Repeatable workflow checked into the repo |
| Failures produce raw console output | Failures produce structured artifacts (receipts, manifests, events) |
| Debugging means scrolling through logs | Debugging means following pointers to the exact failing step |
| Cache strategy is manual or nonexistent | Cache strategy is declarative and reproducible |
## Diagnosing failures
When a Stage 0 run fails, follow pointers — not logs:
- Pipeline summary — check `pipeline/summary.json` for overall status and exit code.
- Pipeline manifest — check `pipeline/manifest.json` to identify which job failed.
- Job manifest — check `jobs/<job_id>/manifest.json` for the failing step or system section.
- Failing events — open the `events.jsonl` file pointed to by the manifest for the specific failure details.
All artifacts are under `.loom/.runtime/logs/<run_id>/`.
For the full triage process, see:
- Diagnostics ladder — step-by-step failure triage
- What to share — what to include when asking for help
- Runtime logs contract — artifact layout and field reference
## Running the gate

Validate the workflow structure:

```shell
loom check
```

Run the full Stage 0 pipeline:

```shell
loom run --local --workflow .loom/workflow.yml
```
## Planned
- Remote runners — run the same workflow on remote infrastructure without bespoke wrappers.
- Richer structured events — more granular step-level and system-section events for deeper diagnostics.
## Next steps
- Getting Started → Hello Loom — write your first workflow
- Syntax (v1) — full keyword reference
- Cache — cache key design and operational constraints
- Docker provider — image builds, workspace mounting, and sidecar services
- Secrets — runtime secret resolution