Workflow YAML syntax reference (v1)
Loom workflows are YAML files validated as schema v1. This document lists
the configuration options for the Loom .loom/workflow.yml file — the file where
you define the jobs that make up your workflow.
- Default location: `.loom/workflow.yml`
- Validate: `loom check`
- Compile (resolve includes + templates): `loom compile --workflow .loom/workflow.yml`
- Run (local): `loom run --local --workflow .loom/workflow.yml`

When you are editing your workflow file, validate it with `loom check`.
Loom workflow configuration uses YAML formatting, so the order of keys is not important unless otherwise specified. YAML anchors, aliases, and explicit tags are rejected — Loom enforces deterministic YAML to keep workflows auditable and diff-friendly.
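For example, a workflow that uses an anchor and alias fails validation (illustrative snippet; the key names are hypothetical):

```yaml
# Rejected: YAML anchors (&), aliases (*), and merge keys are not allowed.
defaults: &defaults
  target: linux

check:
  <<: *defaults        # alias/merge key — rejected by Loom's YAML loader
  stage: ci
  script:
    - echo "this file does not validate"
```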
Keywords
A Loom workflow configuration includes:
Global keywords that configure workflow structure and behavior:

| Keyword | Description |
|---|---|
| `version` | Schema version. Must be `v1`. |
| `stages` | The names and order of the workflow stages. |
| `include` | Import configuration from local template YAML files. |
| `workflow` | Control whether the workflow runs (pipeline gating). |
| `default` | Default values for job keywords. |
| `variables` | Define default variables for all jobs in the workflow. |

Jobs configured with job keywords:

| Keyword | Description |
|---|---|
| `stage` | The stage a job belongs to. |
| `target` | Execution target platform. |
| `script` | List of shell commands to execute. |
| `extends` | Inherit configuration from a template job. |
| `needs` | Declare DAG-style dependencies between jobs. |
| `image` | Run the job inside a Docker container. |
| `runner_pool` | Select a runner pool for remote execution. |
| `variables` | Job-scoped variables that override defaults. |
| `secrets` | Declare secrets resolved at runtime. |
| `services` | Sidecar containers that run alongside the job. |
| `invariant` | Policy configuration for job-level checkpoints. |
| `cache` | Cache directories between runs for performance. |
| `artifacts` | Extract files from the workspace after execution. |
Global keywords
Some keywords are not defined inside a job. These keywords control workflow structure or import additional configuration.
version
Use version to declare which schema version this workflow file conforms to.
The validator uses this value to select the correct set of validation rules.
Keyword type: Global keyword.
Supported values:
- Must be exactly the string `v1`.
Example of version:
version: v1
Additional details:
- `version` is required. Omitting it produces a schema error.
- Future schema versions will ship with explicit migration notes and will use a different version string (e.g. `v2`). Within v1, changes are intended to be additive and backwards-compatible.
stages
Use stages to define the ordered list of stages that jobs belong to.
Jobs in the same stage run in parallel. Jobs in the next stage run after all jobs
in the previous stage complete successfully.
Keyword type: Global keyword.
Supported values:
- A non-empty YAML sequence of stage name strings.
- Each stage name must match `^[a-z][a-z0-9_-]{0,31}$` (lowercase, starts with a letter, up to 32 characters, using only letters, digits, hyphens, and underscores).
- Stage names must be unique — duplicates are rejected.
Example of stages:
stages:
- deps
- ci
In this example:
- All jobs in `deps` execute in parallel.
- If all jobs in `deps` succeed, the `ci` jobs execute in parallel.
- If all jobs in `ci` succeed, the workflow is marked as passed.
If any job fails, the workflow is marked as failed and jobs in later stages do not start.
Additional details:
- `stages` is required. Omitting it produces a schema error.
- The order of items in `stages` defines the execution order for jobs. Jobs in the same stage can run concurrently; jobs in the next stage wait for all jobs in the previous stage to complete.
- If a stage is defined but no jobs reference it, the stage is silently ignored.
- Stage names are validated against the regex pattern at schema time. Invalid names produce an error like: `rename stage to match ^[a-z][a-z0-9_-]{0,31}$`.
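To illustrate the pattern (stage names here are hypothetical), the first two entries below are rejected and the last is accepted:

```yaml
stages:
  - Build        # rejected: uppercase letter
  - unit tests   # rejected: contains a space
  - ci           # accepted
```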
Related topics:
- `stage` (job keyword) to assign a job to a stage.
include
Use include to import template YAML files into your workflow configuration.
You can split a large workflow into multiple files to increase readability or
reduce duplication across jobs.
Included files are merged with the main workflow file. Includes are resolved
before validation and before template extension (`extends`).
Keyword type: Global keyword.
Supported values: A YAML sequence of include entries. Each entry supports only the `include:local` subkey.
Example of include:
include:
- local: .loom/templates/common.yml
- local: .loom/templates/languages/node.yml
Additional details:
- Include files can themselves contain `include` entries (nested includes). Cycles are detected and rejected with a descriptive error showing the include chain.
- Included files are merged in order: earlier includes are applied first, then later includes, then the main workflow file. If the same key appears in multiple files, the last definition wins (for root-level scalars and sequences) or keys are merged recursively (for mappings).
- Loom resolves includes, then applies `default` to all jobs, then resolves `extends` chains.
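As a sketch of this merge order (the template contents are hypothetical):

```yaml
# .loom/templates/common.yml — applied first
variables:
  NODE_ENV: development

# .loom/workflow.yml — applied last, so its value wins
include:
  - local: .loom/templates/common.yml
variables:
  NODE_ENV: production   # overrides NODE_ENV from common.yml
```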
Related topics:
- Includes and templates for patterns and recommended directory structure.
include:local
Use include:local to include a template file from the same repository.
Keyword type: Global keyword.
Supported values:
- A file path string that:
  - Starts with `.loom/templates/`
  - Ends with `.yml` or `.yaml`
  - Does not contain `..` (path traversal is rejected)
Example of include:local:
include:
- local: .loom/templates/common.yml
Multiple includes:
include:
- local: .loom/templates/common.yml
- local: .loom/templates/languages/node.yml
- local: .loom/templates/jobs/lint.yml
Additional details:
- The path restriction (`.loom/templates/` prefix, no `..`) exists for security and reviewability: all included files are checked into the repo under a known directory, making them easy to audit and harder to smuggle from unrelated paths.
- If you want to use a template from elsewhere, vendor it into `.loom/templates/` and review it like any other code change.
- Each include entry must be a mapping with only the key `local`. Other keys (such as `remote`) are not yet supported and will produce an error.
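For example (the paths shown are hypothetical):

```yaml
include:
  - local: .loom/templates/common.yml           # accepted
  - local: templates/outside.yml                # rejected: missing .loom/templates/ prefix
  - local: .loom/templates/../../evil.yml       # rejected: contains ".."
  - remote: https://example.com/template.yml    # rejected: only "local" is supported
```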
Related topics:
- Includes and templates for recommended structure and debugging resolution.
workflow
Use workflow to control whether the workflow should run at all. Today, workflow
supports only the rules subkey for conditional gating.
Keyword type: Global keyword.
Supported values:
- A mapping with the optional key `rules`. No other keys are allowed under `workflow`.
Example of workflow:
workflow:
rules:
- if: $CI_COMMIT_BRANCH == "main"
Related topics:
- Rules for details on rule expression syntax and behavior.
workflow:rules
Use workflow:rules to define conditions that determine whether the workflow runs.
Keyword type: Global keyword.
Supported values:
- A YAML sequence of rule mappings. Each rule must contain:
  - `if`: a non-empty string condition expression.
Example of workflow:rules:
workflow:
rules:
- if: $CI_COMMIT_BRANCH == "main"
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
Additional details:
- Each rule entry must be a mapping with only the key `if`. Other keys (such as `when` or `changes`) are not yet supported at the workflow level.
- The `if` value must be a non-empty string. Empty strings are rejected.
- Rule expression evaluation semantics are planned — today the schema validates the shape but the runtime behavior of `workflow.rules` is evolving. See Rules.
default
Use default to set default values for job keywords. Each default is applied to every
job that doesn't already define that keyword. This lets you avoid repeating the same
configuration across many jobs.
Default configuration is merged into each job mapping before validation. Job values
override defaults. For nested mappings (like variables and cache), keys are merged
recursively — the job provides overrides and the default provides fallbacks.
Keyword type: Global keyword.
Supported values: A mapping. The following keys are allowed under default:
| Key | Description |
|---|---|
| `target` | Default execution target for all jobs. |
| `image` | Default Docker image for all jobs. |
| `runner_pool` | Default runner pool for all jobs. |
| `variables` | Default variables merged into all jobs. |
| `cache` | Default cache configuration for all jobs. |
| `services` | Default sidecar services for all jobs. |
| `invariant` | Default invariant policy for all jobs. |
Example of default:
default:
target: linux
image: alpine:3.20
variables:
PNPM_STORE_DIR: .pnpm-store
cache:
paths: [.pnpm-store, .nx/cache]
policy: pull-push
when: always
In this example:
- All jobs default to `target: linux`, so individual jobs can omit `target`.
- All jobs default to `image: alpine:3.20` unless they set their own `image`.
- All jobs inherit `PNPM_STORE_DIR` unless they override it in their own `variables`.
- All jobs inherit the cache configuration unless they override `cache`.
Additional details:
- `default` is optional. If omitted, jobs must specify all required keys themselves.
- `default.target` can satisfy the required `target` key for non-template jobs. If `default.target` is set to `linux`, jobs may omit `target` and inherit the default.
- Default and job configuration do not concatenate — they merge. If the job already has a keyword defined, the job value takes precedence over the default for scalars and sequences. For mappings, keys merge recursively.
- Unknown keys under `default` produce an error: `remove unknown default key; allowed keys are target, image, runner_pool, variables, invariant, cache, services`.
variables
Use variables to define default variables available to all jobs. Variables are
key/value string pairs that are injected into the job environment as shell
environment variables.
Variables defined at the top level act as defaults. Each default variable is available in every job, except when the job already has a variable defined with the same name — the job variable takes precedence.
Keyword type: Global keyword.
Supported values: A YAML mapping of variable name/value pairs:
- Variable names must match `^[A-Z_][A-Z0-9_]*$` (uppercase letters, digits, and underscores; must start with a letter or underscore).
- Values must be strings.
Example of variables:
variables:
PNPM_STORE_DIR: .pnpm-store
NODE_ENV: production
Additional details:
- Top-level `variables` and `default.variables` both provide default values to jobs. The merge happens as part of the `default` merge process — see `default`.
- Variable values are strings only. Lists, mappings, numbers, and booleans are not supported as variable values and will produce a schema error.
- Variables can reference other variables using `$VARIABLE` syntax in their values (e.g. `GOCACHE: $GOPATH/cache`), but expansion behavior depends on the runtime provider.
Related topics:
- Variables for precedence rules, provider behavior, and troubleshooting.
- Concepts → Variables for the conceptual model.
- Predefined CI/CD variables for `CI_*` and `LOOM_*` variables injected by the runtime.
Jobs
Any top-level key that is not a global keyword (version, stages, include,
workflow, variables, default) is treated as a job definition. The key must
be a valid job name (see Job naming).
A Loom workflow must contain at least one job. Jobs define the actual work — the commands to run, the environment to run them in, and how they relate to other jobs.
Job naming
Job names must match `^\.?[a-z][a-z0-9_-]{0,63}$`:
- Lowercase letters, digits, hyphens, and underscores only.
- Must start with a lowercase letter (or `.` for template jobs).
- Maximum 64 characters.
Names starting with . are template jobs. Template jobs are not executed
directly — they exist to provide reusable configuration via extends.
Example of job naming:
# Regular jobs (executed)
build-image:
stage: deps
target: linux
script:
- docker build -t myapp .
# Template job (not executed, used via extends)
.node-base:
image: node:20
variables:
NODE_ENV: production
Non-template job requirements
Non-template jobs (names that do not start with .) must include three required
keys:
| Required key | Description |
|---|---|
| `stage` | Must reference a stage declared in `stages`. |
| `target` | Must be `linux` (the only supported target in the current release). |
| `script` | A non-empty list of shell command strings. |
If default.target is set, jobs may omit target and inherit the default.
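For example, this job omits `target` and inherits it from `default.target`:

```yaml
default:
  target: linux

check:
  stage: ci          # target omitted — inherited from default.target
  script:
    - echo "runs on linux"
```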
Example of a non-template job:
check:
stage: ci
target: linux
script:
- echo "running checks"
Template job requirements
Template jobs (names starting with .) have relaxed requirements. They must include
at least one of:
- `script` — so they can provide runnable commands to inheriting jobs.
- `extends` — so they can chain to another template.
Example of a template job:
.base:
image: alpine:3.20
variables:
MODE: default
This template provides image and variables but has neither script nor extends.
This would produce a schema error: add at least one of script or extends for template jobs.
To fix it, add a script:
.base:
image: alpine:3.20
variables:
MODE: default
script:
- echo "base"
Job keywords
The keywords below are valid inside a job mapping. Each section describes what the schema validator enforces and how the keyword affects runtime behavior.
stage
Use stage to define which stage a job runs in. Jobs in the same stage
can execute in parallel.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A string that must match one of the stage names declared in `stages`.
Example of stage:
stages:
- build
- test
compile:
stage: build
target: linux
script:
- make build
unit-tests:
stage: test
target: linux
script:
- make test
In this example, compile runs first (in the build stage). After it succeeds,
unit-tests runs (in the test stage).
Additional details:
- `stage` is required for non-template jobs. Omitting it produces: `add required key stage for this job`.
- If the value doesn't match a declared stage, the error is: `set stage to a declared stage from /stages; "xyz" is not declared`.
- Template jobs may omit `stage`. The inheriting job must provide it.
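For example, a template without `stage` can be inherited by a job that supplies it (job names here are illustrative):

```yaml
.lint-base:
  script:
    - npm run lint

lint:
  extends: .lint-base
  stage: ci          # the inheriting job supplies the required stage
  target: linux
```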
target
Use target to specify the execution platform for a job.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- Must be exactly `linux`. This is the only supported target in the current release.
Example of target:
check:
stage: ci
target: linux
script:
- echo "running on linux"
Additional details:
- `target` is required for non-template jobs, unless `default.target` is set.
- If you're on macOS or Windows, use `loom run --local` inside a Linux environment (VM, container, or CI runner) since the local executor targets Linux.
- Any value other than `linux` produces: `set target to "linux" (MVP currently supports linux only)`.
- Additional target platforms are planned for future releases.
script
Use script to specify the shell commands the runner executes for a job.
Keyword type: Job keyword. You can use it only as part of a job or in a template
that provides it via extends.
Supported values:
- A non-empty YAML sequence of non-empty strings.
- Each entry is a single shell command — embedded newlines are not allowed.
Example of script:
check:
stage: ci
target: linux
script:
- pnpm install --frozen-lockfile
- pnpm nx run-many -t lint,test
- echo "all checks passed"
Additional details:
- `script` is required for non-template jobs. Omitting it produces: `add required key script as a non-empty string sequence`.
- Each command must be a scalar string. YAML block scalars (multiline `|` or `>`) that produce embedded newlines in a single entry are rejected: `use one command per script entry; remove embedded newlines`.
- Empty strings and whitespace-only strings are rejected: `replace empty script command with a non-empty string`.
- Commands execute sequentially. If any command exits with a non-zero code, the job fails and remaining commands do not run.
- `before_script` and `after_script` sections are planned but not yet supported.
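For example, the block scalar below collapses two commands into one entry with an embedded newline and is rejected; the second form is the accepted equivalent:

```yaml
# Rejected: one script entry contains embedded newlines
bad:
  stage: ci
  target: linux
  script:
    - |
      echo "one"
      echo "two"

# Accepted: one command per entry
good:
  stage: ci
  target: linux
  script:
    - echo "one"
    - echo "two"
```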
extends
Use extends to reuse configuration from a template job. The template's keys are
merged into the inheriting job, with the inheriting job's values taking precedence.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A single string naming a template job. The template name must start with `.` and must exist in the workflow (either directly or via `include`).
Example of extends:
.node-base:
image: node:20
variables:
NODE_ENV: production
lint:
extends: .node-base
stage: ci
target: linux
script:
- npm run lint
test:
extends: .node-base
stage: ci
target: linux
script:
- npm test
In this example, both lint and test inherit image: node:20 and the NODE_ENV
variable from .node-base. Each job defines its own script.
Additional details:
- Merge behavior:
  - Scalars and sequences (strings, numbers, lists): the child value replaces the parent value entirely. For example, if the parent has `script: ["echo base"]` and the child has `script: ["echo child"]`, the result is `["echo child"]`.
  - Mappings (like `variables`, `cache` in mapping form): keys merge recursively. The parent provides defaults; the child overrides specific keys.
- Only single inheritance is supported. `extends` must be a single string, not a list. Multi-template composition is planned.
- Cycles are detected and rejected. If `.a` extends `.b` and `.b` extends `.a`, the validator reports an error.
- Templates can extend other templates (chaining). The chain is resolved recursively.
- To verify the resolved configuration after template merging, run: `loom compile --workflow .loom/workflow.yml`
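For the `lint` job in the example above, the merged result would look roughly like this (illustrative; the actual `loom compile` output format may differ):

```yaml
lint:
  stage: ci
  target: linux
  image: node:20          # inherited from .node-base
  variables:
    NODE_ENV: production  # inherited from .node-base (mapping merge)
  script:
    - npm run lint        # child sequence replaces any parent script
```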
Related topics:
- Includes and templates for patterns, merge semantics, and debugging tips.
needs
Use needs to declare explicit dependencies between jobs. With needs, a job can
start as soon as the jobs it depends on complete, without waiting for the entire
previous stage to finish. This enables DAG-style (directed acyclic graph) execution.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- A YAML sequence of job name strings.
Example of needs:
stages:
- deps
- ci
install:
stage: deps
target: linux
script:
- pnpm install
lint:
stage: ci
target: linux
needs:
- install
script:
- pnpm run lint
test:
stage: ci
target: linux
needs:
- install
script:
- pnpm test
In this example, both lint and test declare a dependency on install. They start
as soon as install completes, potentially running in parallel with each other.
Additional details:
- The schema accepts `needs` as a valid job key. DAG scheduling semantics are evolving — today, stage ordering is the primary execution model. Do not rely on `needs` to change execution order until DAG scheduling is fully implemented.
- Validation of `needs` values (checking that referenced job names exist) is planned.
- Referenced jobs should be in the same or earlier stages.
image
Use image to run a job inside a Docker container. When image is set, Loom uses
the Docker provider instead of the Host provider, executing the job's script
commands inside the specified container image.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- A non-empty string specifying the Docker image name, including optional registry path and tag:
  - `<image-name>` (uses the `latest` tag)
  - `<image-name>:<tag>`
- A mapping that names the image and optionally describes how to build it locally before execution.
The following subkeys are available in the image mapping:
| Subkey | Description |
|---|---|
| `image:name` | Required in mapping form. Docker image reference to run for the job. |
| `image:build` | Optional local build configuration to produce the image before job execution. |
Example of scalar image:
stages:
- ci
check:
stage: ci
target: linux
image: node:20-alpine
script:
- node --version
- npm test
This example runs the check job inside the node:20-alpine container.
Example without image (Host provider):
check:
stage: ci
target: linux
script:
- echo "runs directly on the host"
Mapping form:
check:
stage: ci
target: linux
image:
name: loom/nix-local
build:
context: .
dockerfile: Dockerfile.ci
output: type=docker,dest=loom.tar
script:
- echo "building and running in loom/nix-local"
- `name`: non-empty Docker image reference (same semantics as the scalar form).
- `build` (optional): describes a local build to run before the job.
  - `context` and `dockerfile` must be non-empty strings. They are interpreted relative to the workflow file and mirror `docker build`'s context path and Dockerfile flag.
  - `output` is optional; when present it is passed through to `docker build --output` (e.g. `type=docker,dest=/tmp/my-image.tar`). The validator enforces the mapping shape, so `loom check` accepts this form; see `image:build` for build behavior.
Additional details:
- When `image` is set, the Docker provider mounts the workspace into the container and executes commands there. When `image` is not set, the Host provider runs commands directly on the machine.
- The image must be pullable by the Docker daemon on the machine running `loom run`. For local images (not on a registry), build them first — see the Docker provider for details.
- Image pull policy, authentication, and registry configuration are planned.
- An empty string for `image` in `default` produces: `set default image to a non-empty string`.
- `image.build` is supported in the Docker provider. The image is built locally before the job container starts.
- The mapping shape is executor-neutral; future remote or Kubernetes executors can adopt the same `name`/`build` spec when they start building container images.
Related topics:
- Docker provider for runtime behavior, workspace mounting, and troubleshooting.
- Host provider for behavior when `image` is not set.
image:name
Use image:name to specify the Docker image reference when image is written as a
mapping.
Keyword type: Job keyword (image subkey).
Supported values:
- A non-empty string using the same image reference rules as scalar `image`.
Example of image:name:
image:
name: ghcr.io/acme/build:latest
Additional details:
- `name` is required when `image` uses the mapping form. Omitting it produces: `add required key name for image mapping`.
- Use the scalar form when you only need an image reference and do not need `image:build`.
image:build
Use image:build to describe a local image build that runs before the job container
starts.
Keyword type: Job keyword (image subkey).
Supported values:
- A mapping with the following subkeys:
| Subkey | Description |
|---|---|
| `image:build:context` | Required. Build context path passed to the Docker build. |
| `image:build:dockerfile` | Required. Dockerfile path to use for the build. |
| `image:build:output` | Optional build output configuration forwarded to the backend. |
Example of image:build:
image:
name: ghcr.io/acme/build:latest
build:
context: .
dockerfile: Dockerfile.ci
output:
type: registry
name: ghcr.io/acme/build:latest
Additional details:
- `build` must be a mapping. Non-mapping values produce: `set image build to a mapping with required keys context and dockerfile`.
- Unknown keys produce: `remove unknown image build key; allowed keys are context, dockerfile, output`.
image:build:context
Use image:build:context to set the build context path for the local image build.
Keyword type: Job keyword (image build subkey).
Supported values:
- A non-empty string.
Example of image:build:context:
image:
name: loom/nix-local
build:
context: .
dockerfile: Dockerfile.ci
Additional details:
- `context` is required when `image:build` is present. Omitting it produces: `add required key context for image build`.
- The path is interpreted relative to the workflow file location.
image:build:dockerfile
Use image:build:dockerfile to choose which Dockerfile the local image build should
use.
Keyword type: Job keyword (image build subkey).
Supported values:
- A non-empty string.
Example of image:build:dockerfile:
image:
name: loom/nix-local
build:
context: .
dockerfile: Dockerfile.ci
Additional details:
- `dockerfile` is required when `image:build` is present. Omitting it produces: `add required key dockerfile for image build`.
- The path is interpreted relative to the workflow file location.
image:build:output
Use image:build:output to pass build output configuration through to the image build
backend.
Keyword type: Job keyword (image build subkey).
Supported values:
- A non-empty string, or
- A YAML mapping.
Example of image:build:output:
image:
name: loom/nix-local
build:
context: .
dockerfile: Dockerfile.ci
output: type=docker,dest=loom.tar
Mapping form:
image:
name: ghcr.io/acme/build:latest
build:
context: .
dockerfile: Dockerfile.ci
output:
type: registry
name: ghcr.io/acme/build:latest
Additional details:
- Invalid values produce: `set image build output to either a non-empty string or mapping`.
- Loom validates only the top-level `output` shape. If you use the mapping form, nested output keys are backend-defined and are passed through as-is.
runner_pool
Use runner_pool to select which runner pool should execute a job. Runner pools
enable routing jobs to different execution environments with different capabilities
or resource constraints.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- A non-empty string identifying the runner pool.
Example of runner_pool:
check:
stage: ci
target: linux
runner_pool: default
script:
- echo "running on default pool"
Additional details:
- The schema accepts `runner_pool` as a valid job key. Runtime routing to named pools and pool capability/constraint enforcement are planned for remote execution support.
- An empty string for `runner_pool` in `default` produces: `set default runner_pool to a non-empty string`.
Job variables
Use job-level variables to define or override variables for a specific job. Job
variables take precedence over workflow-level variables and default.variables for
that job.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values: A YAML mapping of variable name/value pairs:
- Variable names must match `^[A-Z_][A-Z0-9_]*$`.
- Values must be strings.
Example of job variables:
version: v1
stages: [ci]
default:
target: linux
variables:
MODE: default
print-default:
stage: ci
script:
- printf '%s\n' "$MODE"
print-override:
stage: ci
variables:
MODE: job-override
script:
- printf '%s\n' "$MODE"
In this example:
- `print-default` inherits `MODE=default` from `default.variables`.
- `print-override` overrides `MODE` to `job-override` via its own `variables`.
Additional details:
- Job variables and default variables merge — the job provides overrides, the default provides fallbacks. If both the default and the job define `MODE`, the job value wins. If only the default defines `FOO`, the job inherits `FOO`.
- Variable names that don't match the pattern produce: `rename variable key to match ^[A-Z_][A-Z0-9_]*$`.
- Non-string values produce: `set variable value to a string`.
Related topics:
- `variables` (global keyword) for workflow-level defaults.
- Variables for precedence rules and troubleshooting.
- Concepts → Variables for the conceptual model.
secrets
Use secrets to declare sensitive values (passwords, tokens, private keys) that are
resolved at runtime by a secrets provider. Unlike variables, secrets store references
only — the actual value is never written to the workflow file.
Keyword type: Job keyword. You can use it only as part of a job. Secrets are
not allowed in the default section — they must be declared per job.
Supported values: A YAML mapping of secret name/spec pairs:
- Secret names must match `^[A-Z_][A-Z0-9_]*$` (same pattern as variable names).
- Each secret spec is a mapping with the following subkeys:

| Subkey | Description |
|---|---|
| `secrets:ref` | Required. Non-empty reference string identifying the secret in the provider. |
| `secrets:file` | Inject via file path (`true`, default) or environment variable (`false`). |
| `secrets:required` | Fail the job if the secret cannot be resolved (`true`, default). |
Example of secrets:
check:
stage: ci
target: linux
secrets:
DB_PASSWORD:
ref: op://vault/db/password
API_TOKEN:
ref: op://vault/api/token
file: false
required: true
script:
- echo "secrets available"
Additional details:
- A job cannot define the same key in both `variables` and `secrets`. Collisions produce: `remove key collision with variables; a key cannot exist in both variables and secrets`.
- When `file` is `true` (default), the secret value is written to a temporary file and the environment variable is set to the file path. When `file` is `false`, the secret value is injected directly as an environment variable.
- When `required` is `true` (default), a missing or unresolvable secret fails the job. Set `required: false` for optional secrets that may not exist in all environments.
- Secrets are automatically redacted in all runtime output (stdout, stderr, structured events).
- `secrets` is not allowed under `default`. Attempting to set `default.secrets` produces: `remove default.secrets; secrets are job-scoped and must be declared per job`.
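For example, with the default `file: true`, the environment variable holds a file path rather than the value itself, so the script reads the file (a sketch; the secret reference and job are hypothetical):

```yaml
db-check:
  stage: ci
  target: linux
  secrets:
    DB_PASSWORD:
      ref: op://vault/db/password   # file: true is the default
  script:
    - test -s "$DB_PASSWORD"        # $DB_PASSWORD is a path to a non-empty file
    - echo "password file is at $DB_PASSWORD"   # the value never appears in the workflow file
```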
Related topics:
- Secrets for provider configuration and usage patterns.
- Concepts → Secrets for the conceptual model.
secrets:ref
Use secrets:ref to identify which provider-backed secret Loom should resolve for a
given environment variable name.
Keyword type: Job keyword (secret subkey).
Supported values:
- A non-empty string secret reference.
Example of secrets:ref:
secrets:
API_TOKEN:
ref: op://vault/api/token
Additional details:
- `ref` is required for every secret entry. Omitting it produces: `add required key ref with a non-empty string secret reference`.
- An empty or non-string value produces: `set ref to a non-empty string secret reference`.
secrets:file
Use secrets:file to choose whether Loom injects the secret as a temporary file path
or directly as an environment variable value.
Keyword type: Job keyword (secret subkey).
Supported values:
- `true` or `false`.
Example of secrets:file:
secrets:
API_TOKEN:
ref: op://vault/api/token
file: false
Additional details:
- `file` defaults to `true`.
- When `file: true`, Loom writes the secret value to a temporary file and sets the environment variable to that file path.
- When `file: false`, Loom injects the secret value directly into the environment variable.
secrets:required
Use secrets:required to control whether a missing or unresolvable secret should fail
the job.
Keyword type: Job keyword (secret subkey).
Supported values:
- `true` or `false`.
Example of secrets:required:
secrets:
OPTIONAL_LICENSE:
ref: env://OPTIONAL_LICENSE
required: false
Additional details:
- `required` defaults to `true`.
- When `required: false`, Loom allows the job to proceed even if the secret cannot be resolved.
services
Use services to run sidecar containers alongside a job. Services are started before
the job's script executes and are torn down after the job finishes. Common uses
include databases, caches, and other network-accessible dependencies that the job
needs during execution.
Keyword type: Job keyword. You can use it as part of a job or in the
default section.
Supported values:
- A YAML sequence of service entries. Each entry is either:
  - A non-empty image string (shorthand — the image name is used as the service name).
  - A mapping with service subkeys (see below).
The following subkeys are available in a service mapping:
| Subkey | Description |
|---|---|
| `services:name` | Required. Docker image to run as the service container. |
| `services:alias` | Network alias(es) for the service on the job network. |
| `services:entrypoint` | Override the container entrypoint. |
| `services:command` | Override the container command. |
| `services:variables` | Environment variables passed into the service container. |
Example of services (shorthand image strings):
test:
stage: ci
target: linux
image: node:20-alpine
services:
- postgres:16
- redis:7
script:
- npm test
Example of services (mapping form):
integration:
stage: ci
target: linux
image: node:20-alpine
services:
- name: postgres:16
alias: db
variables:
POSTGRES_DB: testdb
POSTGRES_USER: runner
POSTGRES_PASSWORD: secret
- name: redis:7
alias: cache
script:
- npm run test:integration
Additional details:
- `services` is optional. If omitted, the job runs without sidecar containers.
- Services are only meaningful for Docker jobs (those with `image` set). The Docker SDK backend creates a shared network, starts each service container on it, waits for a brief grace period, then runs the main job container on the same network. The job can reach each service by its image name or by any configured `services:alias`.
- Default and job precedence: if `default.services` is set and a job also defines `services`, the job's list replaces the default entirely (no merge). If only the default defines `services`, every job inherits them. To opt a job out of default services, set `services: []` on that job.
- The same replacement (no-merge) behavior applies to `extends`: if a child job defines `services`, the parent's services are replaced, not merged.
- Runtime support: services execution is supported in Docker jobs (those with `image:` set). See Docker provider for details.
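The precedence rules for default services can be illustrated with a sketch (job names, images, and commands here are illustrative):

```yaml
default:
  services:
    - postgres:16        # inherited by every job unless overridden

integration:
  stage: ci
  target: linux
  image: node:20-alpine
  services:              # replaces the default list entirely (no merge)
    - postgres:16
    - redis:7
  script:
    - npm run test:integration

unit:
  stage: ci
  target: linux
  image: node:20-alpine
  services: []           # opts out of the default services
  script:
    - npm run test:unit
```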
Unsupported / deferred subkeys:
The following subkeys are recognized by the validator but not yet supported. Using them produces a schema error explaining they are deferred:
| Subkey | Validator error message |
|---|---|
| docker | service docker is not supported yet |
| kubernetes | service kubernetes is not supported yet |
| pull_policy | service pull_policy is not supported yet |
Related topics:
- Docker provider for sidecar lifecycle details and the SDK backend.
services:name
Use services:name to specify the Docker image to run as a service container.
When using the mapping form, name is the image reference (e.g. postgres:16).
Keyword type: Job keyword (service subkey).
Supported values:
- A non-empty string specifying the Docker image.
Example of services:name:
services:
- name: postgres:16
Additional details:
- `name` is required when the service entry is a mapping. Omitting it produces: `add required key name for each service definition`.
- When the shorthand (scalar) form is used, the image string is treated as the `name`.
services:alias
Use services:alias to assign one or more network aliases to a service container.
Aliases let the job container reach the service using a friendly hostname instead of
the image name.
Keyword type: Job keyword (service subkey).
Supported values:
- A non-empty string. Multiple aliases can be specified as a comma-separated or space-separated list within the string (e.g. `"db, database"`). Duplicate aliases are removed automatically.
Example of services:alias:
services:
- name: postgres:16
alias: db
Additional details:
- If `alias` is omitted, the service is reachable by its image name on the job network.
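A sketch of the multi-alias form described above (alias names are illustrative):

```yaml
services:
  - name: postgres:16
    alias: "db, database"   # reachable as db or database on the job network
```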
services:entrypoint
Use services:entrypoint to override the default entrypoint of the service
container image.
Keyword type: Job keyword (service subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings.
Example of services:entrypoint:
services:
- name: postgres:16
entrypoint:
- docker-entrypoint.sh
- postgres
services:command
Use services:command to override the default command of the service container
image.
Keyword type: Job keyword (service subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings.
Example of services:command:
services:
- name: postgres:16
command:
- "-c"
- "max_connections=200"
services:variables
Use services:variables to pass environment variables into a service container.
Keyword type: Job keyword (service subkey).
Supported values: A YAML mapping of variable name/value pairs following the same
rules as job variables.
Example of services:variables:
services:
- name: postgres:16
variables:
POSTGRES_DB: testdb
POSTGRES_USER: runner
POSTGRES_PASSWORD: secret
invariant
Use invariant to attach policy configuration to a job. Invariants enable
job-level policy checkpoints where decisions can be captured in receipts and
runtime events.
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- A YAML mapping. Use `{}` for an empty invariant.
Example of invariant:
check:
stage: ci
target: linux
invariant: {}
script:
- echo "policy-gated"
Additional details:
- The schema accepts `invariant` as a valid job key. Policy checkpoint semantics and decision capture in receipts/events are planned.
- In `default`, `invariant` must be a mapping. Non-mapping values produce: `set default invariant to a mapping (use {} for empty)`.
cache
Use cache to specify files and directories to cache between workflow runs.
Caching expensive-to-recompute directories (package manager stores, build caches)
can significantly speed up subsequent runs.
Cache can be specified in two forms:
- Mapping form (single cache): a single cache configuration.
- Sequence form (multiple caches): a list of named cache configurations, each operating independently.
To explicitly disable cache for a job (overriding a default), set cache to
null or [].
Keyword type: Job keyword. You can use it only as part of a job or in the
default section.
Supported values:
- A mapping (single cache), or
- A YAML sequence of cache mappings (multiple caches), or
- `null` or `[]` to explicitly disable caching for a job.
The following subkeys are available in a cache mapping:
| Subkey | Description |
|---|---|
| cache:paths | Required. Directories/files to cache. |
| cache:key | Cache key (string or structured mapping). |
| cache:fallback_keys | Fallback keys if primary key misses. |
| cache:policy | When to restore/save (pull, push, pull-push). |
| cache:when | Save cache based on job status. |
| cache:name | Required name when using sequence form. |
| cache:disabled | Disable a named cache entry. |
Example of cache (single cache, mapping form):
check:
stage: ci
target: linux
cache:
key:
prefix: loom-cache
files:
- pnpm-lock.yaml
paths:
- .pnpm-store
- .nx/cache
policy: pull-push
when: always
script:
- pnpm install --frozen-lockfile
- pnpm nx run-many -t check
Example of cache (multiple caches, sequence form):
check:
stage: ci
target: linux
cache:
- name: pnpm
key:
prefix: loom-cache-pnpm
files:
- pnpm-lock.yaml
paths:
- .pnpm-store
policy: pull-push
when: always
- name: go
key:
prefix: loom-cache-go
files:
- go.work
- "**/go.sum"
paths:
- .go
policy: pull-push
when: always
script:
- pnpm install --frozen-lockfile
Example of cache (disable cache for a specific job):
default:
cache:
paths: [.pnpm-store]
policy: pull-push
when: always
build-image:
stage: deps
target: linux
cache: []
script:
- docker build -t myapp .
In this example, `build-image` explicitly disables cache with `cache: []`, overriding the default cache configuration.
Additional details:
- When `cache` is a sequence, each entry must include a unique `name`. Duplicate names are rejected.
- Unknown keys produce: `remove unknown cache key; allowed keys are name, disabled, paths, key, fallback_keys, policy, when`.
- At runtime, cache behavior is recorded in runtime logs as system sections (`cache_restore` and `cache_save`). Follow the Diagnostics ladder to find cache events.
Related topics:
- Cache for key design, template variables, and operational constraints.
- Concepts → Cache for the conceptual model.
- Cache provider for runtime behavior.
- KB → Caching strategies for patterns.
cache:paths
Use cache:paths to specify which directories or files to cache.
Keyword type: Job keyword (cache subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings. Each string is a path relative to the project directory.
Example of cache:paths:
cache:
paths:
- .pnpm-store
- .nx/cache
- node_modules
Additional details:
- `paths` is required when cache is a mapping (unless `disabled: true` is set in sequence form). Omitting it produces: `add required key paths with at least one entry`.
- Paths are relative to the project workspace root.
- For Docker jobs (`image` set), paths refer to directories inside the container's workspace mount. Mismatches between host and container paths are a common source of "cache did nothing" issues.
cache:key
Use cache:key to give each cache a unique identifying key. All runs that produce
the same cache key share the same cached data.
Keyword type: Job keyword (cache subkey).
Supported values:
- A non-empty string (supports template variable placeholders), or
- A mapping with `prefix` and `files` subkeys.
Example of cache:key (string form):
cache:
key: "pnpm-${job_name}-${head_sha}"
paths:
- .pnpm-store
Example of cache:key (mapping form):
cache:
key:
prefix: loom-cache
files:
- pnpm-lock.yaml
paths:
- .pnpm-store
Additional details:
- If `key` is omitted, the runtime uses a default key computation.
- An empty string produces: `replace empty cache key string with a non-empty value`.
- When using the mapping form, the `files` subkey is required.
Template variable placeholders (expanded at runtime when `key` is a string or when used in `prefix`):
| Placeholder | Description |
|---|---|
| ${job_name} | Job name (graph node id). |
| ${job_id} | Job id (same as job name today). |
| ${run_id} | Executor run id. |
| ${pipeline_id} | Executor pipeline id. |
| ${head_sha} | Snapshot HEAD commit SHA. |
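Since placeholders also expand inside the mapping form's prefix, a per-job namespaced key can be sketched like this (the prefix value is illustrative):

```yaml
cache:
  key:
    prefix: "loom-cache-${job_name}"   # expands per job at runtime
    files:
      - pnpm-lock.yaml
  paths:
    - .pnpm-store
```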
cache:key:prefix
Use cache:key:prefix to add a prefix to the cache key computed from
cache:key:files. This lets you namespace cache keys for different
purposes while still keying off file content.
Keyword type: Job keyword (cache subkey).
Supported values:
- A non-empty string. Supports template variable placeholders.
Example of cache:key:prefix:
cache:
key:
prefix: loom-cache-pnpm
files:
- pnpm-lock.yaml
paths:
- .pnpm-store
Additional details:
- `prefix` is optional. If omitted, the key is computed from `files` alone.
- An empty string produces: `replace empty cache key prefix string with a non-empty value`.
cache:key:files
Use cache:key:files to compute a cache key based on the content of specific files.
When any of these files change, a new cache key is generated and a new cache is created.
Keyword type: Job keyword (cache subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings. Each entry can be:
  - A literal file path (e.g. `pnpm-lock.yaml`)
  - A glob pattern using `*`, `?`, or `**` (doublestar recursive semantics)
Example of cache:key:files:
cache:
key:
prefix: loom-cache
files:
- pnpm-lock.yaml
- package.json
paths:
- .pnpm-store
Example with glob patterns:
cache:
key:
prefix: loom-cache-go
files:
- go.work
- go.work.sum
- "**/go.sum"
paths:
- .go
Additional details:
- `files` is required when `cache:key` is a mapping. Omitting it produces: `add required key files with at least one entry to cache key mapping`.
- Glob support (`*`, `?`, `**`) is a Loom extension; many other CI systems do not support glob patterns in cache key files.
- Entries like `"**/go.sum"` match `go.sum` files in any subdirectory.
cache:fallback_keys
Use cache:fallback_keys to specify alternative keys to try if the primary
cache:key doesn't find a cached archive. Keys are tried in order.
Keyword type: Job keyword (cache subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings. Supports template variable placeholders.
Example of cache:fallback_keys:
cache:
key: "pnpm-${head_sha}"
fallback_keys:
- "pnpm-main"
- "pnpm-default"
paths:
- .pnpm-store
In this example, if no cache exists for the current commit SHA, the runtime tries `pnpm-main`, then `pnpm-default`.
cache:policy
Use cache:policy to control when the cache is restored and saved.
Keyword type: Job keyword (cache subkey).
Supported values:
- `pull`: Only restore the cache at job start; never save after the job finishes. Use when many parallel jobs share the same cache and you want to avoid redundant saves.
- `push`: Only save the cache after the job finishes; never restore at job start. Use for jobs that build or populate the cache.
- `pull-push` (default): Restore the cache at job start and save it after the job finishes.
Example of cache:policy:
install:
stage: deps
target: linux
cache:
paths: [.pnpm-store]
policy: pull-push
script:
- pnpm install
lint:
stage: ci
target: linux
cache:
paths: [.pnpm-store]
policy: pull
script:
- pnpm run lint
In this example, `install` both restores and saves the cache (`pull-push`), while `lint` only restores it (`pull`), avoiding a redundant save since the cache contents didn't change.
Additional details:
- Invalid values produce: `set cache policy to one of: pull, push, pull-push`.
cache:when
Use cache:when to control when the cache is saved, based on the job's exit status.
Keyword type: Job keyword (cache subkey).
Supported values:
- `on_success` (default): Save the cache only when the job succeeds.
- `on_failure`: Save the cache only when the job fails.
- `always`: Save the cache regardless of the job's exit status.
Example of cache:when:
check:
stage: ci
target: linux
cache:
paths: [.pnpm-store]
when: always
script:
- pnpm install
- pnpm test
In this example, the cache is saved whether the tests pass or fail, so subsequent runs benefit from the installed dependencies regardless.
Additional details:
- Invalid values produce: `set cache when to one of: on_success, on_failure, always`.
cache:name
Use cache:name to give a unique name to a cache entry when using the sequence
(multi-cache) form. The name identifies the cache across the workflow and allows
different jobs to reference the same named cache.
Keyword type: Job keyword (cache subkey, sequence form only).
Supported values:
- A non-empty string.
Example of cache:name:
default:
cache:
- name: pnpm
key:
prefix: loom-cache
files: [pnpm-lock.yaml]
paths: [.pnpm-store]
policy: pull-push
when: always
- name: go
key:
prefix: loom-cache
files: [go.work, "**/go.sum"]
paths: [.go]
policy: pull-push
when: always
Additional details:
- `name` is required when `cache` is a sequence. Omitting it produces: `add required key name with a non-empty string value when cache is a sequence`.
- Names must be unique within a cache sequence. Duplicate names produce: `use unique cache names; "xyz" is duplicated`.
- When a job overrides a specific named cache from the default, it can reference just the caches it wants to change:
lint:
stage: ci
target: linux
cache:
- name: pnpm
policy: pull
- name: go
policy: pull
script:
- pnpm run lint
cache:disabled
Use cache:disabled in a sequence (multi-cache) entry to disable that specific named
cache for a job, without removing it from the configuration.
Keyword type: Job keyword (cache subkey, sequence form only).
Supported values:
- `true` or `false`.
Example of cache:disabled:
check:
stage: ci
target: linux
cache:
- name: pnpm
disabled: true
- name: go
paths: [.go]
script:
- go test ./...
In this example, the pnpm cache is disabled for this job, but the go cache is
active.
Additional details:
- When `disabled: true` is set, the `paths` key is not required for that entry.
artifacts
Use artifacts to extract files from the job workspace after execution. Extracted
files are copied to the run's structured log directory at
.loom/.runtime/logs/<run_id>/jobs/<job_id>/artifacts/, preserving their relative
directory structure within the workspace.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A YAML mapping with the following subkeys:
| Subkey | Description |
|---|---|
| artifacts:paths | Required. Glob patterns matching files or directories to extract. |
| artifacts:exclude | Glob patterns for files to exclude from extraction. |
| artifacts:name | Human-readable name for the artifact set. |
| artifacts:when | When to extract: on_success (default), on_failure, or always. |
Example of artifacts:
build:
stage: ci
target: linux
image: node:20-alpine
script:
- npm run build
- npm test -- --coverage
artifacts:
paths:
- dist/
- coverage/
exclude:
- dist/**/*.map
name: build-output
when: on_success
Example with when: on_failure (capture test reports on failure):
test:
stage: ci
target: linux
image: node:20-alpine
script:
- npm test -- --reporter=junit --outputFile=reports/junit.xml
artifacts:
paths:
- reports/
when: on_failure
Additional details:
- `artifacts` is optional. If omitted, no files are extracted after the job completes.
- `paths` is required when `artifacts` is present. Omitting it produces: `add required key paths for artifacts`.
- `when` controls extraction timing based on job outcome:
  - `on_success` (default): extract only when the job succeeds.
  - `on_failure`: extract only when the job fails.
  - `always`: extract regardless of job outcome.
- Invalid `when` values produce: `set artifacts when to one of: on_success, on_failure, always`.
- Unknown keys produce: `remove unknown artifacts key; allowed keys are paths, exclude, name, when`.
- Artifact extraction runs as a system section (`artifact_extract`) and its events appear in `jobs/<job_id>/system/artifact_extract/events.jsonl`.
- When at least one file is extracted, an archive is also produced at `jobs/<job_id>/artifacts/artifacts.tar.gz`. The job manifest and pipeline manifest include metadata fields pointing to this archive (see Runtime logs contract).
- Paths are relative to the job workspace root. For Docker jobs, paths reference locations inside `/workspace`.
Related topics:
- Runtime logs contract for the artifact directory location.
- Docker provider for how artifacts interact with container workspaces.
artifacts:paths
Use artifacts:paths to declare which files or directories Loom should extract from
the job workspace after execution.
Keyword type: Job keyword (artifacts subkey).
Supported values:
- A non-empty YAML sequence of non-empty strings.
Example of artifacts:paths:
artifacts:
paths:
- dist/
- coverage/
Additional details:
- `paths` is required when `artifacts` is present. Omitting it produces: `add required key paths for artifacts`.
- Paths are matched relative to the job workspace root.
artifacts:exclude
Use artifacts:exclude to omit matching files from the extracted artifact set.
Keyword type: Job keyword (artifacts subkey).
Supported values:
- A YAML sequence of non-empty strings.
Example of artifacts:exclude:
artifacts:
paths:
- dist/
exclude:
- dist/**/*.map
Additional details:
- `exclude` is optional.
- Empty entries produce: `replace empty artifact exclude entry with a non-empty string`.
artifacts:name
Use artifacts:name to attach a human-readable label to the artifact set.
Keyword type: Job keyword (artifacts subkey).
Supported values:
- A non-empty string.
Example of artifacts:name:
artifacts:
paths:
- dist/
name: build-output
Additional details:
- `name` is optional.
artifacts:when
Use artifacts:when to control when Loom extracts artifacts based on the job result.
Keyword type: Job keyword (artifacts subkey).
Supported values:
- `on_success` (default)
- `on_failure`
- `always`
Example of artifacts:when:
artifacts:
paths:
- reports/
when: on_failure
Additional details:
- Invalid values produce: `set artifacts when to one of: on_success, on_failure, always`.
Common validator failures (and quick fixes)
When you run loom check and encounter errors, use this section as a quick
reference. Errors follow the format WF_SCHEMA_V1 /path: message.
| Symptom | Fix |
|---|---|
| Missing version | Add version: v1 at the root of your workflow. |
| Missing stages | Add stages: with at least one stage name. |
| Missing required job keys | Non-template jobs need stage, target, and script. Add the missing key, or set default.target so jobs can inherit it. |
| Job references an undeclared stage | Ensure the job's stage: value matches one of the names in stages:. |
| Invalid stage name | Rename to match ^[a-z][a-z0-9_-]{0,31}$ — lowercase, starts with a letter, max 32 characters. |
| Invalid job name | Rename to match ^\.?[a-z][a-z0-9_-]{0,63}$ — lowercase, starts with a letter (or . for templates), max 64 characters. |
| Invalid variable name | Rename to match ^[A-Z_][A-Z0-9_]*$ — uppercase with underscores. |
| script shape errors | script: must be a non-empty list of non-empty strings. Each entry must be a single-line command (no embedded newlines). |
| Invalid include.local path | Path must start with .loom/templates/, end with .yml/.yaml, and must not contain ... |
| Unknown job key | Remove the unrecognized key. Allowed job keys: stage, target, script, extends, needs, image, runner_pool, variables, secrets, invariant, cache, services, artifacts. |
| Unknown default key | Remove the unrecognized key. Allowed default keys: target, image, runner_pool, variables, invariant, cache, services. |
| secrets in default | Secrets are job-scoped and cannot be set in default. Move each secret declaration into the jobs that need it. |
| Variable/secret key collision | A key cannot exist in both variables and secrets within the same job. Rename or remove the duplicate. |
| YAML anchors/aliases detected | Remove &anchor and *alias syntax. Loom enforces deterministic YAML — use include and extends instead. |
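For the anchors/aliases case, a sketch of the replacement pattern: shared keys move into a hidden template job, and each job inherits them via extends (job names and images here are illustrative):

```yaml
.node-base:            # hidden template job (name starts with a dot)
  target: linux
  image: node:20-alpine

lint:
  stage: ci
  extends: .node-base  # inherits target and image instead of *alias
  script:
    - npm run lint
```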
Minimal example
The smallest workflow that passes loom check:
version: v1
stages: [ci]
check:
stage: ci
target: linux
script:
- echo "hello"
Full-featured example
A real-world workflow using most v1 features:
version: v1
stages:
- deps
- ci
default:
target: linux
cache:
- name: pnpm
key:
prefix: loom-cache
files:
- pnpm-lock.yaml
paths:
- .pnpm-store
policy: pull-push
when: always
- name: go
key:
prefix: loom-cache
files:
- go.work
- go.work.sum
- apps/**/go.sum
- libs/**/go.sum
paths:
- .go
policy: pull-push
when: always
variables:
PNPM_STORE_DIR: .pnpm-store
GOPATH: .go
build-nix-image:
stage: deps
target: linux
cache: []
script:
- docker build -t loom:nix-local -f nix/Dockerfile .
install-deps:
stage: ci
image: loom:nix-local
variables:
PNPM_STORE_DIR: .pnpm-store
script:
- pnpm i --frozen-lockfile
- pnpm nx run-many -t go-deps-verify
check:
stage: ci
image: loom:nix-local
needs:
- install-deps
cache:
- name: pnpm
policy: pull
- name: go
policy: pull
script:
- pnpm i --frozen-lockfile
- task check
Next steps
Pick the path that matches what you're trying to do:
- Write your first workflow: Hello Loom
- Set or override variables: Variables and Concepts → Variables
- Use secrets: Secrets and Concepts → Secrets
- Add caching: Cache and Concepts → Cache
- Use templates and includes: Includes and templates
- Validate your workflow:
loom checkand Workflow schema v1 - Diagnose a failing run: Diagnostics ladder and Runtime logs contract