# Cache provider
The Cache provider wraps the Host or Docker provider with automatic cache restore and save steps. It speeds up repeated runs by persisting directories (like dependency stores or build caches) between executions.
## When Loom uses cache
Cache wrapping activates when the resolved job definition includes `cache:` — either directly on the job or inherited via `default` or `extends` templates.
Provider routing still follows the base rule:
| Condition | Result |
|---|---|
| `image:` present + `cache:` present | Docker provider wrapped with cache |
| `image:` absent + `cache:` present | Host provider wrapped with cache |
| `cache:` absent | No cache wrapping — base provider runs alone |
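The routing rule above can be sketched as a small predicate. This is an illustrative reconstruction, not Loom's actual API; the function and return values are assumptions.

```python
def select_provider(job: dict) -> str:
    """Sketch of provider routing: image selects Docker vs Host,
    and a cache section wraps whichever base provider was chosen."""
    base = "docker" if "image" in job else "host"
    return f"cache({base})" if "cache" in job else base
```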
## How it works
For each cache entry on a job, the Cache provider:
- **Resolves the cache key** — computes a key from the `prefix` and file hashes (or a `template` expression).
- **Restores** (if policy allows) — looks up the key in the local cache store and extracts the archived paths into the workspace.
- **Runs the job** — delegates to the inner provider (Host or Docker).
- **Saves** (if policy and `when` conditions are met) — archives the declared paths from the workspace into the cache store.
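The steps above can be sketched as a single wrapper function. This is a minimal illustration with an in-memory dict standing in for the local cache store; the function name and signature are assumptions, not Loom internals.

```python
def run_with_cache(store, key, paths, run_job, policy="pull-push", when="on_success"):
    """Sketch of the restore -> run -> save cycle for one cache entry."""
    restored = False
    if policy in ("pull-push", "pull") and key in store:
        restored = True               # extract archived paths into the workspace
    succeeded = run_job()             # delegate to the inner provider (Host/Docker)
    save_allowed = policy in ("pull-push", "push")
    outcome_ok = (when == "always"
                  or (when == "on_success" and succeeded)
                  or (when == "on_failure" and not succeeded))
    if save_allowed and outcome_ok:
        store[key] = list(paths)      # archive declared paths into the store
    return restored, succeeded
```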
Cache archives use zstd-compressed tar format for fast compression and small footprint.
## Configuration example
```yaml
deps:
  stage: ci
  target: linux
  image: alpine:3.20
  cache:
    key:
      prefix: loom-cache
      files: [pnpm-lock.yaml]
    paths:
      - .pnpm-store
      - .nx/cache
    policy: pull-push
    when: always
  script:
    - pnpm i --frozen-lockfile
```
For the full cache schema, see Syntax (v1) → cache.
## Cache key resolution
Keys determine cache identity. Two key modes are available:
| Mode | Config | Resulting key |
|---|---|---|
| Prefix + file hash | key.prefix + key.files | <prefix>-<sha256 of file contents> |
| Template | key.template | Expanded template string (supports $job_name, $job_id, $run_id, $pipeline_id, $head_sha) |
| Prefix only | key.prefix (no files) | The prefix string itself |
When `key.files` includes glob patterns (e.g. `**/*.lock`), all matching files are hashed in sorted order. Missing files contribute a `MISSING:<pattern>` marker to the hash rather than causing an error.
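The prefix-plus-file-hash mode can be reconstructed roughly as follows. This is a sketch under assumptions — the exact hashing order and marker encoding Loom uses may differ.

```python
import glob
import hashlib

def resolve_key(prefix: str, patterns: list[str]) -> str:
    """Sketch of <prefix>-<sha256> key resolution: hash matching files
    in sorted order; unmatched patterns contribute a MISSING marker."""
    h = hashlib.sha256()
    for pattern in patterns:
        matches = sorted(glob.glob(pattern, recursive=True))
        if not matches:
            h.update(f"MISSING:{pattern}".encode())
            continue
        for path in matches:
            with open(path, "rb") as f:
                h.update(f.read())
    return f"{prefix}-{h.hexdigest()}"
```

Because missing patterns feed a deterministic marker into the hash, the key stays stable run-to-run even when a pattern matches nothing.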
### Fallback keys
If the primary key misses, the cache provider tries each `fallback_keys` entry in order. The first hit wins. This is useful for partial cache reuse across branches or configurations.
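First-hit-wins lookup over the primary key and fallbacks is simple to state as code. A sketch, with a plain dict standing in for the cache store:

```python
def lookup(store: dict, primary: str, fallback_keys: list[str]):
    """Try the primary key, then each fallback in order; first hit wins."""
    for key in [primary, *fallback_keys]:
        if key in store:
            return key, store[key]
    return None, None
```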
## Policy and `when`
Two fields control whether restore and save happen:
| Field | Values | Default | Effect |
|---|---|---|---|
| `policy` | `pull-push`, `pull`, `push` | `pull-push` | `pull` = restore only, `push` = save only, `pull-push` = both |
| `when` | `on_success`, `on_failure`, `always` | `on_success` | Gates save based on job outcome |
Decision matrix for save:
| Policy | When | Job succeeded | Job failed |
|---|---|---|---|
| `pull-push` | `on_success` | Save | Skip |
| `pull-push` | `on_failure` | Skip | Save |
| `pull-push` | `always` | Save | Save |
| `push` | `on_success` | Save | Skip |
| `pull` | (any) | Skip | Skip |
Restore always happens before the job runs (when policy includes `pull`), regardless of `when`.
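The save decision matrix above collapses into one predicate. A sketch (function name is an assumption):

```python
def should_save(policy: str, when: str, succeeded: bool) -> bool:
    """True when the cache should be saved after the job, per the matrix:
    policy must include push, and `when` must match the job outcome."""
    if policy not in ("pull-push", "push"):
        return False                  # policy: pull never saves
    return (when == "always"
            or (when == "on_success" and succeeded)
            or (when == "on_failure" and not succeeded))
```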
## Quarantine
When `--cache-diff` detects mechanical divergence between cached and fresh output, the offending cache key is quarantined. A quarantined key is skipped on both restore and save until the quarantine is cleared.
Quarantine entries are stored per scope and key hash with a reason (default: `cache_diff_divergence`) and timestamp.
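A quarantine store of this shape might look like the following sketch. The class and field names are assumptions for illustration; only the keying (per scope and key hash), the default reason, and the timestamp come from the text above.

```python
import time

class Quarantine:
    """Sketch: entries keyed by (scope, key_hash) with reason + timestamp."""

    def __init__(self):
        self.entries = {}

    def add(self, scope, key_hash, reason="cache_diff_divergence"):
        self.entries[(scope, key_hash)] = {"reason": reason, "ts": time.time()}

    def is_quarantined(self, scope, key_hash):
        return (scope, key_hash) in self.entries

    def clear(self, scope, key_hash):
        self.entries.pop((scope, key_hash), None)
```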
## Paths and permissions
**Always use workspace-relative paths.** `cache.paths` entries like `.pnpm-store` and `.nx/cache` are relative to the workspace root — never absolute paths.
### Host vs Docker path meaning
| Provider | Where paths resolve |
|---|---|
| Host | Real directories in your local workspace checkout |
| Docker | Directories inside the container's /workspace mount |
If the Docker container user cannot read/write the cached directories, restore or save may silently fail.
### Ensure tools use the cached path
The cached directory must actually be used by the tool. For example, when caching pnpm's store:
```yaml
variables:
  PNPM_STORE_DIR: .pnpm-store
cache:
  paths: [.pnpm-store]
```
Without `PNPM_STORE_DIR`, pnpm writes to its default location and the cached directory stays empty.
## Inheriting cache via defaults and templates
Cache configuration can be inherited. Use loom compile to inspect what a job effectively receives.
### Via `default.cache`
```yaml
version: v1
stages: [ci]

default:
  target: linux
  cache:
    paths: [.pnpm-store, .nx/cache]
    policy: pull-push
    when: always

check:
  stage: ci
  script:
    - pnpm nx run loom-platform:check
```
### Via templates (`extends`)
```yaml
.base:
  cache:
    paths: [.nx/cache]
    policy: pull-push
    when: always

test:
  extends: .base
  stage: ci
  target: linux
  script:
    - pnpm nx test
```
To verify inherited cache, run:
```sh
loom compile --workflow .loom/workflow.yml
```
## Cache in runtime logs
Cache activity appears as structured system sections in the job's runtime logs:
| Section | Path pattern |
|---|---|
| Restore | `jobs/<job_id>/system/cache_restore/events.jsonl` |
| Save | `jobs/<job_id>/system/cache_save/events.jsonl` |
The job manifest (`jobs/<job_id>/manifest.json`) includes pointers to these sections, so you can jump directly to cache events without scanning broad logs.
Log messages include structured fields: `cache_name`, `key`, and `status` (hit, miss, skipped quarantined, error).
For the full log structure, see the Runtime logs contract.
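Given those structured fields, filtering cache events out of an `events.jsonl` file is a few lines. A sketch — the exact event schema beyond the three listed fields is an assumption:

```python
import json

def cache_events(jsonl_text: str):
    """Collect (cache_name, key, status) tuples from JSONL log lines,
    skipping blank lines and events without cache fields."""
    out = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        ev = json.loads(line)
        if "cache_name" in ev:
            out.append((ev["cache_name"], ev.get("key"), ev.get("status")))
    return out
```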
## Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| No `cache_restore` / `cache_save` in logs | `cache:` not in resolved job | Run `loom compile` and verify the job has an effective `cache:` section |
| Cache restore always misses | Key changes every run | Check that `key.files` points to stable inputs (e.g. lockfiles, not generated files) |
| Cache skipped repeatedly | Key quarantined by `--cache-diff` | Review quarantine entries; clear quarantine after fixing divergence |
| Cache path not restored | Path mismatch or permissions | Verify paths are workspace-relative and writable by the runtime user |
| Cache save skipped on success | `policy: pull` | Change to `pull-push` or `push` to enable saving |
| Cache appears empty | Tool writes elsewhere | Set the tool's store directory to match `cache.paths` (e.g. `PNPM_STORE_DIR`) |
## Limitations
- Cache is local-execution focused in the current MVP. No remote/shared cache store yet.
- Cache effectiveness depends on key quality — unstable inputs cause frequent misses.
- `--cache-diff` can intentionally fail runs when divergence is detected (this is by design for CI safety).
- Disabled cache entries (`disabled: true`) are silently skipped.