CLI Reference

Complete reference for every tutti-ai command

The tutti-ai CLI scaffolds, runs, validates, schedules, and inspects Tutti projects. Every command is listed here with its flags, defaults, and a sample output.

Installation

npm install -g tutti-ai
# or invoke on demand with npx:
npx tutti-ai <command>

Global options

tutti-ai --version         # print version
tutti-ai --help            # list commands
tutti-ai <cmd> --help      # show command-specific help

All commands auto-load .env / .env.local via dotenv and exit 1 on unhandled errors. The CLI installs handlers for unhandledRejection and uncaughtException so the process always exits with a clean error message.
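The handlers described above follow the standard Node pattern; a minimal sketch of what they look like (illustrative only, not the actual CLI source):

```typescript
// Illustrative sketch of global error handlers (not the actual CLI source).
// Both paths print one clean line to stderr and exit 1.
function installGlobalHandlers(): void {
  process.on("unhandledRejection", (reason) => {
    console.error(`Error: ${reason instanceof Error ? reason.message : String(reason)}`);
    process.exit(1);
  });
  process.on("uncaughtException", (err) => {
    console.error(`Error: ${err.message}`);
    process.exit(1);
  });
}
```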

Command groups


Project lifecycle

tutti-ai init [project-name]

Create a new Tutti project with scaffolded files.

tutti-ai init my-project
tutti-ai init my-project --template coding-agent

Options:

  Flag                   Description
  ─────────────────────────────────────────────────────────────────────────
  -t, --template <id>    Template to use: minimal, coding-agent, research-agent, qa-pipeline, dev-team. See tutti-ai templates.

What it creates:

my-project/
├── tutti.score.ts    # Agent configuration
├── .env.example      # API key placeholders
├── .gitignore        # Ignores .env, node_modules, dist
├── package.json      # Dependencies and scripts
├── tsconfig.json     # TypeScript config
└── README.md         # Project description

If you omit the project name, the CLI prompts for it interactively.

tutti-ai templates

List every project template available to init.

tutti-ai templates

Sample output:

  Available Templates

  minimal           One agent, no voices — the simplest starting point
  coding-agent      TypeScript developer with filesystem + GitHub access
  research-agent    Researcher that saves structured notes to files
  qa-pipeline       Orchestrator + QA specialist with browser testing and HITL
  dev-team          Full team: orchestrator + coder + PM + QA with all voices

tutti-ai check [score]

Validate a score file without running it. Checks configuration, API keys, and installed voices.

tutti-ai check                    # uses ./tutti.score.ts
tutti-ai check ./path/to/score.ts # custom path

Sample output (passing):

Checking tutti.score.ts...

  ✔ Score file is valid
  ✔ Provider: AnthropicProvider (ANTHROPIC_API_KEY is set)
  ✔ 2 agents configured
  ✔ Voice: filesystem on coder (installed)
  ✔ Voice: github on coder (GITHUB_TOKEN is set)

All checks passed. Run tutti-ai run to start.

Sample output (failing):

Checking tutti.score.ts...

  ✘ Score validation failed
  Invalid score file:
  - provider: Required
  - agents.coder.max_turns: max_turns must be a positive number

What it checks:

  1. Score file exists and can be loaded
  2. Zod schema validation (required fields, types, cross-references)
  3. Provider API key is set in the environment
  4. Voices that need env vars (e.g. GITHUB_TOKEN) have them set

Exits non-zero on any failure — safe to use as a CI gate.
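For intuition, check 2 is schema-level. A hand-rolled sketch of the kind of rules involved (hypothetical — the real CLI uses a Zod schema, and `validateScore` is an invented name):

```typescript
// Hypothetical stand-in for the CLI's Zod validation: returns the same
// style of error strings shown in the failing sample output above.
interface ScoreLike {
  provider?: unknown;
  agents?: Record<string, { max_turns?: number }>;
}

function validateScore(score: ScoreLike): string[] {
  const errors: string[] = [];
  if (score.provider === undefined) errors.push("provider: Required");
  for (const [name, agent] of Object.entries(score.agents ?? {})) {
    if (agent.max_turns !== undefined && !(agent.max_turns > 0)) {
      errors.push(`agents.${name}.max_turns: max_turns must be a positive number`);
    }
  }
  return errors;
}
```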

tutti-ai doctor [score]

Alias for check.

tutti-ai doctor
tutti-ai doctor ./path/to/score.ts

tutti-ai info [score]

Show project information — agents, voices, models, installed package versions, schedule configs, and feature flags. Useful for confirming your local setup matches what the score expects.

tutti-ai info
tutti-ai info ./path/to/score.ts

Sample output:

  Tutti Project Info

  Project:  my-agent 0.1.0

  Packages:
    @tuttiai/core               0.18.3
    @tuttiai/filesystem         0.4.2
    @tuttiai/types              0.7.0
  Score:    /path/to/tutti.score.ts

  Agents: (1)

    assistant (Assistant)
      Model:  claude-sonnet-4-20250514
      Voices: filesystem
      Flags:  streaming

Package versions are read from node_modules/<name>/package.json, so what you see is what your project will actually run — not the * / ^0.18.0 / workspace:* spec from package.json. When a package isn’t installed, the spec string is shown as a fallback.

Agent flags:

  Flag         Meaning
  ──────────────────────────────────────────────────────────
  streaming    Agent streams tokens during run
  hitl         Agent can request human input via allow_human_input
  durable      Checkpoints to Redis/Postgres between turns
  scheduled    Has a schedule block (cron / interval / one-shot)
  structured   Has an outputSchema for typed output
  guardrails   Has beforeRun or afterRun hooks
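Each flag derives from a field on the agent config. A fragment showing where two of them come from (durable and schedule are named elsewhere in these docs; the overall shape here is a sketch, not a full config):

```typescript
// Illustrative agent fragment — `tutti-ai info` derives flags from fields
// like these. Shape is a sketch; see the score examples for full configs.
const reporter = {
  name: "Reporter",
  durable: true,          // shows the "durable" flag
  schedule: {             // shows the "scheduled" flag
    cron: "0 9 * * *",
    input: "Generate the daily status report",
  },
};
```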

Running agents

tutti-ai run [score]

Run a Tutti score interactively in a REPL, or run a single turn non-interactively with -p.

tutti-ai run                         # REPL, uses ./tutti.score.ts
tutti-ai run ./path/to/score.ts      # REPL, custom path
tutti-ai run --watch                 # REPL with hot-reload on file changes
tutti-ai run -p "What is 2 + 2?"     # one-shot, prints result, exits

Options:

  Flag                  Description
  ────────────────────────────────────────────────────────────────────────
  -w, --watch           Reload the score and rebuild the runtime whenever the score file (or any file in its directory tree) changes on disk. Session history survives across reloads.
  -p, --prompt <text>   Run a single turn against the default assistant agent, print the result to stdout, and exit. Streaming is forced off so stdout contains only the final output — safe to pipe. Non-zero exit on error.

Interactive REPL

Tutti REPL — type "exit" to quit

> Hello!
Running agent: assistant

Hello! How can I help you today?

> exit
Goodbye!

Behaviour:

  • Loads .env automatically
  • Validates the API key for the detected provider
  • Maintains a session across REPL turns
  • Shows tool usage, errors, security warnings, and budget events
  • Streaming output — tokens print to the terminal as they arrive (spinner until first token)
  • Tool calls shown inline: [using: tool_name] / [done: tool_name]
  • exit, quit, or Ctrl+C leave gracefully — stdin raw mode and the cursor are restored so the shell prompt redraws cleanly

Watch mode

--watch reloads the score (and any file in the score’s directory tree, excluding node_modules, dist, and dotfiles) whenever it changes. Changes are debounced 200ms so editor saves that touch the file multiple times collapse into a single reload.

  • Changes take effect at turn boundaries, never mid-tool-call.
  • Session history is preserved across reloads — the REPL’s session_id carries over.
  • Syntax errors don’t crash the REPL — if the reload fails to parse or validate, the error is printed and the REPL keeps using the previous config.
  • Trade-off: runtime-internal caches (tool cache, semantic memory) reset on reload.
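The 200ms debounce is the classic trailing-edge pattern; a generic sketch of the idea (illustrative, not the CLI's code):

```typescript
// Trailing-edge debounce: rapid calls collapse into one invocation after
// `ms` of quiet — the same idea watch mode uses for editor saves.
function debounce(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
}
```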

One-shot mode (-p / --prompt)

Designed for scripting, CI smoke tests, and pipelines.

$ tutti-ai run -p "What is 2 + 2?"
4

$ tutti-ai run -p "summarize this README" > summary.txt

$ tutti-ai run --prompt "explain TypeScript generics in one sentence"
TypeScript generics let you write reusable code parameterized by types.

Always targets the agent keyed assistant in the score. Streaming and pino logs are silenced so stdout is clean. The REPL (readline, spinner, file watcher) is bypassed entirely — stdin is never read, so it’s safe inside non-TTY contexts (cron, CI, shell subprocesses).

tutti-ai serve [score]

Start the Tutti HTTP server — exposes your score as a REST API with SSE streaming.

tutti-ai serve                       # defaults to ./tutti.score.ts
tutti-ai serve ./custom-score.ts     # custom score path
tutti-ai serve --port 8080           # custom port
tutti-ai serve --watch               # reload on file changes
tutti-ai serve -a researcher         # expose a specific agent
tutti-ai serve --realtime            # also mount the OpenAI Realtime WebSocket

Options:

  Flag                   Default                      Description
  ──────────────────────────────────────────────────────────────────────────────
  -p, --port <number>    3847                         Port to listen on.
  -H, --host <address>   0.0.0.0                      Interface to bind to.
  -k, --api-key <key>    TUTTI_API_KEY env            Bearer token clients must send.
  -a, --agent <name>     score.entry or first agent   Which agent the default endpoint routes to.
  -w, --watch            off                          Reload the score and restart the server on file changes.
  --realtime             off                          Mount the realtime WebSocket + demo page (requires OPENAI_API_KEY and a realtime block on the agent).

Endpoints:

POST  /run              # non-streaming agent call
POST  /run/stream       # SSE streaming (tokens + events)
GET   /sessions/:id     # session history
GET   /health           # health check
GET   /cost/runs        # last 100 runs with cost (requires RunCostStore)
GET   /cost/budgets     # per-agent budget config + current spend
GET   /cost/tools       # per-tool call counts from the live tracer window

GET   /realtime         # WebSocket — proxies the OpenAI Realtime API (--realtime only)
GET   /realtime-demo    # public mic-capture demo page (--realtime only)

The realtime WebSocket authenticates inline against ?api_key=... because browsers cannot set Authorization on new WebSocket(url). Connections are rejected with 4404 / realtime_disabled_for_agent when the agent’s realtime config is undefined or false, and 4500 / missing_openai_api_key when OPENAI_API_KEY is unset. See the realtime guide for the full frame protocol and the RealtimeVoice setup.
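Building the browser-side URL correctly is the only fiddly part; a small helper sketch (`realtimeUrl` is an invented name for illustration):

```typescript
// Build the ws:// URL with the api_key query parameter, since browsers
// can't set an Authorization header on `new WebSocket(url)`.
function realtimeUrl(serverUrl: string, apiKey: string): string {
  const u = new URL("/realtime", serverUrl);
  u.protocol = u.protocol === "https:" ? "wss:" : "ws:";
  u.searchParams.set("api_key", apiKey);
  return u.toString();
}

// const ws = new WebSocket(realtimeUrl("http://localhost:3847", apiKey));
```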

Environment variables:

  Variable                Required             Description
  ─────────────────────────────────────────────────────────────────────────
  TUTTI_API_KEY           Yes                  Bearer token for authenticating requests.
  ANTHROPIC_API_KEY       Provider-dependent   Anthropic API key.
  OPENAI_API_KEY          Provider-dependent   OpenAI API key.
  GEMINI_API_KEY          Provider-dependent   Gemini API key.
  TUTTI_ALLOWED_ORIGINS   No                   Comma-separated CORS origins (default: *).
  DATABASE_URL            No                   PostgreSQL URL for session persistence.
  TUTTI_REDIS_URL         No                   Redis URL for durable checkpoints.

Example curl:

# Non-streaming
curl -X POST http://localhost:3847/run \
  -H "Authorization: Bearer $TUTTI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "Summarize the latest AI news"}'

# Streaming SSE
curl -N -X POST http://localhost:3847/run/stream \
  -H "Authorization: Bearer $TUTTI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "Write a short poem about TypeScript"}'

SIGINT/SIGTERM drain in-flight requests before exit. See the server docs for the full API schema.
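From TypeScript, the streaming endpoint can be consumed with fetch plus a small SSE line parser; a sketch assuming standard `data:` framing (the exact event payloads are in the server docs):

```typescript
// Extract `data:` payloads from a chunk of SSE text (standard framing:
// events separated by blank lines, each data line prefixed "data: ").
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}

// Usage sketch against /run/stream (payload shape assumed; see server docs):
// const res = await fetch("http://localhost:3847/run/stream", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${key}`, "Content-Type": "application/json" },
//   body: JSON.stringify({ input: "Write a short poem about TypeScript" }),
// });
// for await (const chunk of res.body!) { /* decode, then parseSseData(...) */ }
```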

tutti-ai studio [score]

Launch Tutti Studio — a local web UI for inspecting agent runs at localhost:4747.

tutti-ai studio                      # defaults to ./tutti.score.ts
tutti-ai studio ./path/to/score.ts   # custom score path
PORT=8080 tutti-ai studio            # custom port via env var

Opens the browser automatically. Four panels:

  1. Agent graph — SVG visualisation of the score’s agents and their delegation edges
  2. Live event stream — SSE feed of every llm:request, llm:response, tool:start, tool:end, budget:*, hitl:* event
  3. Session browser — sortable table of sessions with drill-down to message history
  4. Token usage — running total of input/output tokens and cost estimate

Also exposes a REST API for programmatic access: /api/score, /api/sessions, /api/run.

tutti-ai resume <session-id>

Resume a crashed or interrupted run from its last durable checkpoint. Requires durable: true on the original agent and a Redis or Postgres backend.

# Redis-backed checkpoint
export TUTTI_REDIS_URL=redis://127.0.0.1:6379/0
tutti-ai resume 811b3b38-9a1d-4b98-ab7d-57e4acaecdea --store redis

# Postgres-backed checkpoint
export TUTTI_PG_URL=postgres://localhost/tutti
tutti-ai resume 811b3b38-9a1d-4b98-ab7d-57e4acaecdea --store postgres

Options:

  Flag                 Default                      Description
  ──────────────────────────────────────────────────────────────────────────────
  --store <backend>    redis                        Which durable store the checkpoint was written to (redis or postgres).
  -s, --score <path>   ./tutti.score.ts             Score file to load — must match the one the original run used.
  -a, --agent <name>   score.entry or first agent   Agent to resume.
  -y, --yes            false                        Skip the confirmation prompt.

Prints a checkpoint summary and asks Resume from turn N? (y/n) before handing off. On confirm, loads the score, reattaches the checkpoint store, seeds the session, and calls the runtime — picking up exactly where the previous run left off.

tutti-ai replay <session-id>

Time-travel debugger — navigate and replay a session from PostgreSQL. Lets you step forwards and backwards through every turn, inspect state at each point, and branch a new run from any historical turn.

export TUTTI_PG_URL=postgres://localhost/tutti
tutti-ai replay 811b3b38-9a1d-4b98-ab7d-57e4acaecdea
tutti-ai replay 811b3b38 --score ./custom.score.ts

Options:

  Flag                 Default            Description
  ─────────────────────────────────────────────────────────────────
  -s, --score <path>   ./tutti.score.ts   Score file to load for “replay-from” branching.

Interactive controls: n (next turn), p (prev turn), b (branch from here), q (quit).


Deployment

tutti-ai deploy

Bundle your score and ship it to a hosting platform. Bundles include a Dockerfile, docker-compose.yml, and platform-specific config (fly.toml, railway.json) generated from a DeployConfig block on the agent.

tutti-ai deploy --target docker                # generate Docker bundle in ./tutti-deploy/
tutti-ai deploy --target railway               # bundle + railway up
tutti-ai deploy --target fly                   # bundle + fly deploy
tutti-ai deploy --target railway --dry-run     # print the plan without executing

Options:

  Flag                  Default            Description
  ─────────────────────────────────────────────────────────────────────────
  --target <platform>   (required)         One of docker, railway, fly.
  --dry-run             false              Print the plan + generated files without invoking the platform CLI.
  -s, --score <path>    ./tutti.score.ts   Score file to load.
  -o, --out <dir>       ./tutti-deploy     Output directory for the generated bundle.

Pre-flight checks run before any I/O:

  • scanForSecrets() walks the score and every imported package’s entry file for process.env.X reads, filters Node built-ins, and emits errors for undeclared required vars.
  • validateSecrets() warns when secret-shaped names (*_KEY, *_TOKEN, *_SECRET) sit in plaintext in deploy.env. Plaintext API key values block the deploy immediately.
  • A .env.deploy.example is written next to your score on every non-dry-run invocation so missing variables are obvious before the platform rejects them.
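The env-var scan boils down to a regex over source text; a simplified sketch (`findEnvReads` is an invented name, and the real scanForSecrets also walks imported packages' entry files and filters Node built-ins):

```typescript
// Collect the distinct X in process.env.X reads from a source string.
// Simplified stand-in for the deploy pre-flight's secret scan.
function findEnvReads(source: string): string[] {
  const seen = new Set<string>();
  for (const match of source.matchAll(/process\.env\.([A-Z][A-Z0-9_]*)/g)) {
    seen.add(match[1]);
  }
  return [...seen].sort();
}
```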

Score-side configuration: add a deploy block to one agent in your score. Exactly one agent should declare it (the deploy entrypoint).

import { defineScore } from "@tuttiai/core";

export default defineScore({
  agents: {
    api: {
      name: "api",
      // ...
      deploy: {
        target: "fly",                              // optional override; --target wins
        region: "auto",
        scale: { min: 0, max: 3, memory: "512MB" },
        health: { path: "/health", interval_seconds: 30 },
        env: { LOG_LEVEL: "info" },
        secrets: ["ANTHROPIC_API_KEY", "DATABASE_URL"],
      },
    },
  },
});

tutti-ai deploy status

Platform-equivalent status check. Dispatches to railway status / fly status / docker compose ps based on the target.

tutti-ai deploy status

tutti-ai deploy logs

Stream platform logs.

tutti-ai deploy logs            # last 100 lines
tutti-ai deploy logs --tail     # follow

tutti-ai deploy rollback

Roll back to the previous deploy. Dispatches to the platform’s equivalent (railway rollback, fly releases rollback <prev>, docker compose pull && up -d for the prior tag).

tutti-ai deploy rollback

Voices

tutti-ai add <voice>

Install a voice package and print setup instructions.

tutti-ai add filesystem
tutti-ai add github
tutti-ai add playwright
tutti-ai add postgres
tutti-ai add @someone/custom-voice

Shorthands:

  Shorthand    Package               Provides
  ──────────────────────────────────────────────────────────────────
  filesystem   @tuttiai/filesystem   7 file tools
  github       @tuttiai/github       10 GitHub tools
  playwright   @tuttiai/playwright   12 browser tools
  web          @tuttiai/web          3 web tools (search, fetch, sitemap)
  sandbox      @tuttiai/sandbox      4 code-execution tools
  mcp          @tuttiai/mcp          MCP bridge
  rag          @tuttiai/rag          RAG (ingest, chunk, embed, search)
  postgres     pg                    PostgreSQL session storage

For unknown names, it tries @tuttiai/<name> first, then the literal package name.

tutti-ai voices

List all available official voices with install status.

tutti-ai voices

Shows each voice with [official] badge, description, tags, and whether it’s installed in the current project.

tutti-ai search <query>

Search the voice registry for voices matching a query.

tutti-ai search browser
tutti-ai search database

Searches name, description, and tags (case-insensitive). Fetches the live registry from github.com/tuttiai/voices, falls back to a built-in list when offline.

tutti-ai publish

Publish the current voice to npm and the voice registry. Run from inside a voice directory.

tutti-ai publish            # full publish flow
tutti-ai publish --dry-run  # validate without publishing

Options:

  Flag        Description
  ─────────────────────────────────────────────────────
  --dry-run   Run all checks without actually publishing.

Five-step flow: pre-flight checks (structure, naming, build, tests, audit), pack dry-run, npm publish with confirmation, registry PR via GitHub API, success summary.


Packages

tutti-ai update

Update all installed @tuttiai/* packages to their latest versions. Auto-detects global CLI installs, npm/yarn/pnpm, and updates accordingly.

tutti-ai update

tutti-ai outdated

Show a table of installed @tuttiai/* packages with current vs latest versions. Does not install anything — just reports.

tutti-ai outdated

Sample output:

  PACKAGE                     CURRENT     LATEST      STATUS
  ────────────────────────────────────────────────────────────────
  @tuttiai/core               0.18.0      0.18.3      update available
  @tuttiai/filesystem         0.4.2       0.4.2       up to date

tutti-ai upgrade [voice]

Upgrade a specific voice — or all installed @tuttiai/* packages — to their latest versions.

tutti-ai upgrade                  # upgrade all @tuttiai packages
tutti-ai upgrade filesystem       # upgrade just @tuttiai/filesystem
tutti-ai upgrade @tuttiai/rag     # full package name also works

Scheduling

tutti-ai schedule [score]

Start the scheduler daemon — reads the score, registers every agent with a schedule block, and runs on their configured triggers (cron, interval, or one-shot datetime) until killed.

tutti-ai schedule                      # defaults to ./tutti.score.ts
tutti-ai schedule ./path/to/score.ts   # custom score path

Score example:

import { defineScore } from "@tuttiai/core";
import { AnthropicProvider } from "@tuttiai/core"; // provider import path assumed

const score = defineScore({
  provider: new AnthropicProvider(),
  agents: {
    reporter: {
      name: "Reporter",
      system_prompt: "Generate a daily status report.",
      voices: [],
      schedule: {
        cron: "0 9 * * *",           // 9 AM daily
        input: "Generate the daily status report",
        max_runs: 30,                 // auto-disable after 30 runs
      },
    },
  },
});

Environment:

  Variable       Required      Description
  ───────────────────────────────────────────────────────────────────────────
  TUTTI_PG_URL   Recommended   PostgreSQL URL for durable schedule persistence. Falls back to in-memory (lost on restart).

Emits schedule:triggered, schedule:completed, and schedule:error events to stdout with timestamps.

tutti-ai schedules

Manage registered schedules — list, enable/disable, trigger manually, and view run history.

tutti-ai schedules list                        # show all registered schedules
tutti-ai schedules enable nightly-report       # re-enable a disabled schedule
tutti-ai schedules disable nightly-report      # disable without deleting
tutti-ai schedules trigger nightly-report      # run once immediately (testing)
tutti-ai schedules trigger nightly-report -s ./custom.score.ts
tutti-ai schedules runs nightly-report         # last 20 runs

list sample output:

  ID                  AGENT           TRIGGER               ENABLED   RUNS    CREATED
  ──────────────────────────────────────────────────────────────────────────────────────
  nightly-report      reporter        cron: 0 9 * * *       yes       12      2026-04-14
  health-check        monitor         every 30m             yes       48/100  2026-04-14

Requires TUTTI_PG_URL for the schedule store.


Observability

tutti-ai traces

Inspect OpenTelemetry spans emitted by a running tutti-ai serve process.

tutti-ai traces list                   # last 20 traces (most recent first)
tutti-ai traces show <trace-id>        # full span tree for one trace
tutti-ai traces tail                   # live-tail spans (Ctrl+C to exit)

Common options (all subcommands):

  Flag                  Default                 Description
  ───────────────────────────────────────────────────────────
  -u, --url <url>       http://127.0.0.1:3847   Server URL.
  -k, --api-key <key>   TUTTI_API_KEY env       Bearer token.

traces show renders every span as an indented tree with duration, status, and attributes — ideal for spotting slow tool calls or flaky providers. traces tail streams spans live via SSE.


Cost analysis

The analyze, report, and budgets commands all talk to a running tutti-ai serve process and read from the runtime’s RunCostStore. Configure one when constructing the runtime:

import { TuttiRuntime, PostgresRunCostStore } from "@tuttiai/core";

const runtime = new TuttiRuntime(score, {
  runCostStore: new PostgresRunCostStore({
    connection_string: process.env.DATABASE_URL!,
  }),
});

Without a store the commands print a friendly “configure a RunCostStore” message instead of failing.

tutti-ai analyze costs

Top runs by cost, daily-spend sparkline, and burn-rate optimisation hints.

tutti-ai analyze costs                         # last 7 days, all agents
tutti-ai analyze costs --last 12h              # last 12 hours
tutti-ai analyze costs --last 30d --agent triage

Output:

Cost analysis since 2026-04-28
Daily spend: ▁▂▅█▃▁▂ (7 days)
Total: $4.2310 · 142.0k tokens · 87 runs

Top runs by cost
  RUN       AGENT             STARTED           TOKENS    COST
  ────────────────────────────────────────────────────────────
  abc12345  evaluator         2026-05-04 14:22  12.3k     $0.4521
  ...

Top tools (live window)
  Live window: 247 spans collected since 2026-05-05T14:00
  TOOL                    CALLS     AVG TOK/CALL    TOTAL TOKENS
  ──────────────────────────────────────────────────────────────────────
  read_file               47        1.0k            47.0k
  search_repo             8         800             6.4k

Hints
• Agent "triage" is burning $0.1234/day on average — at this rate the monthly $5.00 cap will be hit in ~28.4 days.
• Tool "read_file" was called 47 times in the live tracer window (since 2026-05-05T14:00). Repeated identical calls — consider enabling `cache: { enabled: true }` on the agent.
• 78% of recent tool-driven turns ran on small inputs (<800 avg tokens/call). Consider `model: 'auto'` plus a SmartProvider so cheap turns route to a smaller tier.

The “Top tools” section and the last two hints come from the in-memory tracer (/cost/tools), which is bounded to ~1000 spans and lost on server restart — the live-window line names the boundary so the numbers can’t be misread as authoritative all-time totals. The “burning $X/day” hint comes from the persistent RunCostStore and is accurate over the full --last window.

tutti-ai report costs

Exportable cost report.

tutti-ai report costs                          # text summary
tutti-ai report costs --format json --last 30d > costs.json
tutti-ai report costs --format csv  --last 7d  > costs.csv

CSV columns: run_id, agent_name, started_at, total_tokens, cost_usd. JSON output includes a totals block with run count, token total, and aggregate USD.
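The CSV shape makes shell aggregation straightforward; for example, summing total spend with awk (a usage sketch — the column order is as documented above):

```shell
# Sum the cost_usd column (5th) of a costs CSV, skipping the header row.
sum_costs() {
  awk -F, 'NR > 1 { sum += $5 } END { printf "%.4f\n", sum }'
}

# tutti-ai report costs --format csv --last 30d | sum_costs
```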

tutti-ai budgets

Per-agent budget config and current spend.

tutti-ai budgets                               # every agent with a budget
tutti-ai budgets --agent triage                # one agent

Output:

Agent: triage
  Per-run budget:       $0.0200
  Daily budget:         $5.0000 | today: $1.2400 (24.8%)
  Monthly budget:       $50.0000 | this month: $18.3000 (36.6%)

Memory

tutti-ai memory

Manage per-user memories (uses TUTTI_PG_URL like the runtime).

tutti-ai memory list --user alice                      # every memory for alice
tutti-ai memory search "favorite colour" --user alice  # semantic search
tutti-ai memory add "Prefers dark mode" --user alice   # manual insert
tutti-ai memory add "Allergic to shellfish" --user alice --importance 3
tutti-ai memory delete <memory-id> --user alice        # remove one memory
tutti-ai memory clear --user alice                     # delete all (confirms)
tutti-ai memory export --user alice > alice-memory.json

Common options:

  Flag               Description
  ─────────────────────────────────────────────────────────────────
  --user <user-id>   End-user identifier (required for most subcommands).
  --importance <n>   1 (low), 2 (normal, default), 3 (high). Used by add.

Memories are ranked by importance and recency; search uses the same embedding model as the runtime’s semantic memory.


Evaluation

tutti-ai eval

Run evaluation suites, manage golden cases, and gate CI on regression.

tutti-ai eval list                                  # every golden case + latest status
tutti-ai eval record <session-id>                   # promote a past run to a golden case
tutti-ai eval run                                   # replay every case, report pass/fail
tutti-ai eval run --case <id>                       # filter by id (8-char prefix ok)
tutti-ai eval run --tag regression                  # filter by tag
tutti-ai eval run --ci                              # JUnit XML + exit 1 on any failure
tutti-ai eval suite ./my-suite.ts                   # v1 assertion-based suite runner

eval run options:

  Flag                 Description
  ────────────────────────────────────────────────────────────────────────
  --case <id>          Run only the case with this id (full or 8-char prefix).
  --tag <tag>          Run only cases carrying this tag. Combinable with --case (ANDed).
  --ci                 Plain one-line-per-case output, no ANSI; writes JUnit XML to .tutti/eval-results.xml; exits 1 if any case fails.
  -s, --score <path>   Score file to load (default ./tutti.score.ts).

eval record <session-id> captures the output + tool sequence of a past session and stores it as a pinned golden case. Subsequent eval run calls replay the agent against the captured input and score the result with configured scorers (exact, similarity, tool-sequence, or custom modules).

See the eval guide for scorer configuration and CI integration.


Human-in-the-loop

tutti-ai interrupts

Review and resolve approval-gated tool calls produced by agents with requireApproval configured.

tutti-ai interrupts                         # interactive TUI (default)
tutti-ai interrupts list                    # plain table, script-friendly
tutti-ai interrupts approve <interrupt-id>  # approve directly
tutti-ai interrupts deny <interrupt-id>     # deny directly

Common options:

  Flag                  Default                 Description
  ───────────────────────────────────────────────────────────
  -u, --url <url>       http://127.0.0.1:3847   Server URL.
  -k, --api-key <key>   TUTTI_API_KEY env       Bearer token.

The interactive TUI shows each pending interrupt (agent, tool, input, reason) and accepts a (approve), d (deny), s (skip), or q (quit). list prints a plain table and exits — useful in shell scripts.

tutti-ai approve

Alias for the interactive TUI (tutti-ai interrupts with no subcommand). Same flags as interrupts.

tutti-ai approve

Error handling

The CLI installs global handlers for unhandledRejection and uncaughtException, ensuring a clean error message and exit code 1 on unexpected failures. All user-facing error messages are redacted through SecretsManager before printing — secrets (API keys, tokens, DB URLs) never appear in output.
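Redaction is simple substitution over known secret values; an illustrative sketch of the idea (not the actual SecretsManager implementation):

```typescript
// Replace every known secret value in a message with a fixed marker,
// longest-first so one secret containing another can't leak a suffix.
function redact(message: string, secrets: string[]): string {
  let out = message;
  for (const s of [...secrets].sort((a, b) => b.length - a.length)) {
    if (s.length > 0) out = out.split(s).join("[REDACTED]");
  }
  return out;
}
```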
