Self-hosted

Run Tutti
on your own boxes.

The full Tutti runtime is Apache 2.0 and self-hostable. Same score files, same voices, same CLI as everywhere else. No licence keys, no telemetry, no feature gates — just npm install and ship.

Read the install docs →
What you get

Everything in one repo.

01

Apache 2.0, no asterisks.

The full runtime, all official voices, the CLI, and the HTTP server are open source. Use commercially, modify, redistribute. No telemetry, no licence keys, no feature gates.

02

TypeScript end-to-end.

Strict TypeScript across the runtime, the CLI, and every official voice. Score files are typed objects, not config strings; voice tools come with Zod schemas and inferred types — your IDE catches mistakes before you run.
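A sketch of what that typing buys you. Official voices validate tool input with Zod, but the same idea can be shown dependency-free; the input shape and validator name here are illustrative, not Tutti's actual API.

```typescript
// Hypothetical voice-tool input: a typed object, validated before the
// tool ever runs. Official voices express this as a Zod schema; this
// hand-rolled version keeps the sketch self-contained.
interface CreateIssueInput {
  repo: string;       // "owner/name"
  title: string;
  labels?: string[];
}

function validateCreateIssueInput(raw: unknown): CreateIssueInput {
  const obj = raw as Record<string, unknown>;
  if (typeof obj?.repo !== "string" || !/^[^/]+\/[^/]+$/.test(obj.repo)) {
    throw new Error("repo must look like 'owner/name'");
  }
  if (typeof obj.title !== "string" || obj.title.length === 0) {
    throw new Error("title must be a non-empty string");
  }
  if (obj.labels !== undefined && !Array.isArray(obj.labels)) {
    throw new Error("labels must be an array of strings");
  }
  return obj as unknown as CreateIssueInput;
}
```

With a real Zod schema the TypeScript type is inferred from the schema, so the IDE flags a bad `repo` string before anything runs.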

03

Bring your own everything.

Pick your model provider, your database, your queue, your hosting. Tutti is opinionated about its own internals and unopinionated about your stack.

04

Production primitives on day one.

Permission scopes, secret redaction, prompt-injection guard, OTEL spans, eval CLI, HITL approval flows. Not a roadmap — already in the runtime.

60-second install

Four commands. One running agent.

No signup. No API key required for the local sandbox. Drop in your provider key only when you’re ready.

# 1. Scaffold a project
npx tutti-ai init my-agent
cd my-agent

# 2. Add a voice (or several)
tutti-ai add github playwright

# 3. Drop your provider key into .env
echo "ANTHROPIC_API_KEY=sk-ant-..." >> .env

# 4. Run
tutti-ai run

The example score file that init scaffolds:

import { AnthropicProvider, defineScore } from "@tuttiai/core";
import { GitHubVoice } from "@tuttiai/github";
import { PlaywrightVoice } from "@tuttiai/playwright";

export default defineScore({
  provider: new AnthropicProvider(),
  agents: {
    reviewer: {
      name: "Reviewer",
      model: "claude-sonnet-4-20250514",
      system_prompt: "You review pull requests against our style guide.",
      voices: [new GitHubVoice(), new PlaywrightVoice()],
      permissions: ["network"],
      budget: { max_tokens: 50_000, max_cost_usd: 1.0 },
    },
  },
});
CLI

Ten commands cover everything.

Every operational task — running, observing, evaluating, approving — has its own command. No SDK ceremony required.

Command What it does
tutti-ai init Scaffold a new project with example score, env, and config files.
tutti-ai add Add a voice from the registry — installs the npm package and registers it.
tutti-ai run Run an agent. -p "<prompt>" for one-shot, otherwise REPL.
tutti-ai serve Start the HTTP server: REST + SSE for runs, traces, interrupts.
tutti-ai studio Local web UI on localhost:4747 for trace replay and run inspection.
tutti-ai eval Golden-dataset runner. Record cases, run, fail CI on drift.
tutti-ai traces Tail OTEL spans live in the terminal.
tutti-ai interrupts TUI for approving/denying gated tool calls in real time.
tutti-ai memory Inspect and manage user-scoped persistent memory.
tutti-ai check Verify provider keys, voice prerequisites, and score validity.
Full CLI reference →
Environment

The .env you’ll actually need.

Tutti reads keys through SecretsManager — never through process.env directly. Keys are redacted from logs, events, and errors automatically.
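The redaction side of that fits in a few lines. This is a sketch of the idea, not the actual SecretsManager API; the function name and `[REDACTED]` placeholder are assumptions.

```typescript
// Replace every known secret value with a fixed placeholder before a
// string reaches logs, events, or error messages.
function redactSecrets(text: string, secrets: Iterable<string>): string {
  let out = text;
  for (const secret of secrets) {
    if (secret.length === 0) continue;
    out = out.split(secret).join("[REDACTED]");
  }
  return out;
}
```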

Variable Purpose
ANTHROPIC_API_KEY Anthropic API key (required if you use AnthropicProvider).
OPENAI_API_KEY OpenAI API key.
DATABASE_URL Postgres connection string. Used by Postgres voice + persistent session/memory stores.
TUTTI_PG_URL Optional override — Postgres URL specifically for Tutti session and memory stores.
GITHUB_TOKEN Personal access token for the GitHub voice.
SLACK_BOT_TOKEN Bot token (xoxb-...) for the Slack voice.
STRIPE_SECRET_KEY Stripe secret key. Strongly prefer sk_test_ during development.
OTEL_EXPORTER_OTLP_ENDPOINT OTLP/HTTP collector URL for traces (Honeycomb, Tempo, Datadog, etc.).
Deploy

Run Tutti where you already run things.

Docker

Single container running `tutti-ai serve` on port 3847. Multi-stage Dockerfile in the repo. Healthcheck on /health.

Open →

docker-compose

Server + Postgres (pgvector) + Redis in one file. The repo ships a working docker-compose.yml — `docker compose up -d` and you are live.

Open →
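The shape of that file, as an illustrative sketch only; image tags and credentials are placeholders, and the repo's docker-compose.yml is the source of truth.

```yaml
# Illustrative compose sketch: server + pgvector Postgres + Redis.
services:
  server:
    build: .
    ports: ["3847:3847"]
    env_file: .env
    environment:
      DATABASE_URL: postgres://tutti:tutti@db:5432/tutti
    depends_on: [db, redis]
  db:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: tutti
      POSTGRES_PASSWORD: tutti
      POSTGRES_DB: tutti
  redis:
    image: redis:7-alpine
```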

Railway

One-click deploy config in `scripts/deploy/railway.json`. Add a Postgres add-on and your provider key.

Open →

Render

Blueprint in `scripts/deploy/render.yaml`. Spins up a web service plus a managed Postgres.

Open →

Bare metal

It is a Node process — `npx tutti-ai serve` behind nginx + systemd. No magic.

Open →
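The bare-metal setup above amounts to one unit file. An illustrative sketch; the service name, user, and paths are placeholders:

```ini
# /etc/systemd/system/tutti.service — adjust user and paths to your box
[Unit]
Description=Tutti agent server
After=network-online.target
Wants=network-online.target

[Service]
User=tutti
WorkingDirectory=/srv/my-agent
EnvironmentFile=/srv/my-agent/.env
ExecStart=/usr/bin/npx tutti-ai serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```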

Anywhere Node 24+ runs

No native dependencies, no platform lock-in. If it can run Node, it can run Tutti.

Open →
Docker

A working Dockerfile to start from.

A single image that runs tutti-ai serve on port 3847; a multi-stage Dockerfile ships in the repo. Add a Postgres connection string and you have persistent sessions, memory, and traces.

# Dockerfile
# Minimal single-stage image; a multi-stage variant ships in the repo.
FROM node:24-alpine
WORKDIR /app

# Install production dependencies from the lockfile
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the project (keep .env out via .dockerignore)
COPY . .

EXPOSE 3847
CMD ["npx", "tutti-ai", "serve"]
Security defaults

Eight layers between an LLM and your prod.

Self-hosting doesn’t mean rolling your own security. Every guardrail that ships in Tutti runs on your infrastructure too — no flag to enable, no premium tier required.

Permission model

Voices declare required scopes (filesystem, network, shell, browser). The runtime enforces them at every call site.
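A minimal sketch of that enforcement, assuming the four scope names above; the function and error text are illustrative, not Tutti's internals.

```typescript
// Scopes a voice may declare, per the docs above.
type Scope = "filesystem" | "network" | "shell" | "browser";

// Called at a tool-call site: every scope the voice requires must have
// been granted to the agent, or the call is refused.
function assertScopes(granted: Scope[], required: Scope[]): void {
  const missing = required.filter((s) => !granted.includes(s));
  if (missing.length > 0) {
    throw new Error(`Permission denied: missing scope(s) ${missing.join(", ")}`);
  }
}
```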

SecretsManager

No code touches `process.env` directly. Keys are redacted from logs, events, and error messages.

PromptGuard

Tool results pass through pattern-match + schema-validate + boundary-marker filters before the model sees them.
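A rough sketch of the boundary-marker idea; the marker syntax and injection patterns here are assumptions, not Tutti's actual filters.

```typescript
// A couple of common injection phrases, as a stand-in for a real pattern set.
const SUSPICIOUS = [/ignore (all )?previous instructions/i, /you are now/i];

// Wrap untrusted tool output in explicit delimiters so the model can
// tell data from instructions, and flag suspicious content on the way in.
function wrapToolResult(toolName: string, result: string): string {
  const flagged = SUSPICIOUS.some((re) => re.test(result));
  const note = flagged ? " (flagged: possible prompt injection)" : "";
  return [
    `<<tool-result name="${toolName}"${note}>>`,
    result,
    `<<end-tool-result>>`,
  ].join("\n");
}
```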

PathSanitizer

File operations reject `/etc/passwd`, `~/.ssh`, `/proc`, `/sys`, `/dev`, and traversal patterns.
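A dependency-free sketch of that check using Node's path module; the prefix list mirrors the docs, while the function shape is assumed.

```typescript
import * as path from "node:path";

// Sensitive prefixes from the docs; ~/.ssh would be resolved and added too.
const DENIED_PREFIXES = ["/etc", "/proc", "/sys", "/dev"];

// Resolve the candidate against an allowed root; resolving also collapses
// "../" traversal, so anything that escapes the root is rejected.
function sanitizePath(root: string, candidate: string): string {
  const resolved = path.resolve(root, candidate);
  if (!resolved.startsWith(path.resolve(root) + path.sep)) {
    throw new Error(`Path escapes workspace: ${candidate}`);
  }
  if (DENIED_PREFIXES.some((p) => resolved === p || resolved.startsWith(p + "/"))) {
    throw new Error(`Path touches a denied prefix: ${candidate}`);
  }
  return resolved;
}
```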

UrlSanitizer

HTTP voices reject `file:`, `javascript:`, `data:`, and private network ranges.
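Sketched with the standard URL class; the private-range list below is a common baseline, not necessarily Tutti's exact one.

```typescript
// Parse, allow only http(s), and refuse loopback / RFC 1918 / link-local hosts.
function assertSafeUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error(`Blocked scheme: ${url.protocol}`);
  }
  const privatePatterns = [
    /^localhost$/i,
    /^127\./,
    /^10\./,
    /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./,
    /^169\.254\./,
  ];
  if (privatePatterns.some((re) => re.test(url.hostname))) {
    throw new Error(`Blocked private host: ${url.hostname}`);
  }
  return url;
}
```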

HITL interrupt store

Destructive tool calls pause until a human approves via TUI, REST, or SSE.

Token budgets

Per-agent `max_tokens` / `max_cost_usd` halts runaway loops at the boundary.
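A sketch of such a guard, mirroring the max_tokens / max_cost_usd fields from the score file; the class and method names are assumptions.

```typescript
// Accumulates usage per agent and throws the moment either limit is crossed,
// halting the loop at the boundary rather than after the fact.
class BudgetGuard {
  private tokens = 0;
  private costUsd = 0;

  constructor(
    private readonly maxTokens: number,
    private readonly maxCostUsd: number,
  ) {}

  record(tokens: number, costUsd: number): void {
    this.tokens += tokens;
    this.costUsd += costUsd;
    if (this.tokens > this.maxTokens || this.costUsd > this.maxCostUsd) {
      throw new Error(
        `Budget exceeded: ${this.tokens}/${this.maxTokens} tokens, ` +
          `$${this.costUsd.toFixed(2)}/$${this.maxCostUsd.toFixed(2)}`,
      );
    }
  }
}
```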

Audit logs

Every run, LLM call, and tool execution emits a typed event you can sink to your warehouse.

Read the full security policy →
Before you go live

Pre-prod checklist.

Eight things to verify before you point real traffic at a Tutti runtime. None of these are optional in production.

Bug or feature?

GitHub Issues

The fastest way to get something fixed.

Open an issue →
Doing something hard?

Email

For deeper deploy / architecture questions.

hello@tutti-ai.com

Start conducting.

One install. Your first agent running in 60 seconds. No signup. No telemetry.