Tutti is fully typed with TypeScript. This page provides a quick reference for the main exports.
## @tuttiai/core

The runtime package. Main exports:
### Runtime & orchestration

| Export | Description |
|---|---|
| `TuttiRuntime` | Main runtime — takes a score, runs agents |
| `AgentRunner` | Low-level agentic loop (used internally by the runtime) |
| `AgentRouter` | Multi-agent orchestration with delegation or parallel fan-out |
| `TuttiGraph`, `defineGraph`, `GraphBuilder`, `END` | Explicit directed-graph routing |
| `EventBus` | Typed event emitter |
### Sessions & memory

| Export | Description |
|---|---|
| `InMemorySessionStore` | Default in-process session storage |
| `PostgresSessionStore` | PostgreSQL session persistence |
| `InMemorySemanticStore` | In-memory semantic (long-term) memory |
| `MemoryUserMemoryStore` | In-memory user memory (`{ user_id, content, importance }`) |
| `PostgresUserMemoryStore` | Postgres user memory — survives restarts, searchable by `tutti-ai memory` |
### Durability & scheduling

| Export | Description |
|---|---|
| `MemoryCheckpointStore`, `RedisCheckpointStore`, `PostgresCheckpointStore` | Durable checkpoints for `durable: true` agents |
| `SchedulerEngine`, `MemoryScheduleStore`, `PostgresScheduleStore` | Cron / interval / one-shot scheduler for agents with a `schedule` block |
### Human-in-the-loop

| Export | Description |
|---|---|
| `MemoryInterruptStore`, `PostgresInterruptStore` | Approval gates for `requireApproval` tool calls |
| `globMatch` | Glob helper used by approval rules |
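This page doesn't spell out `globMatch`'s exact semantics. Assuming conventional shell-style patterns (`*` matches any run of characters, `?` exactly one), a matcher of that kind can be sketched as follows — an illustration, not the library's implementation:

```typescript
// Sketch of shell-style glob matching of the kind an approval rule might
// use. Not the actual globMatch from @tuttiai/core.
function globMatchSketch(pattern: string, value: string): boolean {
  // Escape regex metacharacters except * and ?, then translate the globs.
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp(`^${escaped.replace(/\*/g, ".*").replace(/\?/g, ".")}$`);
  return re.test(value);
}
```

With a rule like `write_*`, this would gate `write_file` for approval while letting `read_file` pass through.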
### Guardrails & evaluation

| Export | Description |
|---|---|
| `profanityFilter()`, `piiDetector()`, `topicBlocker()` | Built-in `beforeRun` / `afterRun` guardrail factories |
| `GoldenRunner`, `JsonFileGoldenStore` | Golden-dataset CI regression runner |
| `ExactScorer`, `SimilarityScorer`, `ToolSequenceScorer` | Built-in scorers for golden cases |
| `EvalRunner` | v1 assertion-based suite runner |
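The factory signatures aren't shown on this page, but the general shape of a guardrail factory can be sketched as below. The `GuardrailResult` type and `topicBlockerSketch` name are illustrative stand-ins, not the real API:

```typescript
// Illustrative beforeRun-style guardrail factory in the spirit of
// topicBlocker() — the real signature lives in @tuttiai/core.
interface GuardrailResult {
  allowed: boolean;
  reason?: string;
}

function topicBlockerSketch(blockedTopics: string[]) {
  // Returns a check that runs on user input before the agent does.
  return (input: string): GuardrailResult => {
    const hit = blockedTopics.find((topic) =>
      input.toLowerCase().includes(topic.toLowerCase()),
    );
    return hit
      ? { allowed: false, reason: `blocked topic: ${hit}` }
      : { allowed: true };
  };
}
```

The factory pattern lets a score configure the guardrail once (the topic list) and hand the runtime a plain function to call on every turn.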
### Observability

| Export | Description |
|---|---|
| `getTuttiTracer()` | Get the process-wide `TuttiTracer` instance |
| `TuttiTracer` | In-process span tracer (always on) with pluggable exporters |
| `JsonFileExporter`, `OTLPExporter` | Span exporters shipped from `@tuttiai/telemetry` |
| `MODEL_PRICES`, `registerModelPrice()` | Cost-estimate table for LLM pricing |
| `getCurrentTraceId()`, `getCurrentSpanId()` | Correlate logs with in-flight spans |
| `createLogger(name)` | Create a named pino logger instance |
### Security

| Export | Description |
|---|---|
| `SecretsManager` | API key redaction and env var access |
| `PermissionGuard` | Voice permission enforcement |
| `PromptGuard` | Prompt-injection detection and tool-result wrapping |
| `TokenBudget` | Token / cost tracking and budget enforcement |
### Score authoring

| Export | Description |
|---|---|
| `defineScore(config)` | Typed identity function for score definitions |
| `validateScore(config)` | Zod-validate a score config object |
| `ScoreLoader` | Dynamic import + Zod validation for score files |
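Since `defineScore(config)` is described as a typed identity function, the pattern looks like this. The `MiniScoreConfig` fields below are a trimmed, illustrative subset of the real `ScoreConfig`:

```typescript
// Trimmed stand-in for the real ScoreConfig from @tuttiai/types.
interface MiniScoreConfig {
  name: string;
  agents: Record<string, { model: string }>;
}

// Identity function: returns the config unchanged, but gives the caller
// type checking and editor autocompletion on the object literal.
function defineScoreSketch<T extends MiniScoreConfig>(config: T): T {
  return config;
}

const score = defineScoreSketch({
  name: "support",
  agents: { triage: { model: "claude-sonnet" } },
});
```

The identity-function trick is why `defineScore` has no runtime cost: all the value is in the generic constraint, which surfaces schema errors at compile time.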
### Caching

| Export | Description |
|---|---|
| `InMemoryToolCache` | TTL + LRU tool-result cache (`ToolCache` impl) |
| `DEFAULT_WRITE_TOOLS` | Tool names never cached (e.g. `write_file`, `create_issue`) |
### Hooks

| Export | Description |
|---|---|
| `createLoggingHook`, `createCacheHook`, `createBlocklistHook` | Ready-made hooks implementing `TuttiHooks` |
### Telemetry bootstrap (OTLP export)

| Export | Description |
|---|---|
| `initTelemetry(config)` | Wire `TuttiTracer` to an OTLP exporter using `score.telemetry` |
| `shutdownTelemetry()` | Flush pending spans and close the exporter cleanly |
### Providers

| Export | Description |
|---|---|
| `AnthropicProvider` | Claude API provider |
| `OpenAIProvider` | OpenAI / GPT provider |
| `GeminiProvider` | Google Gemini provider |
## @tuttiai/types

Type definitions. All types are re-exported from `@tuttiai/core` for convenience.

### Core types
```typescript
import type {
  // Score
  ScoreConfig,
  TelemetryConfig,
  MemoryConfig,
  ParallelEntryConfig, // { type: "parallel"; agents: string[] }
  // Agent
  AgentConfig,
  AgentResult,
  BudgetConfig,
  AgentCacheConfig, // per-agent cache opt-in
  ParallelAgentResult, // aggregate result from runParallelWithSummary
  // Voice
  Voice,
  Tool,
  ToolResult,
  ToolContext,
  VoiceContext,
  Permission, // "network" | "filesystem" | "shell" | "browser"
  // LLM
  LLMProvider,
  ChatRequest,
  ChatResponse,
  ChatMessage,
  ContentBlock,
  TokenUsage,
  StreamChunk,
  // Session
  Session,
  SessionStore,
  // Events
  TuttiEvent,
  TuttiEventType,
  TuttiEventHandler,
} from "@tuttiai/types";
```
### TuttiRuntime

```typescript
class TuttiRuntime {
  constructor(score: ScoreConfig);
  readonly events: EventBus;
  readonly toolCache: ToolCache; // attached InMemoryToolCache by default
  get score(): ScoreConfig;
  run(agent_name: string, input: string, session_id?: string): Promise<AgentResult>;
  getSession(id: string): Session | undefined;
}
```
### AgentRouter

```typescript
class AgentRouter {
  constructor(score: ScoreConfig);
  readonly events: EventBus;

  /** Delegation mode: runs entry orchestrator. Parallel mode: fans input out
   *  to every agent in the ParallelEntryConfig and returns a merged AgentResult. */
  run(input: string, session_id?: string): Promise<AgentResult>;

  /** Run multiple agents simultaneously with independent sessions. One
   *  failure never blocks the others; `timeout_ms` races each agent. */
  runParallel(
    inputs: { agent_id: string; input: string }[],
    options?: { timeout_ms?: number },
  ): Promise<Map<string, AgentResult>>;

  /** Same as runParallel but returns the full aggregate with rollup metrics
   *  (merged_output, total_usage, total_cost_usd, duration_ms). */
  runParallelWithSummary(
    inputs: { agent_id: string; input: string }[],
    options?: { timeout_ms?: number },
  ): Promise<ParallelAgentResult>;
}
```
### AgentResult

```typescript
interface AgentResult {
  session_id: string;
  output: string;
  messages: ChatMessage[];
  turns: number;
  usage: TokenUsage;
}

interface ParallelAgentResult {
  results: Map<string, AgentResult>;
  merged_output: string; // "[agent_id] output\n\n[agent_id] output ..."
  total_usage: TokenUsage;
  total_cost_usd: number;
  duration_ms: number;
}
```
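Given the `merged_output` format documented above, the aggregation step can be sketched as follows (the result type is trimmed to the relevant field):

```typescript
// Builds merged_output in the "[agent_id] output" format shown above,
// joining per-agent outputs with blank lines, in insertion order.
function mergeOutputs(results: Map<string, { output: string }>): string {
  return [...results.entries()]
    .map(([agentId, result]) => `[${agentId}] ${result.output}`)
    .join("\n\n");
}
```

Because `Map` preserves insertion order, the merged output lists agents in the order their results were recorded.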
### ToolCache

```typescript
interface ToolCache {
  get(tool: string, input: unknown): Promise<ToolResult | null>;
  set(tool: string, input: unknown, result: ToolResult, ttl_ms?: number): Promise<void>;
  invalidate(tool: string, input?: unknown): Promise<void>;
  clear(): Promise<void>;
}
```
The default `InMemoryToolCache` uses sha256 keys, a 5-minute TTL, and 1000-entry LRU eviction. Write / side-effect tools listed in `DEFAULT_WRITE_TOOLS` (`write_file`, `delete_file`, `move_file`, `create_issue`, `comment_on_issue`) are never cached. Cache keys are internally scoped by `agent_name` so one agent's tool output cannot be consumed by another agent with a different trust model.
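Under those stated defaults (sha256 keys, 5-minute TTL, 1000-entry LRU, agent-scoped keys), the caching strategy can be sketched like this — an illustration of the mechanics, not the library's code:

```typescript
import { createHash } from "node:crypto";

// Minimal TTL + LRU cache sketch in the spirit of InMemoryToolCache.
class TtlLruCacheSketch<V> {
  private entries = new Map<string, { value: V; expires: number }>();
  constructor(private max = 1000, private ttlMs = 5 * 60_000) {}

  private key(agent: string, tool: string, input: unknown): string {
    // Scope keys by agent name so agents never share cached results.
    return createHash("sha256")
      .update(`${agent}:${tool}:${JSON.stringify(input)}`)
      .digest("hex");
  }

  get(agent: string, tool: string, input: unknown): V | null {
    const k = this.key(agent, tool, input);
    const entry = this.entries.get(k);
    if (!entry || entry.expires < Date.now()) {
      this.entries.delete(k); // expired or missing
      return null;
    }
    // Refresh LRU position: Map iterates in insertion order.
    this.entries.delete(k);
    this.entries.set(k, entry);
    return entry.value;
  }

  set(agent: string, tool: string, input: unknown, value: V, ttlMs = this.ttlMs): void {
    const k = this.key(agent, tool, input);
    if (this.entries.size >= this.max) {
      // Evict the least-recently-used entry (first in insertion order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(k, { value, expires: Date.now() + ttlMs });
  }
}
```

Using a `Map` for LRU works because deleting and re-inserting on each hit keeps the least-recently-used key at the front of the iteration order.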
### LLMProvider

The interface all providers implement:

```typescript
interface LLMProvider {
  chat(request: ChatRequest): Promise<ChatResponse>;
  stream(request: ChatRequest): AsyncIterable<StreamChunk>;
}
```
### StreamChunk

Yielded by `provider.stream()`:

```typescript
interface StreamChunk {
  type: "text" | "tool_use" | "usage";
  text?: string; // present when type === "text"
  tool?: Omit<ToolUseBlock, "type">; // present when type === "tool_use"
  usage?: TokenUsage; // present when type === "usage"
  stop_reason?: StopReason; // present when type === "usage"
}
```
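To show how the two interfaces fit together, here is a hypothetical provider that satisfies the `LLMProvider` shape plus a loop that drains its stream. The `Mini*` types are trimmed stand-ins for the real `@tuttiai/types` definitions, and `EchoProvider` is invented for the sketch:

```typescript
// Trimmed stand-ins for ChatRequest / ChatResponse / StreamChunk.
interface MiniChatRequest { messages: { role: string; content: string }[] }
interface MiniChatResponse { text: string }
interface MiniStreamChunk { type: "text" | "usage"; text?: string }

// Hypothetical provider: echoes the last message back, word by word.
class EchoProvider {
  async chat(request: MiniChatRequest): Promise<MiniChatResponse> {
    const last = request.messages[request.messages.length - 1];
    return { text: `echo: ${last.content}` };
  }

  async *stream(request: MiniChatRequest): AsyncIterable<MiniStreamChunk> {
    const { text } = await this.chat(request);
    for (const word of text.split(" ")) yield { type: "text", text: word };
    yield { type: "usage" }; // final chunk carries usage, per StreamChunk
  }
}

// Draining the stream: concatenate text chunks, skip the usage chunk.
async function collect(provider: EchoProvider, prompt: string): Promise<string> {
  const parts: string[] = [];
  const request = { messages: [{ role: "user", content: prompt }] };
  for await (const chunk of provider.stream(request)) {
    if (chunk.type === "text" && chunk.text) parts.push(chunk.text);
  }
  return parts.join(" ");
}
```

The same `for await` loop works against any real provider, since they all return an `AsyncIterable<StreamChunk>`.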
## @tuttiai/router

Smart model routing — a meta-provider that picks the cheapest configured model per turn based on task difficulty, the agent's destructive-tool count, and the active token budget. See the Smart Routing guide for usage and policy choices.
### Provider

| Export | Description |
|---|---|
| `SmartProvider` | Meta-`LLMProvider` — classifies each request and dispatches to a configured tier; supports `previewDecision()`, `getLastDecision()`, and per-call `force_tier` / `force_reason` overrides |
### Classifiers

| Export | Description |
|---|---|
| `HeuristicClassifier` | Default — input length, code detection, complexity keywords, tool count, destructive-tool premium. ~1 ms, $0 / call |
| `LLMClassifier` | Asks a small/cheap LLM (configurable via `classifier_provider`, defaults to the small tier) for a one-word difficulty label per turn |
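For intuition, a heuristic over the signals named above (input length, code detection, complexity keywords, tool counts) might look like this. The weights and thresholds are invented for the sketch; the real ones are internal to `@tuttiai/router`:

```typescript
// Illustrative difficulty heuristic in the spirit of HeuristicClassifier.
type SketchTier = "small" | "medium" | "large";

function classifySketch(
  input: string,
  opts: { toolCount: number; destructiveTools: number },
): SketchTier {
  let score = 0;
  if (input.length > 2000) score += 2;                              // long input
  if (/\bfunction\b|\bclass\b|=>/.test(input)) score += 2;          // code detected
  if (/\b(refactor|architect|optimi[sz]e|debug)\b/i.test(input)) score += 1;
  score += Math.min(opts.toolCount, 3);                             // capped tool count
  score += opts.destructiveTools * 2;                               // destructive premium
  return score >= 6 ? "large" : score >= 3 ? "medium" : "small";
}
```

Because everything here is string tests and arithmetic, classification stays in the microsecond-to-millisecond range and costs nothing per call, which is the trade-off the table describes.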
### Types

| Export | Description |
|---|---|
| `Tier` | `"small"`, `"medium"`, `"large"`, or `"fallback"` |
| `ModelTier` | `{ tier, provider, model, max_context?, pricing? }` — one entry per configured model |
| `ClassifierStrategy` | `"heuristic"`, `"llm"`, or `"embedding"` (placeholder for a follow-up release) |
| `RoutingPolicy` | `"cost-optimised"` (default), `"quality-first"`, or `"balanced"` — biases tier thresholds |
| `SmartProviderConfig` | Constructor options for `SmartProvider` (`tiers`, `classifier`, `policy`, `max_cost_per_run_usd`, `auto_escalate`, `classifier_provider`, `on_decision`, `on_fallback`) |
| `RoutingDecision` | Per-call result: `{ tier, model, reason, classifier, estimated_input_tokens, estimated_cost_usd }` — also emitted on the `router:decision` event |
| `ClassifierContext` | Routing-only signals passed to a classifier (policy, voices loaded, turn index, remaining budget, previous stop reason, destructive-tool count) |
| `Classifier` | Strategy interface — `classify(req, ctx) => Promise<Tier>` |
| `ChatOverride` | `{ force_tier?, force_reason? }` for `SmartProvider.chat()` |
## @tuttiai/mcp

MCP bridge voice — wraps any MCP server as a Tutti voice.

```typescript
import { McpVoice } from "@tuttiai/mcp";

const voice = new McpVoice({
  server: "npx @playwright/mcp", // command to start the MCP server
  args: [],                      // additional CLI args
  env: {},                       // env vars for server process
  name: "my-mcp",                // override voice name
});
```

Tools are discovered dynamically via `client.listTools()` during `voice.setup()`.