# API Reference

TypeScript API reference for Tutti packages

Tutti is fully typed with TypeScript. This page provides a quick reference for the main exports.

## @tuttiai/core

The runtime package. Main exports:

### Runtime & orchestration

| Export | Description |
| --- | --- |
| `TuttiRuntime` | Main runtime — takes a score, runs agents |
| `AgentRunner` | Low-level agentic loop (used internally by the runtime) |
| `AgentRouter` | Multi-agent orchestration with delegation or parallel fan-out |
| `TuttiGraph`, `defineGraph`, `GraphBuilder`, `END` | Explicit directed-graph routing |
| `EventBus` | Typed event emitter |

### Sessions & memory

| Export | Description |
| --- | --- |
| `InMemorySessionStore` | Default in-process session storage |
| `PostgresSessionStore` | PostgreSQL session persistence |
| `InMemorySemanticStore` | In-memory semantic (long-term) memory |
| `MemoryUserMemoryStore` | In-memory user memory (`{ user_id, content, importance }`) |
| `PostgresUserMemoryStore` | Postgres user memory — survives restarts, searchable via `tutti-ai memory` |

### Durability & scheduling

| Export | Description |
| --- | --- |
| `MemoryCheckpointStore`, `RedisCheckpointStore`, `PostgresCheckpointStore` | Durable checkpoints for `durable: true` agents |
| `SchedulerEngine`, `MemoryScheduleStore`, `PostgresScheduleStore` | Cron / interval / one-shot scheduler for agents with a `schedule` block |

### Human-in-the-loop

| Export | Description |
| --- | --- |
| `MemoryInterruptStore`, `PostgresInterruptStore` | Approval gates for `requireApproval` tool calls |
| `globMatch` | Glob helper used by approval rules |

### Guardrails & evaluation

| Export | Description |
| --- | --- |
| `profanityFilter()`, `piiDetector()`, `topicBlocker()` | Built-in `beforeRun` / `afterRun` guardrail factories |
| `GoldenRunner`, `JsonFileGoldenStore` | Golden-dataset CI regression runner |
| `ExactScorer`, `SimilarityScorer`, `ToolSequenceScorer` | Built-in scorers for golden cases |
| `EvalRunner` | v1 assertion-based suite runner |
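
For orientation, calling the factories looks like this; where the resulting guardrails are registered (the `guardrails` array below) is an assumption of this sketch, not a documented API.

```ts
import { profanityFilter, piiDetector, topicBlocker } from "@tuttiai/core";

// Each factory returns a guardrail exposing beforeRun / afterRun checks.
// Collecting them in an array for registration is illustrative only.
const guardrails = [profanityFilter(), piiDetector(), topicBlocker()];
```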

### Observability

| Export | Description |
| --- | --- |
| `getTuttiTracer()` | Get the process-wide `TuttiTracer` instance |
| `TuttiTracer` | In-process span tracer (always on) with pluggable exporters |
| `JsonFileExporter`, `OTLPExporter` | Span exporters shipped from `@tuttiai/telemetry` |
| `MODEL_PRICES`, `registerModelPrice()` | Cost-estimate table for LLM pricing |
| `getCurrentTraceId()`, `getCurrentSpanId()` | Correlate logs with in-flight spans |
| `createLogger(name)` | Create a named pino logger instance |
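
A small sketch of log/trace correlation using the helpers above; the assumption here is that both id helpers return `undefined` when no span is active.

```ts
import { createLogger, getCurrentTraceId, getCurrentSpanId } from "@tuttiai/core";

const log = createLogger("checkout-agent");

// Attach the in-flight trace/span ids so log lines can be joined with
// exported spans (pino structured-logging call style).
log.info(
  { trace_id: getCurrentTraceId(), span_id: getCurrentSpanId() },
  "starting agent run",
);
```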

### Security

| Export | Description |
| --- | --- |
| `SecretsManager` | API key redaction and env var access |
| `PermissionGuard` | Voice permission enforcement |
| `PromptGuard` | Prompt-injection detection and tool-result wrapping |
| `TokenBudget` | Token / cost tracking and budget enforcement |

### Score authoring

| Export | Description |
| --- | --- |
| `defineScore(config)` | Typed identity function for score definitions |
| `validateScore(config)` | Zod-validate a score config object |
| `ScoreLoader` | Dynamic import + Zod validation for score files |
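
A minimal authoring sketch. `defineScore` only provides typing, so the fields shown (`name`, `agents`, `model`, `instructions`) are illustrative rather than the full `ScoreConfig` schema, and `validateScore`'s behaviour on failure is not shown.

```ts
import { defineScore, validateScore } from "@tuttiai/core";

// Illustrative fields only — consult ScoreConfig for the real schema.
const score = defineScore({
  name: "support-desk",
  agents: {
    triage: {
      model: "claude-sonnet-4-5",
      instructions: "Classify the ticket and route it to the right queue.",
    },
  },
});

validateScore(score); // Zod-validates the object against the score schema
```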

### Caching

| Export | Description |
| --- | --- |
| `InMemoryToolCache` | TTL + LRU tool-result cache (`ToolCache` impl) |
| `DEFAULT_WRITE_TOOLS` | Tool names never cached (e.g. `write_file`, `create_issue`) |

### Hooks

| Export | Description |
| --- | --- |
| `createLoggingHook`, `createCacheHook`, `createBlocklistHook` | Ready-made hooks implementing `TuttiHooks` |
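
Each factory returns a `TuttiHooks` implementation. How hooks are attached to a runtime is not shown here, and the option passed to `createBlocklistHook` below is a hypothetical shape.

```ts
import { createLoggingHook, createCacheHook, createBlocklistHook } from "@tuttiai/core";

// Combining hooks into one array for registration is illustrative only.
const hooks = [
  createLoggingHook(),
  createCacheHook(),
  createBlocklistHook({ blocked: ["shell_exec"] }), // hypothetical option shape
];
```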

### Telemetry bootstrap (OTLP export)

| Export | Description |
| --- | --- |
| `initTelemetry(config)` | Wire `TuttiTracer` to an OTLP exporter using `score.telemetry` |
| `shutdownTelemetry()` | Flush pending spans and close the exporter cleanly |
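
A bootstrap sketch, assuming `initTelemetry` accepts the score's `telemetry` block directly and that `shutdownTelemetry` returns a promise worth awaiting:

```ts
import { initTelemetry, shutdownTelemetry } from "@tuttiai/core";
import type { ScoreConfig } from "@tuttiai/types";

declare const score: ScoreConfig; // loaded elsewhere, e.g. via ScoreLoader

// Wire the process-wide TuttiTracer to the OTLP exporter described by
// score.telemetry, run the workload, then flush spans before exiting.
initTelemetry(score.telemetry);

// ... run agents ...

await shutdownTelemetry();
```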

### Providers

| Export | Description |
| --- | --- |
| `AnthropicProvider` | Claude API provider |
| `OpenAIProvider` | OpenAI / GPT provider |
| `GeminiProvider` | Google Gemini provider |

## @tuttiai/types

Type definitions. All types are re-exported from @tuttiai/core for convenience.

### Core types

```ts
import type {
  // Score
  ScoreConfig,
  TelemetryConfig,
  MemoryConfig,
  ParallelEntryConfig,   // { type: "parallel"; agents: string[] }

  // Agent
  AgentConfig,
  AgentResult,
  BudgetConfig,
  AgentCacheConfig,      // per-agent cache opt-in
  ParallelAgentResult,   // aggregate result from runParallelWithSummary

  // Voice
  Voice,
  Tool,
  ToolResult,
  ToolContext,
  VoiceContext,
  Permission,     // "network" | "filesystem" | "shell" | "browser"

  // LLM
  LLMProvider,
  ChatRequest,
  ChatResponse,
  ChatMessage,
  ContentBlock,
  TokenUsage,
  StreamChunk,

  // Session
  Session,
  SessionStore,

  // Events
  TuttiEvent,
  TuttiEventType,
  TuttiEventHandler,
} from "@tuttiai/types";
```

### TuttiRuntime

```ts
class TuttiRuntime {
  constructor(score: ScoreConfig);
  readonly events: EventBus;
  readonly toolCache: ToolCache;       // attached InMemoryToolCache by default
  get score(): ScoreConfig;
  run(agent_name: string, input: string, session_id?: string): Promise<AgentResult>;
  getSession(id: string): Session | undefined;
}
```
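
A minimal run against the signatures above; the agent name, input, and how the score is obtained are illustrative.

```ts
import { TuttiRuntime } from "@tuttiai/core";
import type { ScoreConfig } from "@tuttiai/types";

declare const score: ScoreConfig; // e.g. built with defineScore() or a ScoreLoader

const runtime = new TuttiRuntime(score);

// run() resolves to an AgentResult once the agent's loop finishes.
const result = await runtime.run("researcher", "Summarise the open incidents");
console.log(result.output, `(${result.turns} turns)`);

// Pass the returned session_id to continue the same conversation.
await runtime.run("researcher", "Now draft a status update", result.session_id);
```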

### AgentRouter

```ts
class AgentRouter {
  constructor(score: ScoreConfig);
  readonly events: EventBus;

  /** Delegation mode: runs entry orchestrator. Parallel mode: fans input out
   *  to every agent in the ParallelEntryConfig and returns a merged AgentResult. */
  run(input: string, session_id?: string): Promise<AgentResult>;

  /** Run multiple agents simultaneously with independent sessions. One
   *  failure never blocks the others; `timeout_ms` races each agent. */
  runParallel(
    inputs: { agent_id: string; input: string }[],
    options?: { timeout_ms?: number },
  ): Promise<Map<string, AgentResult>>;

  /** Same as runParallel but returns the full aggregate with rollup metrics
   *  (merged_output, total_usage, total_cost_usd, duration_ms). */
  runParallelWithSummary(
    inputs: { agent_id: string; input: string }[],
    options?: { timeout_ms?: number },
  ): Promise<ParallelAgentResult>;
}
```
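
A fan-out sketch using the documented signatures; the agent ids and inputs are placeholders.

```ts
import { AgentRouter } from "@tuttiai/core";
import type { ScoreConfig } from "@tuttiai/types";

declare const score: ScoreConfig;

const router = new AgentRouter(score);

// Each agent runs in its own session; a 60s timeout races every agent.
const summary = await router.runParallelWithSummary(
  [
    { agent_id: "researcher", input: "Find recent Postgres CVEs" },
    { agent_id: "writer", input: "Draft the weekly changelog" },
  ],
  { timeout_ms: 60_000 },
);

console.log(summary.merged_output, summary.total_cost_usd);

// Or keep only the per-agent results:
const results = await router.runParallel([
  { agent_id: "researcher", input: "Find recent Postgres CVEs" },
]);
console.log(results.get("researcher")?.output);
```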

### AgentResult

```ts
interface AgentResult {
  session_id: string;
  output: string;
  messages: ChatMessage[];
  turns: number;
  usage: TokenUsage;
}

interface ParallelAgentResult {
  results: Map<string, AgentResult>;
  merged_output: string;      // "[agent_id] output\n\n[agent_id] output ..."
  total_usage: TokenUsage;
  total_cost_usd: number;
  duration_ms: number;
}
```

### ToolCache

```ts
interface ToolCache {
  get(tool: string, input: unknown): Promise<ToolResult | null>;
  set(tool: string, input: unknown, result: ToolResult, ttl_ms?: number): Promise<void>;
  invalidate(tool: string, input?: unknown): Promise<void>;
  clear(): Promise<void>;
}
```

The default `InMemoryToolCache` uses sha256 keys, a 5-minute TTL, and 1000-entry LRU eviction. Write / side-effect tools listed in `DEFAULT_WRITE_TOOLS` (`write_file`, `delete_file`, `move_file`, `create_issue`, `comment_on_issue`) are never cached. Cache keys are internally scoped by `agent_name`, so one agent's tool output cannot be consumed by another agent with a different trust model.
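
Any object implementing this interface can stand in for the default. A minimal sketch backed by a plain `Map` (no TTL or LRU eviction, simplified keying) is shown below; how a custom cache gets attached to the runtime is not covered here.

```ts
import type { ToolCache, ToolResult } from "@tuttiai/types";

// Minimal ToolCache sketch: a flat Map keyed by tool name + serialised input.
// Real implementations (like InMemoryToolCache) hash inputs and scope keys
// per agent; this one just JSON-stringifies them.
class MapToolCache implements ToolCache {
  private store = new Map<string, ToolResult>();

  private key(tool: string, input: unknown): string {
    return `${tool}:${JSON.stringify(input)}`;
  }

  async get(tool: string, input: unknown): Promise<ToolResult | null> {
    return this.store.get(this.key(tool, input)) ?? null;
  }

  async set(tool: string, input: unknown, result: ToolResult, _ttl_ms?: number): Promise<void> {
    this.store.set(this.key(tool, input), result);
  }

  async invalidate(tool: string, input?: unknown): Promise<void> {
    if (input !== undefined) {
      this.store.delete(this.key(tool, input));
      return;
    }
    // No input given: drop every cached result for this tool.
    for (const k of this.store.keys()) {
      if (k.startsWith(`${tool}:`)) this.store.delete(k);
    }
  }

  async clear(): Promise<void> {
    this.store.clear();
  }
}
```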

### LLMProvider

The interface all providers implement:

```ts
interface LLMProvider {
  chat(request: ChatRequest): Promise<ChatResponse>;
  stream(request: ChatRequest): AsyncIterable<StreamChunk>;
}
```
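
Because the contract is only two methods, cross-cutting concerns can be layered as decorators. A sketch that times calls to any inner provider without touching request or response shapes:

```ts
import type { LLMProvider, ChatRequest, ChatResponse, StreamChunk } from "@tuttiai/types";

// Decorator sketch: wraps any provider and reports call latency.
class TimedProvider implements LLMProvider {
  constructor(
    private inner: LLMProvider,
    private onTiming: (ms: number) => void,
  ) {}

  async chat(request: ChatRequest): Promise<ChatResponse> {
    const started = Date.now();
    try {
      return await this.inner.chat(request);
    } finally {
      this.onTiming(Date.now() - started);
    }
  }

  async *stream(request: ChatRequest): AsyncIterable<StreamChunk> {
    const started = Date.now();
    try {
      yield* this.inner.stream(request);
    } finally {
      this.onTiming(Date.now() - started);
    }
  }
}
```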

### StreamChunk

Yielded by `provider.stream()`:

```ts
interface StreamChunk {
  type: "text" | "tool_use" | "usage";
  text?: string;                       // present when type === "text"
  tool?: Omit<ToolUseBlock, "type">;   // present when type === "tool_use"
  usage?: TokenUsage;                  // present when type === "usage"
  stop_reason?: StopReason;            // present when type === "usage"
}
```
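
Consuming a stream then reduces to switching on `type`:

```ts
import type { LLMProvider, ChatRequest } from "@tuttiai/types";

// Print text deltas as they arrive and log tool requests and final usage.
async function streamToStdout(provider: LLMProvider, request: ChatRequest) {
  for await (const chunk of provider.stream(request)) {
    switch (chunk.type) {
      case "text":
        process.stdout.write(chunk.text ?? "");
        break;
      case "tool_use":
        console.log("\nmodel requested tool:", chunk.tool);
        break;
      case "usage":
        console.log("\ntokens:", chunk.usage, "stop:", chunk.stop_reason);
        break;
    }
  }
}
```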

## @tuttiai/router

Smart model routing — meta-provider that picks the cheapest configured model per turn based on task difficulty, the agent’s destructive-tool count, and the active token budget. See the Smart Routing guide for usage and policy choices.

### Provider

| Export | Description |
| --- | --- |
| `SmartProvider` | Meta-`LLMProvider` — classifies each request and dispatches to a configured tier; supports `previewDecision()`, `getLastDecision()`, and per-call `force_tier` / `force_reason` overrides |

### Classifiers

| Export | Description |
| --- | --- |
| `HeuristicClassifier` | Default — input length, code detection, complexity keywords, tool count, destructive-tool premium. ~1 ms, $0 / call |
| `LLMClassifier` | Asks a small/cheap LLM (configurable via `classifier_provider`, defaults to the small tier) for a one-word difficulty label per turn |

### Types

| Export | Description |
| --- | --- |
| `Tier` | `"small"`, `"medium"`, `"large"`, or `"fallback"` |
| `ModelTier` | `{ tier, provider, model, max_context?, pricing? }` — one entry per configured model |
| `ClassifierStrategy` | `"heuristic"`, `"llm"`, or `"embedding"` (placeholder for a follow-up release) |
| `RoutingPolicy` | `"cost-optimised"` (default), `"quality-first"`, or `"balanced"` — biases tier thresholds |
| `SmartProviderConfig` | Constructor options for `SmartProvider` (`tiers`, `classifier`, `policy`, `max_cost_per_run_usd`, `auto_escalate`, `classifier_provider`, `on_decision`, `on_fallback`) |
| `RoutingDecision` | Per-call result: `{ tier, model, reason, classifier, estimated_input_tokens, estimated_cost_usd }` — also emitted on the `router:decision` event |
| `ClassifierContext` | Routing-only signals passed to a classifier (policy, voices loaded, turn index, remaining budget, previous stop reason, destructive-tool count) |
| `ClassifierStrategy` (interface) | `classify(req, ctx) => Promise<Tier>` |
| `ChatOverride` | `{ force_tier?, force_reason? }` for `SmartProvider.chat()` |
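
Putting it together, a hedged configuration sketch — the model ids are placeholders, and whether `provider` in a `ModelTier` is a name string or a provider instance (and whether `classifier` takes a strategy name or an object) are assumptions of this snippet.

```ts
import { SmartProvider } from "@tuttiai/router";

// Sketch only: tier entries and the classifier value are illustrative.
const provider = new SmartProvider({
  tiers: [
    { tier: "small", provider: "anthropic", model: "claude-haiku" },
    { tier: "large", provider: "anthropic", model: "claude-sonnet" },
    { tier: "fallback", provider: "openai", model: "gpt-4o-mini" },
  ],
  classifier: "heuristic",
  policy: "cost-optimised",
  max_cost_per_run_usd: 0.5,
  on_decision: (d) => console.log(`${d.tier} → ${d.model}: ${d.reason}`),
});

// SmartProvider implements LLMProvider, so it drops in wherever a provider
// is expected; previewDecision() can inspect routing before a call is made.
```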

## @tuttiai/mcp

MCP bridge voice — wraps any MCP server as a Tutti voice.

```ts
import { McpVoice } from "@tuttiai/mcp";

const voice = new McpVoice({
  server: "npx @playwright/mcp",   // command to start the MCP server
  args: [],                         // additional CLI args
  env: {},                          // env vars for server process
  name: "my-mcp",                   // override voice name
});
```

Tools are discovered dynamically via `client.listTools()` during `voice.setup()`.
