Navigate your AI agent's path

See every LLM call, replay from any point, and let humans approve high-stakes actions. Open-source observability for AI agents.

docker compose up

# API:      http://localhost:5185
# Frontend: http://localhost:5173

Built for AI agent developers

See Everything

Every prompt, model response, tool call, and error — captured as a structured DAG with full payload, latency, and cost per step.
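
Conceptually, the trace is a DAG of event records, each linking to its parent step and carrying its own latency and cost. A minimal sketch of that shape (field names here are illustrative, not Tracewire's actual schema):

```python
# Illustrative event records: each step links to its parent, forming a DAG.
# Field names are hypothetical, not Tracewire's actual schema.
events = [
    {"id": "e1", "parent": None, "type": "Prompt",        "latency_ms": 0,   "cost_usd": 0.0},
    {"id": "e2", "parent": "e1", "type": "ModelResponse", "latency_ms": 450, "cost_usd": 0.0021},
    {"id": "e3", "parent": "e2", "type": "ToolCall",      "latency_ms": 120, "cost_usd": 0.0},
]

# Per-step fields roll up into trace-level totals.
total_latency = sum(e["latency_ms"] for e in events)
total_cost = sum(e["cost_usd"] for e in events)
```

Because every step is a node with a parent pointer, forking a replay from any event is just starting a new branch from that node.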

Replay From Any Point

Click any event, modify its payload, and fork into a new branch. Tracewire warns you about irreversible side-effects before replaying.

Human-in-the-Loop

Agents pause and wait for human approval before high-stakes actions. Approve, reject, or escalate — with full audit trail.
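
Under the hood, pause-and-approve is an ordinary awaitable gate: the agent blocks until a reviewer resolves it, or a timeout triggers escalation. A self-contained sketch of the pattern (not the SDK's implementation):

```python
import asyncio

async def pause_for_human(approval: asyncio.Future, timeout: float) -> str:
    """Block until a reviewer resolves the future, or escalate on timeout."""
    try:
        return await asyncio.wait_for(approval, timeout)
    except asyncio.TimeoutError:
        return "escalate"

async def main() -> str:
    loop = asyncio.get_running_loop()
    approval = loop.create_future()
    # Simulate a reviewer clicking Approve in the UI shortly after the pause.
    loop.call_later(0.01, approval.set_result, "approve")
    return await pause_for_human(approval, timeout=1.0)

decision = asyncio.run(main())
```

The timeout branch is what makes escalation possible: an unanswered request never blocks the agent forever.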

Zero-Touch SDKs

Drop in an adapter for LangChain, Vercel AI SDK, Semantic Kernel, or any LLM client. Your agent code stays unchanged.

Side-Effect Tracking

Flag emails, database writes, and API calls as side-effects. Tracewire surfaces them during replay so you know what can't be undone.
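
Conceptually, a side-effect flag is just metadata on the event that replay consults before re-executing anything. A minimal sketch of the idea (the flag name is illustrative, not Tracewire's actual API):

```python
# Hypothetical event log where some steps are tagged as irreversible.
events = [
    {"type": "ModelResponse", "side_effect": False},
    {"type": "EmailSent",     "side_effect": True},
    {"type": "DbWrite",       "side_effect": True},
]

def irreversible_steps(events):
    """Return the step types a replay warning should surface."""
    return [e["type"] for e in events if e["side_effect"]]
```

During replay, anything this filter returns would be surfaced for confirmation rather than silently re-run.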

Multi-Tenant

Organizations, workspaces, scoped API keys, and workspace-level replay policies. Built for teams from day one.

Three SDKs, seven adapters

Instrument your agent in the language you already use.

Python

from tracewire import trace

async with trace("my-agent", api_key="your-key") as t:
    t.log_event("Prompt", {"content": "hello"})
    # `response` is whatever your model call returned
    t.log_event("ModelResponse", {"content": response}, latency_ms=450)

    # Agent pauses — reviewer sees Approve/Reject in the UI
    decision = await t.pause_for_human(timeout=120)

TypeScript

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { trace } from "tracewire-sdk";
import { wrapLanguageModel } from "tracewire-sdk/adapters/ai-sdk";

// Wrap your Vercel AI SDK model — everything auto-captured
await trace("my-agent", async (t) => {
  const model = wrapLanguageModel(openai("gpt-4o"), t);
  const { text } = await generateText({ model, prompt: "Hello!" });
}, { apiKey: "your-key" });

.NET

using OpenAI.Chat;
using Tracewire.Sdk;
using Tracewire.Sdk.Adapters;

await using var t = await TracewireTrace.StartAsync("my-agent", apiKey: "your-key");
var llm = new ChatClientAdapter(t, "gpt-4o");

var response = await llm.CallAsync("Hello!", async prompt =>
{
    var result = await openAiClient.CompleteChatAsync([new UserChatMessage(prompt)]);
    return result.Value.Content[0].Text;
});
Framework        Language    Adapter
LangChain        Python      TracewireCallbackHandler
AutoGen          Python      TracewireAutoGenMiddleware
CrewAI           Python      TracewireCrewCallback
Vercel AI SDK    TypeScript  wrapLanguageModel()
LangChain.js     TypeScript  createLangChainCallback()
Semantic Kernel  .NET        SemanticKernelAdapter
Any LLM client   .NET        ChatClientAdapter

Start tracing in 60 seconds

docker compose up — then open http://localhost:5173

View on GitHub