Debug and Trace AI Agents

Open-source LLM observability — see every prompt, replay from any point, and let humans approve high-stakes actions.

docker compose up

# API:      http://localhost:5185
# Frontend: http://localhost:5173

Built for AI agent developers

See Everything

Every prompt, model response, tool call, and error — captured as a structured DAG with full payload, latency, and cost per step.
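Conceptually, a trace like this is a DAG of events, each carrying its payload and per-step metrics. A minimal Python sketch of that shape (the `TraceEvent` class and its fields are illustrative, not Tracewire's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    kind: str                       # "Prompt", "ModelResponse", "ToolCall", "Error"
    payload: dict
    latency_ms: float = 0.0
    cost_usd: float = 0.0
    parents: list["TraceEvent"] = field(default_factory=list)  # DAG edges

prompt = TraceEvent("Prompt", {"content": "hello"})
response = TraceEvent("ModelResponse", {"content": "hi!"},
                      latency_ms=450, cost_usd=0.0004, parents=[prompt])

# Cost of a branch is just the sum over its events
total = prompt.cost_usd + response.cost_usd
```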

Replay From Any Point

Click any event, modify its payload, and fork into a new branch. Tracewire warns you about irreversible side-effects before replaying.

Human-in-the-Loop

Agents pause and wait for human approval before high-stakes actions. Approve, reject, or escalate — with full audit trail.
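The pattern behind such a gate can be sketched in a few lines of asyncio: the agent awaits a future that a reviewer resolves, failing closed on timeout. `ApprovalGate` and its method names are illustrative, not the SDK's API:

```python
import asyncio

class ApprovalGate:
    """Holds one pending decision: 'approve', 'reject', or 'escalate'."""
    def __init__(self) -> None:
        self._decision: asyncio.Future = asyncio.get_running_loop().create_future()

    def resolve(self, decision: str) -> None:
        if not self._decision.done():
            self._decision.set_result(decision)

    async def wait(self, timeout: float) -> str:
        try:
            return await asyncio.wait_for(self._decision, timeout)
        except asyncio.TimeoutError:
            return "reject"  # fail closed: no answer means no approval

async def demo() -> str:
    gate = ApprovalGate()
    # A reviewer would click Approve in the UI; here we simulate it shortly after.
    asyncio.get_running_loop().call_later(0.01, gate.resolve, "approve")
    return await gate.wait(timeout=1.0)

print(asyncio.run(demo()))
```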

Zero-Touch SDKs

Drop in an adapter for LangChain, Vercel AI SDK, Semantic Kernel, or any LLM client. Your agent code stays unchanged.

Side-Effect Tracking

Flag emails, database writes, and API calls as side-effects. Tracewire surfaces them during replay so you know what can't be undone.
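The idea reduces to a flag on trace events that a replay checks before re-executing anything. The `side_effect` key and helper below are illustrative assumptions, not Tracewire's wire format:

```python
EVENTS = [
    {"kind": "Prompt", "payload": {"content": "refund order #123"}},
    {"kind": "ToolCall", "payload": {"tool": "send_email"}, "side_effect": True},
    {"kind": "ToolCall", "payload": {"tool": "db_write"}, "side_effect": True},
]

def irreversible_steps(events):
    """Return the events a replay should warn about before re-executing."""
    return [e for e in events if e.get("side_effect")]

for e in irreversible_steps(EVENTS):
    print("warning: replay will re-run", e["payload"]["tool"])
```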

Multi-Tenant

Organizations, workspaces, scoped API keys, and workspace-level replay policies. Built for teams from day one.

Debug AI Agents and LLM Workflows

Tracewire helps developers trace prompts, model responses, tool calls, and errors across AI agent frameworks — from prototype to production.

1. Instrument

Add a Tracewire adapter to your LLM client or agent framework. Supports LangChain, Vercel AI SDK, Semantic Kernel, and more. No code changes required, so you get prompt debugging and OpenAI tracing out of the box.

2. Observe

Every LLM call, tool invocation, and agent decision is captured as a structured DAG. Inspect payloads, latency, and cost at each step for complete AI agent debugging and LLM tracing.

3. Replay & Control

Click any event and replay from that point with modified inputs. Add human-in-the-loop approval gates for high-stakes actions. Debug agent workflow issues by forking and comparing branches.
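Forking from an event amounts to copying the history up to that point, swapping in the modified payload, and continuing on the copy, leaving the original branch intact. A toy sketch of the idea (not the actual replay engine):

```python
import copy

def fork(events, index, new_payload):
    """Copy history up to and including `index`, replacing that event's payload."""
    branch = copy.deepcopy(events[: index + 1])
    branch[index]["payload"] = new_payload
    return branch

history = [
    {"kind": "Prompt", "payload": {"content": "hello"}},
    {"kind": "ModelResponse", "payload": {"content": "hi"}},
]
branch = fork(history, 0, {"content": "hello, in French"})
# The original history is untouched; the new branch diverges at the Prompt.
```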

Trace LangChain, CrewAI, AutoGen & More

Tracewire provides framework-specific adapters that capture every step of your agent's execution — no code changes needed.

LangChain Tracing

Capture every chain step, tool call, and model response from LangChain agents with TracewireCallbackHandler. Debug prompt chains, inspect intermediate outputs, and replay failed LangChain runs.

Vercel AI SDK Tracing

Wrap your Vercel AI SDK language model with wrapLanguageModel() to automatically trace all generateText and streamText calls. Full OpenAI tracing with token counts and latency.

AutoGen Agent Tracing

Add TracewireAutoGenMiddleware to capture multi-agent conversations, tool calls, and decision points in AutoGen workflows. Debug complex agent interactions step by step.

CrewAI Tracing

Instrument CrewAI crews with TracewireCrewCallback to trace task delegation, agent reasoning, and tool execution across your entire crew workflow.

Semantic Kernel Tracing

Use SemanticKernelAdapter to capture plugin invocations, planner steps, and LLM calls in .NET Semantic Kernel applications. Full support for AI agent debugging in the .NET ecosystem.

LangChain.js Tracing

Trace LangChain.js agents in TypeScript with createLangChainCallback(). Captures chain execution, retriever calls, and model responses for end-to-end LLM tracing.

Three SDKs, seven adapters

Instrument your agent in the language you already use.

Python

from tracewire import trace

async with trace("my-agent", api_key="your-key") as t:
    t.log_event("Prompt", {"content": "hello"})
    t.log_event("ModelResponse", {"content": response}, latency_ms=450)

    # Agent pauses — reviewer sees Approve/Reject in the UI
    decision = await t.pause_for_human(timeout=120)

TypeScript

import { trace } from "tracewire-sdk";
import { wrapLanguageModel } from "tracewire-sdk/adapters/ai-sdk";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Wrap your Vercel AI SDK model — everything auto-captured
await trace("my-agent", async (t) => {
  const model = wrapLanguageModel(openai("gpt-4o"), t);
  const { text } = await generateText({ model, prompt: "Hello!" });
}, { apiKey: "your-key" });

.NET

using OpenAI.Chat;
using Tracewire.Sdk;
using Tracewire.Sdk.Adapters;

await using var t = await TracewireTrace.StartAsync("my-agent", apiKey: "your-key");
var llm = new ChatClientAdapter(t, "gpt-4o");

// openAiClient is your OpenAI ChatClient instance
var response = await llm.CallAsync("Hello!", async prompt =>
{
    var result = await openAiClient.CompleteChatAsync([new UserChatMessage(prompt)]);
    return result.Value.Content[0].Text;
});
Framework          Language     Adapter
LangChain          Python       TracewireCallbackHandler
AutoGen            Python       TracewireAutoGenMiddleware
CrewAI             Python       TracewireCrewCallback
Vercel AI SDK      TypeScript   wrapLanguageModel()
LangChain.js       TypeScript   createLangChainCallback()
Semantic Kernel    .NET         SemanticKernelAdapter
Any LLM client     .NET         ChatClientAdapter

Start tracing in 60 seconds

docker compose up — then open http://localhost:5173

View on GitHub