Open-source LLM observability — see every prompt, replay from any point, and let humans approve high-stakes actions.
```sh
docker compose up
# API: http://localhost:5185
# Frontend: http://localhost:5173
```
Every prompt, model response, tool call, and error — captured as a structured DAG with full payload, latency, and cost per step.
Click any event, modify its payload, and fork into a new branch. Tracewire warns you about irreversible side-effects before replaying.
Agents pause and wait for human approval before high-stakes actions. Approve, reject, or escalate — with full audit trail.
Drop in an adapter for LangChain, Vercel AI SDK, Semantic Kernel, or any LLM client. Your agent code stays unchanged.
Flag emails, database writes, and API calls as side-effects. Tracewire surfaces them during replay so you know what can't be undone.
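Conceptually, a flagged side-effect is just metadata on a trace event. A minimal sketch of the idea (the `TraceEvent` shape and field names here are illustrative, not Tracewire's actual API):

```python
from dataclasses import dataclass

@dataclass
class TraceEvent:
    """One step in a trace; side_effect marks actions that can't be undone."""
    kind: str
    payload: dict
    side_effect: bool = False

def irreversible_steps(events: list[TraceEvent]) -> list[TraceEvent]:
    """Return the events a replay UI should warn about before re-running."""
    return [e for e in events if e.side_effect]

events = [
    TraceEvent("Prompt", {"content": "refund order #123"}),
    TraceEvent("ToolCall", {"tool": "send_email"}, side_effect=True),
    TraceEvent("ToolCall", {"tool": "db_write"}, side_effect=True),
]
print([e.payload["tool"] for e in irreversible_steps(events)])
# → ['send_email', 'db_write']
```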
Organizations, workspaces, scoped API keys, and workspace-level replay policies. Built for teams from day one.
Tracewire helps developers trace prompts, model responses, tool calls, and errors across AI agent frameworks — from prototype to production.
Add a Tracewire adapter to your LLM client or agent framework. Supports LangChain, Vercel AI SDK, Semantic Kernel, and more — prompt debugging and OpenAI tracing start working with no changes to your agent code.
Every LLM call, tool invocation, and agent decision is captured as a structured DAG. Inspect payloads, latency, and cost at each step for complete AI agent debugging and LLM tracing.
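As a rough mental model, each node in the DAG carries a payload, latency, and cost, and parent links form the edges; the structure below is an illustrative sketch, not Tracewire's internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str              # "Prompt", "ModelResponse", "ToolCall", ...
    payload: dict
    latency_ms: int = 0
    cost_usd: float = 0.0
    parents: list[str] = field(default_factory=list)  # DAG edges

def totals(nodes: list[Node]) -> tuple[int, float]:
    """Aggregate latency and cost across every step of a run."""
    return sum(n.latency_ms for n in nodes), sum(n.cost_usd for n in nodes)

run = [
    Node("e1", "Prompt", {"content": "hello"}),
    Node("e2", "ModelResponse", {"content": "hi"}, 450, 0.0021, parents=["e1"]),
    Node("e3", "ToolCall", {"tool": "search"}, 120, 0.0, parents=["e2"]),
]
print(totals(run))  # → (570, 0.0021)
```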
Click any event and replay from that point with modified inputs. Add human-in-the-loop approval gates for high-stakes actions. Debug agent workflow issues by forking and comparing branches.
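The fork-and-replay idea, stripped to its essence (an illustrative sketch, not the real replay engine):

```python
import copy

def fork(events: list[dict], at: int, new_payload: dict) -> list[dict]:
    """Copy the trace up to index `at`, swap in a modified payload, and
    return the new branch; steps after the fork point get re-executed."""
    branch = copy.deepcopy(events[: at + 1])
    branch[at]["payload"] = new_payload
    return branch

trace = [
    {"kind": "Prompt", "payload": {"content": "hello"}},
    {"kind": "ModelResponse", "payload": {"content": "hi"}},
    {"kind": "ToolCall", "payload": {"tool": "search"}},
]
branch = fork(trace, at=0, new_payload={"content": "hola"})
print(branch)  # → [{'kind': 'Prompt', 'payload': {'content': 'hola'}}]
```

Because the branch is a deep copy, the original trace is untouched — both branches can be inspected and compared side by side.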
Tracewire provides framework-specific adapters that capture every step of your agent's execution — no code changes needed.
Capture every chain step, tool call, and model response from LangChain agents with TracewireCallbackHandler. Debug prompt chains, inspect intermediate outputs, and replay failed LangChain runs.
Wrap your Vercel AI SDK language model with wrapLanguageModel() to automatically trace all generateText and streamText calls. Full OpenAI tracing with token counts and latency.
Add TracewireAutoGenMiddleware to capture multi-agent conversations, tool calls, and decision points in AutoGen workflows. Debug complex agent interactions step by step.
Instrument CrewAI crews with TracewireCrewCallback to trace task delegation, agent reasoning, and tool execution across your entire crew workflow.
Use SemanticKernelAdapter to capture plugin invocations, planner steps, and LLM calls in .NET Semantic Kernel applications. Full support for AI agent debugging in the .NET ecosystem.
Trace LangChain.js agents in TypeScript with createLangChainCallback(). Captures chain execution, retriever calls, and model responses for end-to-end LLM tracing.
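All of these adapters follow the same callback pattern: the host framework invokes hooks at each step, and the adapter turns them into trace events. A framework-agnostic sketch (the hook names are illustrative, not any specific adapter's API):

```python
class RecordingHandler:
    """Minimal callback-style adapter: the framework calls the hooks,
    the handler records them as trace events."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str):
        self.events.append(("Prompt", prompt))

    def on_tool_call(self, name: str, args: dict):
        self.events.append(("ToolCall", name, args))

    def on_llm_end(self, text: str, latency_ms: int):
        self.events.append(("ModelResponse", text, latency_ms))

# A real framework drives these hooks itself; here we simulate one step.
h = RecordingHandler()
h.on_llm_start("What's 2+2?")
h.on_tool_call("calculator", {"expr": "2+2"})
h.on_llm_end("4", latency_ms=310)
print([e[0] for e in h.events])  # → ['Prompt', 'ToolCall', 'ModelResponse']
```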
Instrument your agent in the language you already use.
```python
from tracewire import trace

async with trace("my-agent", api_key="your-key") as t:
    t.log_event("Prompt", {"content": "hello"})
    t.log_event("ModelResponse", {"content": response}, latency_ms=450)

    # Agent pauses — reviewer sees Approve/Reject in the UI
    decision = await t.pause_for_human(timeout=120)
```
```ts
import { trace } from "tracewire-sdk";
import { wrapLanguageModel } from "tracewire-sdk/adapters/ai-sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Wrap your Vercel AI SDK model — everything auto-captured
await trace("my-agent", async (t) => {
  const model = wrapLanguageModel(openai("gpt-4o"), t);
  const { text } = await generateText({ model, prompt: "Hello!" });
}, { apiKey: "your-key" });
```
```csharp
using Tracewire.Sdk;
using Tracewire.Sdk.Adapters;
using OpenAI.Chat;

// openAiClient is an already-configured OpenAI ChatClient
await using var t = await TracewireTrace.StartAsync("my-agent", apiKey: "your-key");
var llm = new ChatClientAdapter(t, "gpt-4o");
var response = await llm.CallAsync("Hello!", async prompt =>
{
    var result = await openAiClient.CompleteChatAsync([new UserChatMessage(prompt)]);
    return result.Value.Content[0].Text;
});
```
| Framework | Language | Adapter |
|---|---|---|
| LangChain | Python | TracewireCallbackHandler |
| AutoGen | Python | TracewireAutoGenMiddleware |
| CrewAI | Python | TracewireCrewCallback |
| Vercel AI SDK | TypeScript | wrapLanguageModel() |
| LangChain.js | TypeScript | createLangChainCallback() |
| Semantic Kernel | .NET | SemanticKernelAdapter |
| Any LLM client | .NET | ChatClientAdapter |