Memory & Tracing
Persist conversation context and trace token usage across your LLM pipelines.
📄️ Conversation Memory
ConversationMemory maintains a sliding window of recent messages for multi-turn conversations.
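The ConversationMemory API itself isn't shown on this page, so the sliding-window idea can be sketched with Python's `collections.deque`; the class name `SlidingWindowMemory` and its methods here are illustrative assumptions, not the library's actual interface.

```python
from collections import deque

class SlidingWindowMemory:
    """Illustrative sketch: keep only the most recent `max_messages` turns."""

    def __init__(self, max_messages: int = 10):
        # deque with maxlen silently evicts the oldest entry on overflow
        self._messages = deque(maxlen=max_messages)

    def add(self, role: str, content: str) -> None:
        self._messages.append({"role": role, "content": content})

    def get_messages(self) -> list:
        # Return a plain list suitable for passing as an LLM messages payload
        return list(self._messages)

memory = SlidingWindowMemory(max_messages=4)
for i in range(6):
    memory.add("user", f"message {i}")

# Only the last 4 turns survive; the 2 oldest were evicted.
print(len(memory.get_messages()))
print(memory.get_messages()[0]["content"])
```

Because eviction happens automatically on append, the window never exceeds its budget regardless of conversation length.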
📄️ Token Tracer
TokenTracer tracks token usage, latency, and estimated cost per LLM call.
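The TokenTracer interface isn't documented on this page either; the snippet below is a minimal sketch of the underlying bookkeeping — per-call token counts, latency, and a cost estimate — with an assumed flat per-1K-token rate. The class name `SimpleTokenTracer` and the rate are hypothetical.

```python
class SimpleTokenTracer:
    """Illustrative sketch: record token usage, latency, and estimated cost per call."""

    def __init__(self, cost_per_1k_tokens: float = 0.002):  # assumed example rate
        self.cost_per_1k = cost_per_1k_tokens
        self.calls = []

    def record(self, prompt_tokens: int, completion_tokens: int, latency_s: float) -> None:
        total = prompt_tokens + completion_tokens
        self.calls.append({
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "latency_s": latency_s,
            "cost_usd": total / 1000 * self.cost_per_1k,
        })

    def totals(self) -> dict:
        # Aggregate across all recorded calls
        return {
            "tokens": sum(c["prompt_tokens"] + c["completion_tokens"] for c in self.calls),
            "cost_usd": sum(c["cost_usd"] for c in self.calls),
        }

tracer = SimpleTokenTracer()
tracer.record(prompt_tokens=100, completion_tokens=50, latency_s=0.2)
tracer.record(prompt_tokens=200, completion_tokens=100, latency_s=0.3)
print(tracer.totals())
```

Recording per call (rather than only running totals) preserves latency outliers and lets cost be re-estimated later if pricing changes.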