Agents
Build AI agents with tool use, function calling, MCP servers, and streaming step output.
📄️ Agents Overview
SynapseKit agents are async-first, tool-using AI systems that reason and act to complete tasks. An agent combines an LLM with a set of tools, loops until a task is complete, and tracks the full reasoning trace.
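The LLM-plus-tools loop described above can be sketched in a few lines. This is an illustrative toy, not the SynapseKit API: `run_agent`, `Tool`, and `Trace` are hypothetical names, and the real agents are async and far richer.

```python
# Toy version of the agent loop: combine an LLM with tools, loop until the
# task is complete, and record the full reasoning trace. Hypothetical names,
# not SynapseKit's actual classes.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    fn: Callable[[str], str]

@dataclass
class Trace:
    steps: list = field(default_factory=list)

def run_agent(llm, tools: dict[str, Tool], task: str, max_steps: int = 5):
    """Ask the LLM, execute any requested tool, stop on a final answer."""
    trace = Trace()
    observation = task
    for _ in range(max_steps):
        decision = llm(observation)        # ("tool", name, arg) or ("final", answer)
        trace.steps.append(decision)
        if decision[0] == "final":
            return decision[1], trace
        _, name, arg = decision
        observation = tools[name].fn(arg)  # feed the tool result back into the loop
    return "max steps reached", trace
```

The loop terminates either when the LLM emits a final answer or when the step budget runs out, and the trace keeps every intermediate decision for inspection.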
📄️ ReActAgent
ReActAgent implements the Reasoning + Acting pattern. It works with any BaseLLM — no native function calling required.
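Because ReAct needs only plain text completions, the agent parses `Thought:` / `Action:` / `Final Answer:` lines out of each LLM turn. A minimal sketch of that parsing (hypothetical helper, not SynapseKit's implementation):

```python
# Parse one ReAct-style LLM turn into either a tool action or a final answer.
# Illustrative only; the real ReActAgent prompt format may differ.
import re

ACTION_RE = re.compile(r"Action:\s*(\w+)\[(.*)\]")
FINAL_RE = re.compile(r"Final Answer:\s*(.+)", re.DOTALL)

def parse_react(text: str):
    """Return ('action', tool_name, tool_input) or ('final', answer)."""
    if (m := FINAL_RE.search(text)):
        return ("final", m.group(1).strip())
    if (m := ACTION_RE.search(text)):
        return ("action", m.group(1), m.group(2))
    raise ValueError("no Action or Final Answer found")
```

Parsing free text like this is exactly why ReAct works with any BaseLLM, and also why native function calling (below) tends to be more reliable when many tools are in play.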
📄️ FunctionCallingAgent
FunctionCallingAgent uses native LLM function calling — OpenAI tool calls or Anthropic tool use. More reliable tool selection than ReAct, especially with multiple tools.
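With native function calling, the model returns structured tool calls instead of free text, and the agent replies with `tool`-role messages. A sketch of one round trip using the OpenAI-style message shape (the dispatch helper is hypothetical, not SynapseKit's):

```python
# Execute the tool calls from one assistant turn and build the
# tool-role reply messages, OpenAI chat format. Illustrative helper only.
import json

def dispatch_tool_calls(tool_calls, registry):
    """Run each requested call; return the 'tool' role messages to send back."""
    messages = []
    for call in tool_calls:
        fn = registry[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(fn(**args)),
        })
    return messages
```

Because the tool name and arguments are structured JSON rather than parsed text, there is no ambiguity about which tool was chosen — the source of the reliability advantage over ReAct.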
📄️ Built-in Tools
Tools that ship with SynapseKit, ready to attach to any agent. Custom tools can be added alongside them via the @tool decorator (recommended) or the BaseTool class.
📄️ AgentExecutor
AgentExecutor is the recommended high-level entry point. It wraps ReActAgent or FunctionCallingAgent behind a consistent interface.
📄️ MCP (Model Context Protocol)
SynapseKit supports the Model Context Protocol for connecting to external tool servers, wrapping MCP tools for use with agents, and exposing your own tools as an MCP server.
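The MCP wire format is JSON-RPC 2.0. As a sketch, these are the two requests an agent host sends to discover and invoke a server's tools — the method names follow the MCP specification, but check the spec for the full initialization handshake and response envelopes:

```python
# Build MCP JSON-RPC requests for tool discovery and invocation.
# Shapes per the MCP spec ("tools/list", "tools/call"); sketch only.
import json

def list_tools_request(req_id: int) -> str:
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "method": "tools/list"})

def call_tool_request(req_id: int, name: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })
```

Wrapping an MCP tool for an agent then amounts to translating the agent's tool call into a `tools/call` request and the JSON-RPC result back into an observation.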
📄️ Agent Cookbook
A collection of common agent patterns with full working code examples. Copy-paste and adapt these recipes for your use case.
📄️ Tool Authoring Guide
Write custom tools for SynapseKit agents using the @tool decorator or BaseTool class.
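A decorator like @tool typically derives the tool's schema from the function itself — name, docstring, and signature. The sketch below is a hypothetical implementation of that idea, not SynapseKit's actual decorator:

```python
# Minimal @tool decorator: register a function as a tool, deriving its
# name, description, and parameter list via introspection. Illustrative only.
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Register fn as a tool; the wrapped function remains directly callable."""
    sig = inspect.signature(fn)
    TOOL_REGISTRY[fn.__name__] = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(sig.parameters),
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"sunny in {city}"
```

This is why a clear docstring and typed parameters matter when authoring tools: they become the description the LLM reads when deciding which tool to call.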
📄️ Streaming Agent Steps
Both ReActAgent and FunctionCallingAgent support stream_steps(), an async generator that yields structured step events as the agent reasons through a task. This enables real-time UIs, logging, and debugging.
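The shape of stream_steps() — an async generator of structured step events — can be sketched with a toy agent. The event fields here are illustrative assumptions, not SynapseKit's actual event schema:

```python
# Toy stream_steps(): an async generator yielding one structured event per
# agent step, consumed with `async for`. Event fields are hypothetical.
import asyncio

async def stream_steps(task: str):
    yield {"type": "thought", "content": f"planning: {task}"}
    yield {"type": "tool_call", "tool": "search", "input": task}
    yield {"type": "final", "content": "42"}

async def main():
    events = []
    async for event in stream_steps("meaning of life"):
        events.append(event)   # a real-time UI or logger would hook in here
    return events
```

Because events arrive as each step completes rather than after the whole run, a UI can render the agent's reasoning live and a logger can persist partial traces even if the run fails midway.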