Agents Overview

SynapseKit agents are async-first, tool-using AI systems that reason and act to complete tasks. An agent combines an LLM with a set of tools, loops until a task is complete, and tracks the full reasoning trace.

Core concepts

| Concept | Class | Description |
| --- | --- | --- |
| Tool | BaseTool | A single action the agent can take |
| Registry | ToolRegistry | Looks up tools by name, generates schemas |
| Memory | AgentMemory | Records Thought→Action→Observation steps |
| ReAct | ReActAgent | Prompt-based reasoning loop, any LLM |
| Function Calling | FunctionCallingAgent | Native OpenAI/Anthropic tool use |
| Executor | AgentExecutor | Unified runner that picks the right agent |
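To build intuition for what the registry concept involves, here is a minimal, self-contained sketch in plain Python of looking up tools by name and deriving a call schema from a function signature. This is an illustration only, not SynapseKit's actual ToolRegistry API; the `make_registry` helper and `calculator` function are hypothetical.

```python
import inspect

# Hypothetical sketch: map tool names to callables and derive a simple
# schema (parameter name -> annotated type) from each function's signature.
def make_registry(*funcs):
    registry = {}
    for fn in funcs:
        registry[fn.__name__] = {
            "fn": fn,
            "schema": {name: p.annotation.__name__
                       for name, p in inspect.signature(fn).parameters.items()},
        }
    return registry

def calculator(expression: str) -> str:
    # Evaluate with builtins stripped so the expression cannot reach open(), import, etc.
    return str(eval(expression, {"__builtins__": {}}))

tools = make_registry(calculator)
print(tools["calculator"]["schema"])       # {'expression': 'str'}
print(tools["calculator"]["fn"]("2 + 3"))  # 5
```

A real registry would additionally validate arguments against the schema before dispatching, which is what makes typed tool calls possible.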

Quick start

import asyncio
from synapsekit import AgentExecutor, AgentConfig, CalculatorTool
from synapsekit.llm.openai import OpenAILLM
from synapsekit.llm.base import LLMConfig

llm = OpenAILLM(LLMConfig(model="gpt-4o-mini", api_key="sk-..."))

executor = AgentExecutor(AgentConfig(
    llm=llm,
    tools=[CalculatorTool()],
    agent_type="function_calling",
))

answer = asyncio.run(executor.run("What is 2 ** 10 + 24?"))
print(answer)  # "The answer is 1048."

Agent type selection guide

Choose your agent type based on your LLM and task requirements:

| Scenario | Recommended type | Why |
| --- | --- | --- |
| OpenAI or Anthropic LLM | function_calling | Native tool_calls, more reliable |
| Any other LLM (Ollama, Mistral, etc.) | react | Works via structured text prompts |
| Need full control over loop | react | Easy to inspect Thought/Action/Observation |
| Production with strict tool schemas | function_calling | Typed arguments, fewer hallucinations |
| Local/offline models | react | No function-calling API needed |
| MCP (Model Context Protocol) tools | mcp | Connects to external MCP servers |

Agent types

"react" — Works with any LLM. Uses a structured text prompt (Thought/Action/Observation). No native function calling required. Best for local models and providers without tool-use APIs.

"function_calling" — Requires OpenAILLM or AnthropicLLM. Uses native tool_calls / tool_use for more reliable tool selection and type-safe arguments.

"mcp" — Connects to external Model Context Protocol servers. Access any MCP-compatible tool (filesystem, databases, APIs) without writing wrapper code.

Built-in tools

SynapseKit includes 48 built-in tools organized by category:

Math and code

| Tool | Class | Description |
| --- | --- | --- |
| Calculator | CalculatorTool | Safe math eval (sqrt, trig, log, etc.) |
| Python REPL | PythonREPLTool | Execute Python with persistent namespace |
| Shell | ShellTool | Run shell commands (use with care) |
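The "safe math eval" idea behind a calculator tool can be sketched in a few lines of plain Python: evaluate with builtins stripped and only an explicit allow-list of math names exposed. This is the general approach, not SynapseKit's actual implementation; the `safe_calc` helper and its allow-list are assumptions.

```python
import math

# Allow-list of math names the expression may use; nothing else is reachable.
ALLOWED = {name: getattr(math, name) for name in ("sqrt", "sin", "cos", "log", "pi", "e")}

def safe_calc(expression: str) -> float:
    # Empty __builtins__ blocks open(), __import__, and friends.
    return eval(expression, {"__builtins__": {}}, ALLOWED)

print(safe_calc("sqrt(144) + log(e)"))  # -> 13.0
```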
Search

| Tool | Class | Extra | Description |
| --- | --- | --- | --- |
| Web Search | WebSearchTool | synapsekit[search] | DuckDuckGo web search |
| DuckDuckGo | DuckDuckGoSearchTool | synapsekit[search] | Text and news search |
| Wikipedia | WikipediaTool | none | Search Wikipedia articles |
| Arxiv | ArxivSearchTool | none | Academic paper search |
| PubMed | PubMedSearchTool | none | Biomedical literature search |
| Tavily | TavilySearchTool | synapsekit[tavily] | AI-optimized web search |
| Brave Search | BraveSearchTool | none | Brave Search API |
| YouTube | YouTubeSearchTool | synapsekit[youtube] | YouTube video search |
| Bing Search | BingSearchTool | none | Bing Web Search API v7 |
| Wolfram Alpha | WolframAlphaTool | none | Wolfram Alpha short-answer API |
| Google Search | GoogleSearchTool | synapsekit[google-search] | Google web search via SerpAPI |

File and data

| Tool | Class | Description |
| --- | --- | --- |
| File Read | FileReadTool | Read local files |
| File Write | FileWriteTool | Write content to local files |
| File List | FileListTool | List files in a directory |
| PDF Reader | PDFReaderTool | Extract text from PDFs |
| JSON Query | JSONQueryTool | Query JSON with dot-notation paths |
| Regex | RegexTool | Apply regex (findall, replace, split) |
| DateTime | DateTimeTool | Get/format/parse dates and times |
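Dot-notation JSON querying, as the JSON Query tool describes it, can be sketched in plain Python: split the path on dots and walk the structure, indexing lists by integer. The `query` helper and the exact path syntax here are illustrative assumptions; the tool's real syntax may differ.

```python
import json

def query(data, path):
    # Walk "a.b.0.c"-style paths; numeric segments index into lists.
    for key in path.split("."):
        data = data[int(key)] if isinstance(data, list) else data[key]
    return data

doc = json.loads('{"orders": [{"id": 7, "items": ["pen", "ink"]}]}')
print(query(doc, "orders.0.items.1"))  # -> ink
```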

Database

| Tool | Class | Extra | Description |
| --- | --- | --- | --- |
| SQL Query | SQLQueryTool | sqlalchemy optional | SQL SELECT queries |
| SQL Schema | SQLSchemaInspectionTool | sqlalchemy optional | Inspect DB schema |
| GraphQL | GraphQLTool | synapsekit[http] | Execute GraphQL queries |
| HTTP Request | HTTPRequestTool | none | GET/POST/PUT/DELETE any endpoint |

APIs and integrations

| Tool | Class | Extra | Description |
| --- | --- | --- | --- |
| GitHub API | GitHubAPITool | none | Search repos, issues, PRs |
| Slack | SlackTool | none | Send messages via webhook or bot token |
| Notion | NotionTool | synapsekit[notion] | Search, read, create, and append to Notion pages |
| Email | EmailTool | none | Send emails via SMTP |
| Jira | JiraTool | none | Search, create, comment on Jira issues |
| Google Calendar | GoogleCalendarTool | synapsekit[gcal-tool] | List, create, delete calendar events |
| AWS Lambda | AWSLambdaTool | synapsekit[aws-lambda] | Invoke Lambda functions |
| API Builder | APIBuilderTool | none | Execute API calls from OpenAPI specs |

AI and ML

| Tool | Class | Extra | Description |
| --- | --- | --- | --- |
| Summarization | SummarizationTool | LLM required | Summarize text with an LLM |
| Sentiment Analysis | SentimentAnalysisTool | LLM required | Analyze text sentiment |
| Translation | TranslationTool | LLM required | Translate between languages |
| Image Analysis | ImageAnalysisTool | synapsekit[openai] | Analyze images with a vision LLM |
| Text to Speech | TextToSpeechTool | synapsekit[openai] | Convert text to audio (OpenAI TTS) |
| Speech to Text | SpeechToTextTool | synapsekit[openai] | Transcribe audio (Whisper API/local) |
| Vector Search | VectorSearchTool | none | Similarity search over a vector store |
| Human Input | HumanInputTool | none | Pause to collect user input |

ReActAgent vs FunctionCallingAgent vs MCPAgent

| Feature | ReActAgent | FunctionCallingAgent | MCPAgent |
| --- | --- | --- | --- |
| LLM requirement | Any LLM | OpenAI / Anthropic only | Any LLM with function calling |
| Tool format | Text prompt | JSON schema (tool_calls) | MCP protocol |
| Reliability | Good | Excellent | Depends on MCP server |
| Tracing | Thought/Action/Obs | Tool call history | Tool call history |
| Best for | Flexibility, local LLMs | Production, typed outputs | Ecosystem integrations |
| Streaming | Yes | Yes | Yes |
| Max steps | Configurable | Configurable | Configurable |

Multi-agent patterns

For complex tasks, coordinate multiple agents:

from synapsekit.multi_agent import HandoffChain, Crew

# Sequential: researcher → writer → reviewer
chain = HandoffChain([
    researcher_agent,
    writer_agent,
    reviewer_agent,
])
result = await chain.run("Write a technical blog post about vector databases.")

# Parallel crew: multiple agents tackle sub-tasks simultaneously
crew = Crew(agents=[data_agent, chart_agent, summary_agent])
results = await crew.run("Analyze Q4 sales data.")

See Multi-Agent for full patterns.
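The sequential handoff pattern above has a simple shape that can be sketched in plain Python, with each "agent" reduced to a callable whose output becomes the next agent's input. This illustrates the data flow only, not HandoffChain's real implementation; `handoff_chain` and the lambda agents are hypothetical.

```python
# Each agent transforms the running result and hands it to the next agent.
def handoff_chain(agents, task):
    result = task
    for agent in agents:
        result = agent(result)
    return result

# Toy agents standing in for researcher -> writer -> reviewer
researcher = lambda t: t + " | researched"
writer = lambda t: t + " | drafted"
reviewer = lambda t: t + " | approved"

print(handoff_chain([researcher, writer, reviewer], "vector databases"))
# -> vector databases | researched | drafted | approved
```

The parallel Crew pattern differs only in that sub-task results are gathered concurrently rather than piped in sequence.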

Sync usage

executor = AgentExecutor(AgentConfig(llm=llm, tools=[CalculatorTool()]))
answer = executor.run_sync("What is sqrt(144)?")
print(answer)  # "The square root of 144 is 12."
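A `run_sync`-style convenience method typically just drives the async entry point to completion on a fresh event loop. Here is a sketch of that common pattern in plain Python; the `Runner` class is hypothetical and SynapseKit's internals may differ.

```python
import asyncio

class Runner:
    async def run(self, task: str) -> str:
        await asyncio.sleep(0)  # stand-in for LLM and tool round-trips
        return f"done: {task}"

    def run_sync(self, task: str) -> str:
        # Create an event loop, run the coroutine, and tear the loop down.
        return asyncio.run(self.run(task))

print(Runner().run_sync("sqrt(144)"))  # -> done: sqrt(144)
```

One consequence of this pattern: a sync wrapper built on `asyncio.run` cannot be called from code that is already inside a running event loop.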

Cost and latency tips

  • Use gpt-4o-mini or llama-3.1-8b for agent loops — cheaper and faster per step
  • Set max_steps to cap runaway loops: AgentConfig(max_steps=10)
  • Use BudgetGuard to hard-stop on cost: AgentConfig(budget_guard=BudgetGuard(max_cost_usd=0.10))
  • Enable caching on the LLM to avoid re-calling identical sub-queries
  • For latency-sensitive agents, use Groq or Cerebras for the underlying LLM

from synapsekit import AgentConfig, BudgetGuard, CalculatorTool, WebSearchTool

config = AgentConfig(
    llm=llm,
    tools=[WebSearchTool(), CalculatorTool()],
    agent_type="function_calling",
    max_steps=15,
    budget_guard=BudgetGuard(max_cost_usd=0.50),
)

Next steps