FAQ
Below are common questions about SynapseKit.
What Python version is required?
Answer
SynapseKit requires Python 3.14 or newer.
How is SynapseKit different from LangChain?
Answer
SynapseKit is async-native and streaming-first from the ground up: every public API is async, and streaming is the default. It has only two hard dependencies (numpy and rank-bm25), compared to LangChain's heavy dependency tree. There are no chains, no magic callbacks, and no global state: just plain Python classes you can read, subclass, and override. See the Feature Parity Report for a detailed comparison.
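To make "streaming is the default" concrete, here is a minimal self-contained sketch of the call shape. DemoLLM and its token list are stand-ins for illustration only, not SynapseKit's actual API:

```python
import asyncio

# Stand-in class illustrating the streaming-first call shape: callers
# consume tokens with "async for" rather than waiting for a full response.
class DemoLLM:
    async def stream(self, prompt: str):
        # A real provider would yield tokens as they arrive from the model.
        for token in ("streaming", " is", " the", " default"):
            yield token

async def main() -> str:
    llm = DemoLLM()
    chunks = [tok async for tok in llm.stream("hello")]
    return "".join(chunks)

print(asyncio.run(main()))
```

Because every call site is already an async iterator, the same code works unchanged whether tokens arrive from a remote API or a local model.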
Can I use it with local models (Ollama)?
Answer
Yes. Install with pip install synapsekit[ollama] and use OllamaLLM or pass provider="ollama" to the RAG facade. See the Ollama docs for details.
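One practical note on the install command: in shells such as zsh, the square brackets in the extras syntax are glob characters, so quoting the package spec avoids a "no matches found" error:

```shell
pip install "synapsekit[ollama]"
```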
How do I add a custom LLM provider?
Answer
Extend BaseLLM and implement the stream() method. All other methods (generate(), stream_with_messages(), generate_with_messages()) are derived from it. See the LLM Overview for the full interface.
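A minimal sketch of that contract, with a local stand-in for BaseLLM: stream() is the one abstract method, and generate() is derived from it, as the answer describes. The method names match the answer; everything else (signatures, the EchoLLM toy provider) is illustrative:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import AsyncIterator

# Stand-in for the BaseLLM contract described above: subclasses implement
# stream(); generate() is derived by collecting the streamed tokens.
class BaseLLM(ABC):
    @abstractmethod
    def stream(self, prompt: str) -> AsyncIterator[str]:
        ...

    async def generate(self, prompt: str) -> str:
        # Derived method: join the streamed tokens into one string.
        return "".join([tok async for tok in self.stream(prompt)])

class EchoLLM(BaseLLM):
    """Toy provider: streams the prompt back word by word."""
    async def stream(self, prompt: str) -> AsyncIterator[str]:
        for i, word in enumerate(prompt.split()):
            yield (" " if i else "") + word

print(asyncio.run(EchoLLM().generate("hello custom provider")))
```

Deriving everything from a single streaming primitive means a new provider only has to get token delivery right once; batch-style methods come for free.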
Does it work with FastAPI?
Answer
Yes. Since SynapseKit is fully async, it integrates naturally with FastAPI. Graph workflows also support SSE streaming via sse_stream() for real-time HTTP responses.
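As an illustration of the Server-Sent Events wire format such a stream produces, here is a self-contained sketch. The token source is a stub, and the FastAPI wiring is shown only in a comment, since the exact endpoint setup depends on your app:

```python
import asyncio
from typing import AsyncIterator

# Stub token source standing in for a streaming graph workflow.
async def token_source() -> AsyncIterator[str]:
    for token in ("partial", "answer"):
        yield token

async def sse_frames() -> AsyncIterator[str]:
    # Each SSE event is a "data:" line terminated by a blank line.
    async for token in token_source():
        yield f"data: {token}\n\n"

# In a FastAPI app this generator would typically be returned as:
#   StreamingResponse(sse_frames(), media_type="text/event-stream")

async def main() -> str:
    return "".join([frame async for frame in sse_frames()])

print(asyncio.run(main()))
```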
Is it production-ready?
Answer
Yes. SynapseKit includes LLM response caching (memory, SQLite, filesystem, Redis), exponential backoff retries, token-bucket rate limiting, structured output with Pydantic validation, and graph checkpointing for fault tolerance.
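The retry behavior above is built in; purely to illustrate the exponential-backoff idea (not SynapseKit's actual implementation), here is a minimal sketch with the delay schedule made visible and kept tiny so it runs instantly:

```python
import asyncio

# Illustrative exponential backoff: retry an async call, doubling the
# wait after each failure, and record the delays actually slept.
async def with_backoff(call, retries: int = 3, base_delay: float = 0.01):
    delays = []
    for attempt in range(retries + 1):
        try:
            return await call(), delays
        except Exception:
            if attempt == retries:
                raise
            delay = base_delay * (2 ** attempt)  # 0.01, 0.02, 0.04, ...
            delays.append(delay)
            await asyncio.sleep(delay)

attempts = {"n": 0}

async def flaky():
    # Fails twice, then succeeds -- simulating transient provider errors.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result, delays = asyncio.run(with_backoff(flaky))
print(result, delays)
```

Doubling the wait between attempts gives a struggling provider room to recover instead of hammering it at a fixed interval.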
How can I contribute?
Answer
Check out the GitHub repo — open issues, submit pull requests, improve documentation, or add new integrations. See the Roadmap for planned features.