# LLM Guides
These guides cover the most important patterns for working with language models in SynapseKit. Each guide is self-contained and ends with a complete working example you can run immediately.
## Guides in this section
| Guide | What you'll build | Difficulty | Time |
|---|---|---|---|
| LLM Provider Comparison | Side-by-side benchmark across OpenAI, Anthropic, Groq, and Ollama | Beginner | ~15 min |
| Cost-Aware LLM Router | Complexity classifier + routing table + circuit breaker + budget guard | Intermediate | ~20 min |
| LLM Fallback Chains | Primary → secondary → tertiary failover with CircuitBreaker | Intermediate | ~15 min |
| Semantic Response Caching | SQLite and Redis cache backends, cache hit/miss metrics | Beginner | ~15 min |
| Structured Output with Pydantic | Pydantic BaseModel response schemas, JSON mode, field validation | Beginner | ~15 min |
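To give a flavor of the caching guide, here is a minimal, library-agnostic sketch of a prompt-keyed response cache with hit/miss counters, built on Python's standard `sqlite3` and `hashlib` modules. It is not SynapseKit's API — the real guide covers SynapseKit's SQLite and Redis backends — and it uses exact-match hashing, whereas a semantic cache would key on embedding similarity.

```python
import hashlib
import sqlite3


class ResponseCache:
    """Exact-match prompt cache with hit/miss metrics.

    A true *semantic* cache would look up responses by embedding
    similarity; hashing the prompt text is the simplest stand-in.
    """

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)"
        )
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Stable key derived from the prompt text.
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get(self, prompt: str):
        row = self.db.execute(
            "SELECT response FROM cache WHERE key = ?", (self._key(prompt),)
        ).fetchone()
        if row:
            self.hits += 1
            return row[0]
        self.misses += 1
        return None

    def put(self, prompt: str, response: str) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)",
            (self._key(prompt), response),
        )
        self.db.commit()
```

The hit/miss counters are what you would export as cache metrics; swapping SQLite for Redis only changes the storage calls, not the lookup logic.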
## Prerequisites
All guides assume you have SynapseKit installed:
```shell
pip install synapsekit
```
Individual guides list any additional provider extras (e.g. `synapsekit[openai,groq]`) in their Prerequisites section.
## Which guide should I start with?
- New to SynapseKit? Start with LLM Provider Comparison to understand the unified interface.
- Concerned about costs? Go straight to Cost-Aware LLM Router.
- Building production services? Read LLM Fallback Chains and Semantic Response Caching.
- Need structured data from LLMs? See Structured Output with Pydantic.
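As a preview of the fallback-chain pattern the production guides cover, here is a minimal, library-agnostic sketch of primary → secondary failover with a cooldown-based circuit breaker. All names here (`CircuitBreaker`, `with_fallback`, `AllProvidersFailed`) are illustrative, not SynapseKit's API; the provider "clients" are plain callables standing in for configured LLM clients.

```python
import time


class AllProvidersFailed(Exception):
    """Raised when every provider in the chain failed or was skipped."""


class CircuitBreaker:
    """Skips a provider for `cooldown` seconds after it fails."""

    def __init__(self, cooldown: float = 30.0):
        self.cooldown = cooldown
        self.open_until = 0.0  # monotonic timestamp until which we skip

    def available(self) -> bool:
        return time.monotonic() >= self.open_until

    def record_failure(self) -> None:
        self.open_until = time.monotonic() + self.cooldown


def with_fallback(providers, prompt):
    """Try each (breaker, call) pair in order; return the first success.

    `call` is any callable taking a prompt string -- a hypothetical
    stand-in for an LLM client, not a SynapseKit interface.
    """
    errors = []
    for breaker, call in providers:
        if not breaker.available():
            continue  # circuit is open: skip this provider for now
        try:
            return call(prompt)
        except Exception as exc:
            breaker.record_failure()
            errors.append(exc)
    raise AllProvidersFailed(errors)
```

The key design point is that a failing primary is taken out of rotation for a cooldown window instead of being retried on every request, so traffic flows to the secondary without paying the primary's timeout each time.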