
# LLM Guides

These guides cover the most important patterns for working with language models in SynapseKit. Each guide is self-contained and ends with a complete working example you can run immediately.

## Guides in this section

| Guide | What you'll build | Difficulty | Time |
| --- | --- | --- | --- |
| LLM Provider Comparison | Side-by-side benchmark across OpenAI, Anthropic, Groq, and Ollama | Beginner | ~15 min |
| Cost-Aware LLM Router | Complexity classifier + routing table + circuit breaker + budget guard | Intermediate | ~20 min |
| LLM Fallback Chains | Primary → secondary → tertiary failover with CircuitBreaker | Intermediate | ~15 min |
| Semantic Response Caching | SQLite and Redis cache backends, cache hit/miss metrics | Beginner | ~15 min |
| Structured Output with Pydantic | Pydantic BaseModel response schemas, JSON mode, field validation | Beginner | ~15 min |

## Prerequisites

All guides assume you have SynapseKit installed:

```shell
pip install synapsekit
```

Individual guides list any additional provider extras (e.g. `synapsekit[openai,groq]`) in their Prerequisites section.
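Extras use pip's bracket syntax. One gotcha worth knowing: in zsh (the default shell on macOS), square brackets are glob characters, so the requirement string must be quoted. A minimal sketch, using the `openai` and `groq` extras mentioned above:

```shell
# Install SynapseKit together with the OpenAI and Groq provider extras.
# The quotes keep zsh from interpreting the brackets as a glob pattern;
# in bash they are optional but harmless.
pip install "synapsekit[openai,groq]"
```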

## Which guide should I start with?