Ollama (Local)

Run open-source LLMs locally via Ollama. No API key required.

Install

# Install Ollama: https://ollama.com/download
ollama pull llama3

pip install "synapsekit[ollama]"
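
To confirm the local server is reachable before running the examples, you can query Ollama's model-listing endpoint. A minimal check, assuming Ollama's default address of http://localhost:11434 and its /api/tags route:

import json
import urllib.request

# Ollama serves an HTTP API on localhost:11434 by default;
# /api/tags lists every model pulled so far.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags["models"]:
    print(model["name"])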

Via the RAG facade

from synapsekit import RAG

rag = RAG(model="llama3", api_key="", provider="ollama")
rag.add("Your document text here")

answer = rag.ask_sync("Summarize the document.")
print(answer)
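
The facade is not limited to a single document. A sketch that reuses only the add and ask_sync calls shown above, indexing a few snippets before querying:

from synapsekit import RAG

rag = RAG(model="llama3", api_key="", provider="ollama")

# Index several short documents; each add call extends the store.
for doc in [
    "Ollama exposes a local HTTP API on port 11434.",
    "The llama3 tag pulls the 8B-parameter variant by default.",
]:
    rag.add(doc)

# The answer is grounded in the documents added above.
print(rag.ask_sync("Which port does Ollama listen on?"))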

Direct usage

from synapsekit.llm.ollama import OllamaLLM
from synapsekit.llm.base import LLMConfig

llm = OllamaLLM(LLMConfig(
    model="llama3",
    api_key="",
    provider="ollama",
    temperature=0.7,
    max_tokens=512,
))

import asyncio

async def main():
    # Tokens arrive incrementally as the model generates them.
    async for token in llm.stream("Explain async Python in one paragraph."):
        print(token, end="", flush=True)

asyncio.run(main())
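
If you want the full completion as a single string rather than incremental output, you can accumulate the streamed tokens. A sketch built only on the stream method shown above:

import asyncio

async def complete(prompt: str) -> str:
    # Collect streamed tokens and join them into one response.
    parts = []
    async for token in llm.stream(prompt):
        parts.append(token)
    return "".join(parts)

print(asyncio.run(complete("Explain async Python in one paragraph.")))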

Supported models

Any model you have pulled with ollama pull will work, for example:

ollama pull llama3
ollama pull mistral
ollama pull gemma2
ollama pull phi3
ollama pull codellama
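
Switching models is only a configuration change. A sketch assuming mistral has already been pulled:

from synapsekit.llm.ollama import OllamaLLM
from synapsekit.llm.base import LLMConfig

# Identical configuration shape; only the model name differs.
llm = OllamaLLM(LLMConfig(
    model="mistral",
    api_key="",
    provider="ollama",
    temperature=0.2,
    max_tokens=256,
))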