DeepSeek

Access DeepSeek models through their OpenAI-compatible API. The provider supports streaming, one-shot generation, and function calling.

Install

pip install synapsekit[openai]

The provider uses the official openai SDK pointed at DeepSeek's endpoint via a custom base URL.
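Because the API is OpenAI-compatible, requests carry the standard chat-completions payload. A minimal sketch of the JSON body such a request contains (field names come from the OpenAI chat API; the exact path the SDK posts to is an assumption based on OpenAI's convention):

```python
import json

# Sketch of the chat-completions payload an OpenAI-compatible API accepts.
# The model name and messages are illustrative.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
    "stream": True,  # request incremental tokens instead of one full response
}

# The SDK serializes this dict and POSTs it to <base_url>/chat/completions.
body = json.dumps(payload)
```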

Usage

from synapsekit.llm.deepseek import DeepSeekLLM
from synapsekit import LLMConfig

config = LLMConfig(
    model="deepseek-chat",
    api_key="sk-...",
    provider="deepseek",
)

llm = DeepSeekLLM(config)

# Streaming
async for token in llm.stream("Explain async/await in Python"):
    print(token, end="")

# Generate
response = await llm.generate("What is DeepSeek?")

Available models

Model | Description
deepseek-chat | General-purpose chat model
deepseek-reasoner | Enhanced reasoning capabilities

Function calling

result = await llm.call_with_tools(
    messages=[{"role": "user", "content": "Calculate 15% tip on $85"}],
    tools=[...],
)
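Since the API is OpenAI-compatible, the tools list presumably follows the OpenAI function-calling schema. A sketch of one tool definition, plus the local function the model's tool call would dispatch to (the tip_calculator name and parameters are made up for illustration, not part of synapsekit):

```python
# One entry for the tools=[...] list, in OpenAI function-calling format.
# "tip_calculator" and its parameters are hypothetical examples.
tip_tool = {
    "type": "function",
    "function": {
        "name": "tip_calculator",
        "description": "Compute the tip for a bill total.",
        "parameters": {
            "type": "object",
            "properties": {
                "bill": {"type": "number", "description": "Bill total in dollars"},
                "percent": {"type": "number", "description": "Tip percentage"},
            },
            "required": ["bill", "percent"],
        },
    },
}

# Local implementation invoked when the model returns a matching tool call.
def tip_calculator(bill: float, percent: float) -> float:
    return round(bill * percent / 100, 2)

tip_calculator(85, 15)  # → 12.75, the answer to the example prompt above
```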

Custom base URL

For self-hosted or proxy deployments:

llm = DeepSeekLLM(config, base_url="http://localhost:8000")

Auto-detection

The RAG facade auto-detects the DeepSeek provider when the model name starts with deepseek-:

from synapsekit import RAG

rag = RAG(model="deepseek-chat", api_key="sk-...")
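The detection presumably keys on the model-name prefix. A minimal sketch of that dispatch logic (the detect_provider helper and the non-DeepSeek prefixes are hypothetical, not synapsekit's public API):

```python
# Hypothetical sketch of prefix-based provider detection.
def detect_provider(model: str) -> str:
    prefixes = {
        "deepseek-": "deepseek",
        "gpt-": "openai",  # illustrative; other mappings are assumptions
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"Cannot infer provider from model name: {model!r}")

detect_provider("deepseek-chat")      # → "deepseek"
detect_provider("deepseek-reasoner")  # → "deepseek"
```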