Aleph Alpha

Aleph Alpha's Luminous and Pharia language models: European-built LLMs with strong German and multilingual capabilities.

Install

pip install synapsekit[aleph-alpha]

Setup

export ALEPH_ALPHA_API_KEY=your-api-key

Usage

from synapsekit.llm.aleph_alpha import AlephAlphaLLM
from synapsekit import LLMConfig
import os

config = LLMConfig(
    model="luminous-supreme-control",
    api_key=os.environ["ALEPH_ALPHA_API_KEY"],
    provider="aleph-alpha",
)

llm = AlephAlphaLLM(config)

# Streaming (run inside an async function)
# The prompt means "Explain machine learning in German"
async for token in llm.stream("Erkläre maschinelles Lernen auf Deutsch"):
    print(token, end="")

# Generate
response = await llm.generate("What are the Luminous model capabilities?")
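The streaming and generate calls above are coroutines, so they must run inside an async function driven by an event loop. A minimal, self-contained sketch of that pattern using only the standard library; fake_stream is a stub standing in for llm.stream, which yields tokens from the Aleph Alpha API in the real integration:

```python
import asyncio

# Stub async generator standing in for AlephAlphaLLM.stream (illustrative only).
async def fake_stream(prompt: str):
    for token in ["Maschinelles", " Lernen", " ist", " ..."]:
        yield token

async def main() -> str:
    # Consume the token stream exactly as in the usage example above.
    chunks = []
    async for token in fake_stream("Erkläre maschinelles Lernen"):
        chunks.append(token)
    return "".join(chunks)

result = asyncio.run(main())
print(result)
```

Swap fake_stream for llm.stream inside your own async function and drive it with asyncio.run in the same way.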

Available models

| Model | Notes |
| --- | --- |
| luminous-supreme-control | Flagship, instruction-tuned |
| luminous-supreme | Highest quality |
| luminous-extended-control | Balanced, instruction-tuned |
| luminous-base-control | Fast, lightweight |
| pharia-1-llm-7b-control | Pharia 7B, instruction-tuned |

Auto-detection

The RAG facade auto-detects Aleph Alpha for luminous-* and pharia-* model prefixes:

from synapsekit import RAG

rag = RAG(model="luminous-supreme-control", api_key="...")
rag.add("Your document text here")
answer = rag.ask_sync("Summarize this.")
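Prefix-based detection of this kind can be sketched as a simple lookup. The names PROVIDER_PREFIXES and detect_provider below are assumptions for illustration, not SynapseKit's actual internals:

```python
# Map model-name prefixes to provider identifiers (illustrative sketch).
PROVIDER_PREFIXES = {
    "luminous-": "aleph-alpha",
    "pharia-": "aleph-alpha",
}

def detect_provider(model: str):
    # Return the provider whose prefix matches the model name, else None.
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return None

print(detect_provider("luminous-supreme-control"))  # aleph-alpha
print(detect_provider("pharia-1-llm-7b-control"))   # aleph-alpha
```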
Tip: Luminous models are particularly strong for German-language tasks and for EU-based deployments where data residency matters.