# Features Overview
## What Makes Llumen Special
Llumen is designed for users who want privacy and simplicity without sacrificing functionality. Here's what you get:
### Performance First
- Sub-second cold starts: No waiting around
- Real-time token streaming: See responses as they're generated
- Minimal resource usage: Runs on a Raspberry Pi and old laptops
- ~17MB binary size: Smaller than most images
- <128MB RAM usage: Leaves resources for other tasks
### Three Chat Modes

- Normal Chat: Standard AI conversation for general queries and tasks
- Web Search: Search the web and get AI-powered answers with sources
- Deep Research: Autonomous agents that conduct in-depth research
### Rich Media Support
- PDF uploads: Chat about your documents
- LaTeX rendering: Perfect for math and scientific content (see the example after this list)
- Image generation: Create images directly in chat
- Markdown support: Full formatting capabilities
### Beautiful Interface
_Theme screenshots: Default plus two alternate background variants._

- Mobile-optimized: First-class mobile experience
- Multiple themes: Choose from beautiful pre-made themes
- Pattern overlays: Optional visual flair
- Dark mode: Easy on the eyes
### Universal API Support
Works with any OpenAI-compatible API:
- OpenRouter (recommended)
- OpenAI
- Local models (Ollama, LM Studio, etc.)
- Custom endpoints
### Privacy & Security
- Self-hosted: Your data stays on your device
- No telemetry: Zero tracking or analytics
- No external dependencies: Works completely offline
- MPL 2.0 licensed: Open source and transparent
## Comparison with Alternatives
| Feature | Llumen | ChatGPT | Open WebUI |
|---|---|---|---|
| Privacy | Self-hosted | Cloud | Self-hosted |
| Setup Time | 30 seconds | Instant | Hours |
| Resource Usage | <128MB | N/A | >512MB |
| Web Search | Built-in | Built-in | Manual |
| Deep Research | Agents | Powerful | No |
| Mobile UX | Excellent | Good | Poor |
| Binary Size | 17MB | N/A | Container |


