This is an automated archive made by the Lemmit Bot.
The original was posted on /r/opensource by /u/curvebass on 2025-10-31 12:45:02+00:00.
Built Solus last week - a voice assistant that runs 100% locally with zero cloud dependency. Speech-to-text (Whisper), LLM inference (Mistral via Ollama), and text-to-speech (Piper) all run on your machine.
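For anyone curious how the three pieces chain together, here's a minimal sketch of that kind of pipeline in Python. It assumes the `openai-whisper` and `ollama` packages, a running Ollama daemon with `mistral` pulled, and the `piper` CLI on PATH; the voice model filename is just a placeholder, and this isn't Solus's actual code.

```python
# Local voice-assistant loop: Whisper (STT) -> Mistral via Ollama (LLM) -> Piper (TTS).
# Illustrative sketch only; assumes ffmpeg is installed for Whisper's audio loading.

import subprocess
import whisper
import ollama

stt = whisper.load_model("base")  # small model, fits a GTX 1650

def transcribe(wav_path: str) -> str:
    """Run local speech-to-text on a recorded WAV file."""
    return stt.transcribe(wav_path)["text"].strip()

def ask_llm(prompt: str) -> str:
    """Query the local Mistral 7B model through Ollama."""
    resp = ollama.chat(model="mistral", messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

def speak(text: str, out_wav: str = "reply.wav") -> None:
    """Synthesize speech with the Piper CLI (voice file name is a placeholder)."""
    subprocess.run(
        ["piper", "--model", "en_US-amy-medium.onnx", "--output_file", out_wav],
        input=text.encode("utf-8"),
        check=True,
    )

if __name__ == "__main__":
    question = transcribe("input.wav")
    answer = ask_llm(question)
    speak(answer)
```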
Tech stack: Python + Node.js backend, Whisper for STT, Mistral 7B for responses, Piper for TTS, and text-based RAG for document Q&A. Works on consumer GPUs (tested on a GTX 1650). End-to-end latency is ~10s, with context memory and document Q&A fully functional.
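To give a feel for the text-based RAG part, here's a toy retrieve-then-prompt sketch that works without a vector database: chunk local text files, pick the chunks with the most word overlap with the question, and stuff them into the Mistral prompt. The `docs/` folder, chunk size, and keyword-overlap scoring are assumptions for illustration, not the project's actual retrieval logic.

```python
# Toy text-based RAG for document Q&A over local .txt files.
# Retrieval here is simple keyword overlap; real setups often use embeddings instead.

from pathlib import Path
import ollama

def load_chunks(doc_dir: str, size: int = 500) -> list[str]:
    """Split every .txt file in doc_dir into fixed-size character chunks."""
    chunks = []
    for path in Path(doc_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by how many question words they contain; keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str, doc_dir: str = "docs") -> str:
    """Prepend the best-matching chunks as context and ask the local model."""
    context = "\n---\n".join(top_chunks(question, load_chunks(doc_dir)))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = ollama.chat(model="mistral", messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

if __name__ == "__main__":
    print(answer("What does the setup guide say about GPU requirements?"))
```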