Comment on But its the only thing I want!
regbin_@lemmy.world 11 months ago
This is why I run local uncensored LLMs. There’s nothing it won’t answer.
The_Picard_Maneuver@startrek.website 11 months ago
That seems awesome. I wondered if it was possible for users to manage at home.
PeterPoopshit@lemmy.world 11 months ago
Yeah, just use llama.cpp, which runs on the CPU instead of the GPU. Any model you see on huggingface.co with “GGUF” in the name is compatible with llama.cpp, as long as you compile llama.cpp from source using the GitHub repository.
There is also gpt4all, which runs on top of llama.cpp and has a UI, but I’ve had trouble getting it to work.
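For anyone curious, the setup described above is roughly this (a sketch, not a definitive guide — the exact binary name and build flow depend on the llama.cpp version, and the model filename below is just a placeholder for whatever GGUF file you download from huggingface.co):

```shell
# Grab the llama.cpp source from GitHub and build it (CPU-only by default)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run any GGUF model downloaded from huggingface.co
# (placeholder path/filename — substitute your own model)
./build/bin/llama-cli -m ./models/your-model.Q4_K_M.gguf -p "Hello" -n 128
```

Older versions of llama.cpp built with plain `make` and the binary was called `main` instead of `llama-cli`, so check the repo's README for the version you clone.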
regbin_@lemmy.world 11 months ago
You can literally get it up and running in 10 minutes if you have fast internet.
Texas_Hangover@lemm.ee 11 months ago
What all is entailed in setting something like that up?
synapse1278@lemmy.world 11 months ago
The GPUs… all of them.
regbin_@lemmy.world 11 months ago
You only need a CPU and 16 GB of RAM to start with the smaller models.
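The 16 GB figure checks out with a back-of-the-envelope estimate. Assuming a 4-bit-family GGUF quantization at roughly 4.5 bits per weight (a common figure for Q4_K_M-style quants) plus about a gigabyte of overhead for the runtime and KV cache — both assumptions, not exact numbers — you can sketch it like this:

```python
def gguf_ram_estimate_gb(n_params_billion, bits_per_weight=4.5, overhead_gb=1.0):
    """Rough RAM estimate for a quantized GGUF model.

    Assumptions (illustrative, not exact): ~4.5 bits per weight for a
    Q4_K_M-style quant, plus ~1 GB of overhead for KV cache and runtime.
    """
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

print(round(gguf_ram_estimate_gb(7), 1))   # 7B model  -> ~4.9 GB
print(round(gguf_ram_estimate_gb(13), 1))  # 13B model -> ~8.3 GB
```

So even a 13B model at 4-bit quantization fits comfortably in 16 GB of RAM, which matches the claim above.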