regbin_@lemmy.world 2 years ago
This is why I run local uncensored LLMs. There’s nothing it won’t answer.
The_Picard_Maneuver@startrek.website 2 years ago
That seems awesome. I wondered if it was possible for users to manage at home.
PeterPoopshit@lemmy.world 2 years ago
Yeah, just use llama.cpp, which runs on the CPU instead of the GPU. Any model you see on huggingface.co with "GGUF" in the name is compatible with llama.cpp, as long as you build llama.cpp from source from the GitHub repository (GGUF support needs a recent build).
There is also gpt4all, which runs on top of llama.cpp and has a UI, but I've had trouble getting it to work.
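A minimal sketch of that workflow, assuming the llama-cpp-python bindings and a hypothetical GGUF file downloaded from huggingface.co:

```python
# Minimal llama.cpp usage via the llama-cpp-python bindings.
# The model path is a placeholder -- point it at any GGUF file
# downloaded from huggingface.co.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")

# Simple completion-style prompt; returns an OpenAI-like dict.
out = llm("Q: Can llama.cpp run on a CPU? A:", max_tokens=64)
print(out["choices"][0]["text"])
```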
regbin_@lemmy.world 2 years ago
You can literally get it up and running in 10 minutes if you have fast internet.
Texas_Hangover@lemm.ee 2 years ago
What all is entailed in setting something like that up?
synapse1278@lemmy.world 2 years ago
The GPUs… all of them.
regbin_@lemmy.world 2 years ago
To start, you only need a CPU and 16 GB of RAM for the smaller models.
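A rough sketch of what that CPU-only setup might look like, again assuming llama-cpp-python; the model file and settings below are placeholders, not recommendations:

```python
# CPU-only configuration sketch for llama-cpp-python.
# A 7B model quantized to ~4 bits is roughly 4 GB on disk and in
# memory, which fits comfortably in 16 GB of RAM.
import os

from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical small GGUF model
    n_threads=os.cpu_count() or 4,               # use all available CPU cores
    n_ctx=2048,                                  # modest context window keeps RAM usage down
)

print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```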