Smokeydope@lemmy.world 1 day ago
Local LLM gang represent
blazeknave@lemmy.world 1 day ago
How do I start?
Smokeydope@lemmy.world 23 hours ago
First you need to get a program that reads and runs the models. If you're an absolute newbie who doesn't understand anything technical, your best bet is llamafiles. They're extremely simple to run: just download one and follow the quickstart guide to launch it like a regular application. They recommend the LLaVA model, but you can choose from several prepackaged ones. I like the Mistral models.
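A minimal sketch of the llamafile quickstart on Linux/macOS (the file name and URL are just an example, grab whichever prepackaged model you want from their releases page):

```sh
# Download a prepackaged llamafile (example name -- pick any model you like)
wget https://huggingface.co/Mozilla/llava-v1.5-7b-llamafile/resolve/main/llava-v1.5-7b-q4.llamafile

# Mark it executable and run it -- it opens a local chat UI in your browser
chmod +x llava-v1.5-7b-q4.llamafile
./llava-v1.5-7b-q4.llamafile
```

The whole model plus the runtime is packed into that one file, which is why there's nothing to install.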
Then, once you get into it and start wanting to run things more optimized and offloaded onto a GPU, you can spend a day trying to set up kobold.cpp.
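For reference, a typical kobold.cpp launch with GPU offloading looks something like this (the model file and layer count are placeholders; tune `--gpulayers` to whatever fits in your VRAM):

```sh
# Run kobold.cpp with CUDA acceleration, offloading 30 layers to the GPU
# (model file and layer count are placeholders -- adjust for your hardware)
python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf --usecublas --gpulayers 30
```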
blazeknave@lemmy.world 23 hours ago
My primary desktop has a typical gaming GPU from 4 yrs ago
Primary fuck around box is an old Dell w/ onboard GPU running Proxmox
NAS has like no GPU
Also have a mini PC running HAOS
And a couple of unused Pis
Can I do anything with any of that?
Sharp312@lemmy.one 21 hours ago
I have dabbled with running LLMs locally, and I'd absolutely love to, but for some reason AMD dropped support for my GPU in their ROCm drivers, which are needed for using my GPU for AI on Linux.
When I tried, it fell back to using my CPU, and I could only use small models because of the low VRAM of my RX 590 😔
Smokeydope@lemmy.world 21 hours ago
I had luck with using Vulkan in kobold.cpp as a substitute for ROCm with my AMD card.
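If you want to try that, the Vulkan backend is just a launch flag in kobold.cpp, something along these lines (model name is a placeholder):

```sh
# Use the Vulkan backend instead of ROCm/CUDA -- works on older AMD cards
# that ROCm no longer supports (model file is a placeholder)
python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf --usevulkan --gpulayers 20
```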