Comment on Anon is a PC gamer
sp3ctr4l@lemmy.dbzer0.com 1 week ago
What do you mean by ‘local AI suffering’?
Did you mean to say ‘surviving’?
As in small, less capable, but still potentially useful when used in sane ways… people doing more of that?
Like, the fundamental problem with the idea of local AI dying out as a thing… is that most of the Chinese-developed models are developed under a much more open-source type of paradigm.
It’s not 100% open source, but it’s way more open source than US corpo models.
So… anybody can still download and run one of those.
I’ve had Qwen3-8B working on my Steam Deck for around a year now. Not super fast, but it does work, and… a Steam Deck is not exactly a juggernaut of GPU compute power.
Anybody with a modern laptop could figure it out.
Quexotic@sh.itjust.works 2 days ago
I’ve tried a number of local models and even the 8b models aren’t that good. Unless there’s some insane breakthrough, much better hardware will be required to get the kind of results that would be timely enough or high quality enough to be useful.
So it might drive the kind of performance enhancement that will be needed to truly democratize and make the technology accessible, but until then more performance is needed.
My 2024 laptop has increased in price by about $800 because of the buy-ups. This will either drive optimization or kill progress, or maybe some of each on a continuum.
<foil hat time>I also firmly believe that part of the storage and RAM buy-up was intended to make higher-end compute further out of reach of us plebs, forcing us further into the “everything as a service” model, and that corporate AI is a big bet that they can lay off even more people</foil hat time>
That said, if you found good results with qwen, 8b, dm me a link for the specific model, I’d love to try it. I’m still a hobbyist. 😁
sp3ctr4l@lemmy.dbzer0.com 2 days ago
I use the Alpaca flatpak; it lets you download a variety of models and manages them all inside a contained local environment.
It even has some expanding tools support: basic web searches, speech-to-text, text-to-speech… and if you can find a GGUF-format model, supposedly Alpaca can load those manually, and there’s a good deal of them on Hugging Face.
github.com/Jeffser/Alpaca
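If you’ve already got flatpak working, grabbing Alpaca from the command line should look roughly like this (I’m assuming the Flathub app ID is com.jeffser.Alpaca; double-check it on the Flathub page before running):

```shell
# Install Alpaca from Flathub (app ID assumed; verify on flathub.org)
flatpak install -y flathub com.jeffser.Alpaca

# Launch it
flatpak run com.jeffser.Alpaca
```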
Unfortunately, if you’re running Windows, I… have no clue how to set up an LLM there.
Also your tin foil hat thing isn’t even tin foil hat.
Like, various people in the AI space have outright stated that they want a paradigm where everyone just rents compute time from them because PCs are otherwise too expensive, while acting like it just happens to be the new reality that everything is so expensive, for some reason.
Nvidia went from gaming GPUs being about 50% of its business to something more like 5%, in about 5 years.
Fortunately the AI bubble will be popping soon, as… everyone has run out of money to lend.
Unfortunately this will destroy the economies of the West.
Yay capitalism!
Quexotic@sh.itjust.works 1 day ago
For the moment, I haven’t had the motivation to switch everything over to Linux, but it is coming down the line. To that end, I do know how to set up models on Windows, and it’s not all that hard, but what is the specific model name? Is it just the Qwen3-8B?
Come to think of it, I might actually be able to install the flatpak into the Windows Subsystem for Linux if it behaves the way I think it’s supposed to.
Could be a very interesting experiment.
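From what I understand, flatpak inside WSL needs systemd running, which recent WSL2 builds support if you flip it on in the distro’s wsl.conf (the systemd switch is the documented Microsoft setting; whether flatpak then behaves is the experiment):

```
# /etc/wsl.conf inside the WSL distro (requires a recent WSL2)
[boot]
systemd=true
```

Then run `wsl --shutdown` from Windows and relaunch the distro before trying the flatpak install.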
sp3ctr4l@lemmy.dbzer0.com 1 day ago
Well, if you’re coming from a Windows background, a flatpak is, roughly, to the user at least, similar to an exe.
You download a flatpak, install it, blingo blango it has its own environment that is essentially sandboxed, as it pulls in its own dependencies and such.
But, you’ll need to either go with a Linux distro that comes with flatpak support pre-configured, or set up flatpak support on a different distro.
Once you’ve got either of those, there are free app ‘stores’ for flatpak that make it extremely simple to browse, download, install a flatpak program.
Then you just click to download Alpaca, run it, and it’s got a menu: add new models, search through what it has access to, pick “Qwen 3”, the 8B-parameter variant, download it, then use it.
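If your distro doesn’t ship flatpak support out of the box, the manual setup is usually just two commands (this assumes a Debian/Ubuntu-style system; the package name is the same most places, but the package manager differs):

```shell
# Install flatpak itself (Debian/Ubuntu example; use your distro's package manager)
sudo apt install flatpak

# Add the Flathub repository, which is where most flatpaks live
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```

After that, log out and back in so the new remote shows up properly.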
I am personally using Bazzite at the moment; I used to use Debian and variants of Debian (Ubuntu, PopOS), and have futzed around with Arch and even Void… Bazzite is so far the happy medium I’ve found between stability, extensibility, and being pretty close to the cutting edge in terms of driver and kernel updates.
If you wanna try WSL (which is named backwards, but whatever), I… I have no idea what you’d have to do to get flatpaks working… on… Windows… but if you think you can, best of luck!