Comment on “AI hallucinations are getting worse – and they’re here to stay”
msage@programming.dev 2 days ago
What is your definition of reasoning?
Isn’t it just shoving AI slop back in to get new AI slop? Until it stops, because it’s reached the point where it’s just done?
What ancient wizardry do you use for your reasoning at home, if not that?
But look, we’ve had shit like this forever; it’s increasingly obvious that most people will cheer for anything, so the new ideas just get bigger and bigger. Can’t wait for the replacement; I dare not even think about what’s next. But for the love of fuck, don’t let it be quantums. Please, I beg the world.
MagicShel@lemmy.zip 2 days ago
Most of us have no use for quantum computers. That’s a government/research thing. I have no idea what the next disruptive technology will be. The AI labs are working hard on AGI, which has the potential to be genuinely disruptive and world-changing, but LLMs are not the path to get there, and I have no idea whether they are anywhere close to achieving it.
msage@programming.dev 2 days ago
Surprise surprise, most of us have no use for LLMs.
And yet everyone and their grandma is using them for everything.
People asked GPT who the next pope would be.
Or which car to buy.
Or what’s a good local salary.
I’m so fucking tired of all the shit.
lvxferre@mander.xyz 2 days ago
Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!
…what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But it certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.
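As a rough sketch of that grammar-table use case: the snippet below prompts a small instruct-tuned model through the Hugging Face `transformers` text-generation pipeline. The model name and prompt wording are illustrative assumptions, not anything from the comment; any small local instruct model would do.

```python
# Sketch of the "grammar table" use case: ask a small instruct model for a
# conjugation table. Model choice and prompt are assumptions for illustration.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumption: any small instruct model works
)

prompt = "Give me a conjugation table for the Spanish verb 'hablar' in the present tense."
result = generator(prompt, max_new_tokens=200, do_sample=False)

# The pipeline returns a list of dicts; the generated table is in "generated_text".
print(result[0]["generated_text"])
```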
msage@programming.dev 2 days ago
It’s a bit fucking expensive for a grammar tool.
I get that it gets logarithmically more expensive for every last bit of grammar, and some languages have very ridiculous nonsensical rules.
But I wish it had some broader use, that would justify its cost.
lvxferre@mander.xyz 2 days ago
Yes, it is expensive. But most of that cost is not because of simple applications, like in my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.
Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian gender-bending plurals, or English question formation), but even those are procedural: “if X, do Y”. English question formation, for instance, boils down to “if the clause has an auxiliary, invert it with the subject; otherwise, insert do”. LLMs are actually rather good at reproducing those procedural rules from examples in the data.
I hope for the opposite: that they cut costs down based on the current uses. Small models for specific applications, dirt cheap in both training and running costs.
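To put very rough numbers on “dirt cheap”: a back-of-envelope sketch, assuming 16-bit weights (2 bytes per parameter) and ignoring activations, KV cache, and serving overhead, of how much memory the weights alone need at different scales. The model sizes are illustrative, not from the comment.

```python
# Back-of-envelope sketch: memory for model weights alone, assuming 2 bytes
# per parameter (16-bit). Ignores activations, KV cache, and serving overhead.
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, size_b in [("small task-specific model", 0.5),
                     ("mid-size general model", 7),
                     ("frontier-scale model", 70)]:
    print(f"{name} ({size_b}B params): ~{weight_memory_gb(size_b):.0f} GB of weights")

# ~1 GB fits on a laptop GPU or even in CPU RAM; ~140 GB already needs
# multiple data-center GPUs, before you even count training costs.
```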
msage@programming.dev 2 days ago
But that won’t happen, since the bubble rose on promises of gorillions of returns, and those have not manifested yet.
We are so fucking stupid, I hate this timeline.