Sterile_Technique@lemmy.world 10 hours ago
The bullshit generators we call ‘AI’ don’t assume, and aren’t frantic: they just regurgitate an output based on as much bullshit input as we can stuff into them.
The output can be more or less recognizable as bullshit, but the computer doesn’t distinguish between the two.
lvxferre@mander.xyz 9 hours ago
Yup, pretty much. And the field is full of red herring terms, so they can mislead you into believing otherwise: “hallucination”, “semantic” supplementation, “reasoning” models, large “language” model…
BradleyUffner@lemmy.world 2 hours ago
Those “reasoning models” are my favorite. It’s basically the equivalent of adding another pass through the generator with the additional prompt “now sprinkle in some text that makes it look like you are thinking about each part of your answer”.
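To caricature it in code: this is a tongue-in-cheek sketch of what I mean, not how any actual "reasoning" model is implemented, and `generate` here is just a placeholder for whatever text-completion call you like.

```python
# Tongue-in-cheek sketch: a "reasoning model" as just a second pass
# through the same generator with an extra prompt bolted on.

def generate(prompt: str) -> str:
    """Hypothetical single-pass text generator (placeholder, not a real API)."""
    return f"<completion for: {prompt!r}>"

def reasoning_model(question: str) -> str:
    # Pass 1: draft an answer the usual way.
    draft = generate(question)
    # Pass 2: ask the same generator to dress the draft up with
    # "thinking out loud" text before restating the answer.
    return generate(
        "Now sprinkle in some text that makes it look like you are "
        "thinking about each part of your answer:\n" + draft
    )

print(reasoning_model("Why is the sky blue?"))
```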
bear@lemmy.blahaj.zone 7 hours ago
I’m going to be very disappointed if Elon’s AI wins.
lvxferre@mander.xyz 5 hours ago
Do you want my guess? The current “fight” will go on until the AI bubble bursts. None of the current large token models will survive; they’ll simply be ditched as “unprofitable”. Instead you’ll see a bunch of smaller models popping up for more focused tasks, advertised as something other than AI (perhaps as a “neural network solution” or similar).
So Grok, Gemini, GPT, they’re all going the way of the dodo.
That’s just my guess though. It could be wrong.
snooggums@piefed.world 3 hours ago
Small, focused learning models and other forms of AI have been used for decades.
The current bubble is just trying to make LLMs do literally everything, including accurately answering questions, even though their core design includes randomization to make the output seem more human.
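For anyone curious what that randomization looks like, here's a toy sketch of temperature sampling with made-up numbers and no real model involved: the next token is sampled from a probability distribution instead of always taking the single most likely one, which is why the same prompt can produce different answers.

```python
# Toy illustration of temperature sampling, the usual source of the
# randomness in LLM output. Logits and tokens are invented for the example.

import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    # Scale logits by temperature, then softmax into probabilities.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Sample instead of taking the argmax: this is where the nondeterminism comes from.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# "Paris" is the most likely token, but the others can still be picked.
toy_logits = {"Paris": 5.0, "Lyon": 2.0, "Marseille": 1.5}
print([sample_next_token(toy_logits) for _ in range(5)])
```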