Comment on ELI5. Limit of current gen AI/LLMs
brucethemoose@lemmy.world 1 week ago
Others have explained it well: splitting calls up and programmatic prompt engineering.
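A minimal sketch of what "splitting calls up" looks like in practice: decompose one big request into smaller, focused prompts and chain the outputs. `call_llm` here is a hypothetical stub, not a real API; swap in whatever client you actually use.

```python
# Sketch of prompt chaining: several small calls instead of one giant prompt.
# `call_llm` is a hypothetical placeholder standing in for a real LLM API.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM endpoint here.
    return f"<answer to: {prompt!r}>"

def summarize_then_translate(document: str) -> str:
    # Step 1: one focused call for the first subtask.
    summary = call_llm(f"Summarize in two sentences:\n{document}")
    # Step 2: feed that result into a second, equally focused call.
    return call_llm(f"Translate to French:\n{summary}")

print(summarize_then_translate("A long report about transformer models..."))
```

Each step stays small and inspectable, which is also why this pattern shows up in most "agentic" frameworks.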
And what is the theoretical limit of AI?
Who knows?
But practically, transformer models are kinda hitting an “innovation” wall. Big companies aren’t taking risks to fix, say, the reliance on temperature to literally randomize outputs, or to split instructions/context/output, or to add self-correction (like an undo token) or self-learning, anything. All of this has been explored in papers, but they aren’t even trying it.
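For reference, "temperature to literally randomize outputs" refers to standard temperature sampling: dividing the logits by a temperature T before the softmax, then sampling. This is a generic sketch of that mechanism, not any particular vendor's code.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    # Scale logits by 1/T: T near 0 approaches argmax (nearly deterministic),
    # higher T flattens the distribution, i.e. makes outputs more random.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# With a very low temperature, the top logit wins almost every time.
print(sample_with_temperature([2.0, 0.5, 0.1], temperature=0.01))
```

The point of the complaint above: sampling randomness is baked into how these models produce tokens, and nobody is seriously shipping alternatives.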
Their development is way more conservative than you’d think, and that’s the wall LLMs are smacking into.
vaderaj@lemmy.world 1 week ago
That’s been my issue: deep down I know all this LLM-led AI is a bubble. But every 3 months the corporations either increase the context window or release something that runs parallel subjobs better, and all of a sudden this LLM-led AI is the “future” that can perform “agentic” tasks.
It kinda makes it impossible to get people (developer friends, colleagues) to look past the marketing gimmicks.
brucethemoose@lemmy.world 1 week ago
I mean, even as-is, it’s a very useful tool. Especially as the capabilities we have get exponentially cheaper.
What people don’t get is that AI is about to become a race to the bottom, not to the top. It’s a utility for sifting through millions of documents, running simple bots, work assistants, makeshift translators, whatever; you know, old-school language modeling. And that’s really neat as the cost approaches “basically free.”