Comment on "AI hallucinations are getting worse – and they're here to stay"
lvxferre@mander.xyz 2 days ago
I'd go further: you won't reach AGI through LLM development. It's like randomly throwing bricks on a construction site, with no cement, and hoping that you'll get a house.
I'm not even sure AGI is feasible cost-wise with current hardware; we'd probably need cheaper computation per unit of energy.
stardustwager@lemm.ee 2 days ago
vintageballs@feddit.org 1 day ago
Ah yes, Mr. Professor, mind telling us how you came to this conclusion?
To me you come off like an early-1900s fearmonger, à la "There will never be a flying machine, humans aren't meant to be in the sky and it's physically impossible".
If you literally meant that there is no such thing yet, then sure, we haven’t reached AGI yet. But the rest of your sentence is very disingenuous toward the thousands of scientists and developers working on precisely these issues and also extremely ignorant of current developments.
stardustwager@lemm.ee 1 day ago
wicked@programming.dev 1 day ago
I pasted a 1k-line C++ file into Gemini, along with a screenshot and a trace log, and asked it to find the bug. It reasoned for about 5 minutes. Extract of the solution:
It correctly identified that
sqrt(_v[0]*_v[0] + _v[1]*_v[1] + _v[2]*_v[2]);
had too low precision and that using
std::hypot(_v[0], _v[1], _v[2])
would likely solve it.

If this is just autocomplete, then I agree that it's a pretty fancy one.
ramble81@lemm.ee 1 day ago
To vintage's point: the way I view it, there is no chance of AGI via the current method of hopped-up LLM/ML, but that doesn't mean we won't uncover a method in the future. Bio-engineering in an attempt to recreate a neural network, for example, or extraction of neurons via stem cells with some sort of electrical interface. My initial point was that it's way off, not that it's impossible. One day someone will go "well, that's interesting" and we'll have a whole new paradigm.
vintageballs@feddit.org 1 day ago
Funnily enough, this is also my field, though I am not at uni anymore since I now work in this area. I agree that current literature rightfully makes no claims of AGI.
Calling transformer models (also definitely not the only type of LLM that is feasible - mamba, Llada, … exist!) “fancy autocomplete” is very disingenuous in my view. Also, the current boom of AI includes way more than the flashy language models that the general population directly interacts with, as you surely know. And whether a model is able to “generalize” depends on whether you mean within its objective boundaries or outside of them, I would say.
I agree that a training objective of predicting the next token in a sequence probably won't be enough to achieve generalized intelligence. However, modelling language is the first and most important step on that path, since we humans use language to abstract and represent problems.
Looking at the current pace of development, I wouldn’t be so pessimistic, though I won’t make claims as to when we will reach AGI. While there may not be a complete theoretical framework for AGI, I believe it will be achieved in a similar way as current systems are, being developed first and explained after.
Powderhorn@beehaw.org 1 day ago
AGI is just a term used for VC and shareholders.