Comment on Don’t believe the hype: AGI is far from inevitable

BarryZuckerkorn@beehaw.org 1 month ago

That’s assuming that we are a general intelligence.

But it’s easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.

True, they’ve only calculated it’d take perhaps millions of years.

No, you’re missing my point, at least as I read the paper. They’re saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at an NP-hard problem isn’t going to solve it.

What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
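As a rough illustration of the resource point (my own sketch with made-up numbers, not something from the paper): if a problem's best known algorithm needs on the order of 2^n steps, then even enormous constant-factor increases in compute barely move the size of instance you can handle.

```python
# Illustrative sketch only; the 2**n cost and the budgets are hypothetical,
# chosen just to show why constant-factor speedups don't rescue an
# exponential-time (NP-hard, assuming P != NP) problem.

def largest_solvable_n(ops_available: float) -> int:
    """Largest instance size n such that 2**n steps fit within the budget."""
    n = 0
    while 2 ** (n + 1) <= ops_available:
        n += 1
    return n

budget = 1e18  # some baseline number of operations
for multiplier in (1, 1_000, 1_000_000):
    n = largest_solvable_n(budget * multiplier)
    print(f"{multiplier:>9,}x the compute -> can handle n ~= {n}")

# Prints roughly: 1x -> 59, 1,000x -> 69, 1,000,000x -> 79.
# A million-fold increase in resources buys only ~20 extra units of problem
# size, which is the intuition behind "throwing more resources at an NP-hard
# problem isn't going to solve it."
```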

source