Comment on Don’t believe the hype: AGI is far from inevitable

ChairmanMeow@programming.dev 1 month ago

> What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.

This is exactly what they’ve proven. They showed that if you could solve AI-by-Learning in polynomial time, you could also solve random-vs-chance (or whatever it was called) in tractable time, and that problem is known to be NP-hard. Ergo, assuming P ≠ NP, the current learning techniques, which are tractable, will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise the exact same proof presented in the paper would apply to it too).
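
To spell out the structure of that argument, here’s a minimal sketch of the reduction logic. `Hard` below is a placeholder for whatever known NP-hard problem the paper actually reduces from, not the paper’s name for it:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the reduction argument. "Hard" is a placeholder for the
% known NP-hard problem the paper reduces from.
%
% Step 1 (the paper's contribution): a polynomial-time solver for
% AI-by-Learning would yield a polynomial-time solver for Hard.
\[
  \textsc{Hard} \le_p \textsc{AI-by-Learning}
\]
% Step 2 (standard complexity theory): since Hard is NP-hard,
% a polynomial-time algorithm for AI-by-Learning would collapse P and NP.
\[
  \textsc{AI-by-Learning} \in \mathsf{P}
  \;\Longrightarrow\;
  \textsc{Hard} \in \mathsf{P}
  \;\Longrightarrow\;
  \mathsf{P} = \mathsf{NP}
\]
% Contrapositive: assuming P != NP, no tractable (polynomial-time)
% learning procedure can solve AI-by-Learning.
\end{document}
```

This also explains the caveat about slower techniques: any method that did achieve AI-by-Learning would itself be a solver for it, so by the same reduction it cannot run in polynomial time unless P = NP.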

They only mentioned those methods to show that it doesn’t matter which one you pick. The explicit point is that whether you use LLMs, RNNs, or whatever else trained this way, it will never turn into a true AGI. It could be a good AI, of course, but that G is pretty important here.

> But it’s easy to just define general intelligence as something approximating what humans already do.

No, general intelligence has a set definition that the paper’s authors stick to. It’s not as simple as “human-like intelligence” or something that merely approximates it.
