Comment on Why you shouldn't believe the AI extinction lie
davidgro@lemmy.world 6 months ago
I believe true AI might in fact be an extinction risk. Not likely, but not impossible. It would have to end up self-improving and wildly outclassing us; at that point it could be a threat.
Of course the fancy autocomplete systems we have now are in no way true AI.
voracitude@lemmy.world 6 months ago
I’m not so sure about that. One of my friends has really high-end hardware and is experimenting with a Llama 3 120b model. It isn’t “right” much more often than the 70b models, but it will sometimes catch a wrong answer caused by an error in its lower-level reasoning: it recognises there’s a flaw somewhere even as it repeatedly fails to generate the correct answer, even lamenting that it keeps getting it wrong.
This makes sense when you think about the flow - it’s got an output check built in, meaning there are multiple layers at which it’s “solving” the problem before synthesising the outputs from each layer into a cohesive natural-language response.
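If it helps, here’s a minimal sketch of that kind of loop in Python - `generate`, `critique`, and `solve` are toy stand-ins I made up, not anything from a real system:

```python
import random

# Toy stand-ins: `generate` sometimes returns a wrong answer on purpose,
# and `critique` is a separate output check. Both are made up - in a real
# system they'd be calls back into the model itself.

def generate(question: str) -> int:
    correct = eval(question)       # toy "reasoning" layer
    if random.random() < 0.5:      # inject occasional low-level errors
        return correct + 1
    return correct

def critique(question: str, answer: int) -> bool:
    return eval(question) == answer   # toy output-check layer

def solve(question: str, max_attempts: int = 3) -> int:
    answer = generate(question)
    for _ in range(max_attempts - 1):
        if critique(question, answer):
            break
        # The check can flag a flaw even when the retry fails again -
        # the "it knows it's wrong but keeps getting it wrong" behaviour.
        answer = generate(question)
    return answer

print(solve("2 + 2"))
```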
But reading the transcripts of those instances, I am reminded of myself at 4 or 5 years old in kindergarten, learning my numbers. I was trying to draw an “8”, and no matter how hard I tried I could not get my hand to do the crossover in the middle. I had a page full of “0”s. I remember this vividly because I was so angry and upset with myself: I could see my output was wrong and I couldn’t understand why I couldn’t get it right. Eventually, my teacher had to guide my hand, and then, knowing what it “felt” like to draw an 8, I could reproduce it by reproducing the sensation of the mechanical movement.
So, it seems to me those “sparks” of AGI are getting just a little brighter.
littlebluespark@lemmy.world 6 months ago
… Go on.
Oh, FFS. This isn't Flakebook. 🤦🏽‍♂️
voracitude@lemmy.world 6 months ago
I suppose that’s my fault; you lot have no idea who my friends are or what they do for a living, though I hoped the detail in the rest of the comment would get my message across. The friend in question is a computer scientist and researcher with whom I previously co-founded a startup - but I’m not going to doxx myself by providing more details, so I don’t think stating that helps at all.
littlebluespark@lemmy.world 6 months ago
That tracks.
davidgro@lemmy.world 6 months ago
In your case that was a motor control issue, not a flaw in reasoning. In the LLM case it’s a pure implementation of a Chinese Room: the “book and pencils” (the weights) randomly generate text that, more often than not, causes humans to experience textual pareidolia.
It can be useful - that book is very large and contains a lot of residue of valid information and patterns - but the way it works is not how intelligence works (how intelligence does work is still an open question, of course, but “not that way” is quite clear).
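To make the analogy concrete, here’s a toy “book and pencils” in Python - just a made-up lookup table and blind sampling, nowhere near the scale of real weights, but the same shape of process:

```python
import random

# A toy "Chinese Room": the "book" is a lookup table of which word tends
# to follow which, and the "pencils" are blind sampling. There is no
# understanding anywhere, yet the output can look meaningful - the textual
# pareidolia described above. (Purely illustrative; real LLMs use billions
# of learned weights, not a hand-written table.)

BOOK = {
    "the":   ["cat", "dog", "book"],
    "cat":   ["sat", "slept"],
    "dog":   ["ran", "sat"],
    "book":  ["sat"],
    "sat":   ["on"],
    "slept": ["on"],
    "ran":   ["on"],
    "on":    ["the"],
}

def babble(start: str = "the", length: int = 8) -> str:
    words = [start]
    for _ in range(length - 1):
        words.append(random.choice(BOOK[words[-1]]))
    return " ".join(words)

print(babble())  # e.g. "the cat sat on the dog ran on"
```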
This is not to say that true AI is impossible - I believe it is possible, but it will have to be implemented differently. At the very least, it will need the ability to self-modify in real time (learning).
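A toy illustration of what I mean by that, with everything about it made up for the sake of the example - a one-weight online learner whose state permanently changes with each interaction, unlike a frozen model that only conditions on its context window:

```python
# A one-weight "model" that permanently updates after every example,
# versus a frozen LLM whose weights never move between conversations.
# Entirely made up for illustration - not how any real system works.

def online_learner():
    w = 0.0      # the persistent "weight"
    lr = 0.5
    def interact(x: float, target: float) -> float:
        nonlocal w
        prediction = w * x
        w += lr * (target - prediction) * x   # permanent weight update
        return prediction
    return interact

learner = online_learner()
for _ in range(20):
    learner(1.0, 3.0)                  # keep showing it the same fact
print(round(learner(1.0, 3.0), 2))     # 3.0 - the experience stuck
```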
voracitude@lemmy.world 6 months ago
I appreciate the response! My point is that it was a flaw in a low-level process my conscious mind has no direct control over: it produced erroneous output that my conscious mind recognised as wrong but could not correct by itself.
I updated the comment you replied to with some more information, and also articulated the real question I’m trying to ask, which is:
And, on the other side of the coin, how can you prove to me that you’re a human-level intelligence with a consciousness of your own?
davidgro@lemmy.world 6 months ago
What would convince me that we may be on the right path: besides huge improvements in reasoning, it would (as I mentioned) need to be able to learn - and not just track previous text; I mean permanently adding to or adjusting the weights (or equivalent) of the model.
And it would likely need the ability to go back and change already-generated text after it has reasoned further. Try asking an LLM to generate novel garden path sentences (like “The old man the boat”) - it can’t know how the sentence will end, so it can’t come up with good beginnings except ones similar to stock examples. (That said, it’s not a skill I personally have either, but humans certainly can do it.)
As far as proving I’m a human-level intelligence myself, the easiest way would likely involve brain surgery: probe a bunch of neurons and watch them change action potentials and form synapses in response to new information and skills. But short of that, at the current state of the art I can prove it by stating confidently that Samantha has 1 sister. (Note: that thread was a reply to someone else, but I’m watching the whole article’s comments.)