Comment on "Why you shouldn't believe the AI extinction lie"
voracitude@lemmy.world 6 months ago
I appreciate the response! My point is that it's a flaw in a low-level process my conscious mind doesn't have any direct control over, and it produced erroneous output that my conscious mind recognised as wrong but could not correct by itself.
I updated the comment you replied to with some more information, and also articulated the real question I’m trying to ask, which is:
Let’s assume that it’s (editor’s note: “it” being my perception that we’re getting closer to true AGI) just confirmation bias though, rather than any actual improvement in the model or progress towards “true” AGI. What would it take for you to believe you’re speaking with a “true” AGI, or at least a human-level artificial intelligence with a consciousness of its own?
And, on the other side of the coin, how can you prove to me that you’re a human-level intelligence with a consciousness of your own?
davidgro@lemmy.world 6 months ago
What would convince me that we may be on the right path: Besides huge improvements in reasoning, it would (like I mentioned) need to be able to learn - and not just by tracking previous text; I mean permanently adding or adjusting the weights (or equivalent) of the model.
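The distinction being drawn here - transient context versus a permanent change to the model itself - can be sketched in a toy (entirely hypothetical, not any real LLM API) example, where "reading" only appends to a per-session context but "learning" takes a gradient step on a stored weight:

```python
# Toy illustration of the criterion above: context is wiped between
# sessions, while a weight update persists. TinyModel is a made-up
# stand-in, not a real model class.

class TinyModel:
    def __init__(self):
        self.weight = 0.0      # persists across conversations
        self.context = []      # "tracking previous text": per-session only

    def read(self, token: float):
        self.context.append(token)          # in-context only, no learning

    def learn(self, target: float, lr: float = 0.5):
        # One gradient-descent step on squared error (weight - target)^2:
        # this permanently adjusts the stored weight.
        self.weight -= lr * 2 * (self.weight - target)

    def new_session(self):
        self.context = []      # context vanishes; the weight survives

m = TinyModel()
m.read(1.0)
m.learn(1.0)
m.new_session()
print(m.context, m.weight)  # context is gone, the learned weight remains
```

Current chat-style LLMs do the `read` part every turn but never the `learn` part at inference time, which is the gap being pointed at.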
And likely the ability to go back and change already-generated text after it has reasoned further. Try asking an LLM to generate novel garden path sentences - it can't know how the sentence will end, so it can't come up with good beginnings except ones similar to stock examples. (That said, it's not a skill I personally have either, but humans certainly can do it.)
As far as proving I'm a human-level intelligence myself, the easiest way would likely involve brain surgery - probe a bunch of neurons and watch them change action potentials and form synapses in response to new information and skills. But short of that, at the current state of the art I can prove it by stating confidently that Samantha has 1 sister. (Note: that thread was a reply to someone else, but I'm watching the whole article's comments)
voracitude@lemmy.world 6 months ago
Gotcha - my bad!
An interesting criterion - why does going back to edit (instead of correcting itself mid-stream) hold greater weight in your mind? And what about the built-in output evaluation? Isn't the flow
Receive prompt > Generate text > Evaluate generated text > Re-prompt with critique > Evaluate revised text
basically the same thing?
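That flow can be sketched as a short loop - purely illustrative, with stub functions standing in for real LLM calls (the stub answers reuse the Samantha-sisters example from earlier in the thread):

```python
# Hypothetical sketch of the prompt -> generate -> evaluate -> re-prompt
# flow described above. `generate` and `evaluate` are trivial stubs so
# the control flow itself is runnable; a real version would call an LLM.

def generate(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    if "Critique" in prompt:
        return "Samantha has 1 sister."   # revised answer after critique
    return "Samantha has 3 sisters."      # erroneous first-pass answer

def evaluate(text: str):
    # Stub critic: returns a critique string, or None if the text passes.
    if "3 sisters" in text:
        return "Recount: each sister counts the others, not herself."
    return None

def answer_with_self_critique(prompt: str, max_rounds: int = 3) -> str:
    text = generate(prompt)
    for _ in range(max_rounds):
        critique = evaluate(text)
        if critique is None:
            break
        # Re-prompt with the critique attached, as in the flow above.
        text = generate(f"{prompt}\nCritique: {critique}\nRevise your answer.")
    return text

print(answer_with_self_critique("How many sisters does Samantha have?"))
```

The open question in the thread is whether this loop is equivalent to going back and editing, given that each revision is still freshly generated text rather than a correction of the earlier tokens in place.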
Couldn’t you perform this test on any animal with a discrete brain? Hell, we’ve seen animals learning for decades - some of them even teach each other - so brain activity and the formation of new synaptic connections doesn’t strike me as incontrovertible proof of human-level intelligence.
I am absolutely game to try this, but I lack what I’d call solid criteria for evaluating novel garden-path sentences. This was my first attempt with Llama 3 running on my 3070:
It’s a bit simple, and it’s not how I would write it (I think “by the company” is extraneous, for example), but I do think it counts as a garden path sentence at least, and it did get the third meaning I was thinking of for “the old keys” after a leading but open-ended nudge. Now the question is whether it’s novel - what do you think? Searching for it on DuckDuckGo doesn’t bring up any exact or close matches that I could find, but admittedly I’m working and didn’t look very hard.
davidgro@lemmy.world 6 months ago
I suppose those would be equivalent; I just haven’t seen it done (at least not properly). The example you posted earlier with the siblings, for instance, showed how it could only append more text rather than actually produce corrections.
Oh, right. Animals do exist. It simply hadn’t occurred to me at that moment, even though there is one right next to me taking a nap. However, a lot of them are capable of more rational thought than LLMs are - even bees can count reasonably well. Anyway, defining human-level intelligence is a hard problem, and determining it is even harder, but I still say it’s feasible to say some things aren’t it.
No good. The difference between a good garden path sentence and simple ambiguity is that the ‘most likely’ interpretation when the reader is halfway through the sentence turns out to be ungrammatical or nonsense by the end. The way LLMs work, they don’t like to put words together in an order in which they don’t usually occur, even if there’s ultimately a way to interpret it so that it makes sense.
The example it made with the keys is particularly bad because the two meanings are nearly identical anyway.
Just for fun I’ll try to make one here:
“After dealing with the asbestos, I was asked to lead paint removal.”
Might not work, the meaningful interpretation could be too obvious compared to the toxic metal, but it has the right structure.
voracitude@lemmy.world 6 months ago
Ah, well, I did already explain my view of what was happening there and why I found it so striking. It read to me as though it was trying to issue a correction, but its lower-level processes kept spitting back the wrong answer, so it couldn’t - the same way I couldn’t get my hand to spit out an 8.
Aww. Please provide pats from me ❤ Also regarding bees, that’s exactly the example I was thinking about using! Great minds, I guess :P
Yeah, that’s about on the same level as I was getting from Llama 3 and even ChatGPT-4, to be honest. These are tough even for humans! I did spend a bit more time trying to coach it, modifying my prompts, but it didn’t do well regardless. “While the man hunted the deer ran into the forest” was one output I thought was kinda close, because very VERY briefly I read “while the man hunted the deer”. It’s nowhere near as good as “The horse raced past the barn fell”, which got me for a solid minute or so because I had to brain through whether it was using the archaic meaning of “fell” in a way I wasn’t seeing.
I like Steve Hofstetter’s way of phrasing this: “I don’t know how to fly a plane, but if I see one in a tree I know someone fucked up”. It’s a sentiment I generally agree with. That said, given how difficult it is to even define human-level intelligence, I don’t think it’s as easy to definitively say “this ain’t it” as you imply. We are, after all, resorting to tests that many humans can’t pass. I mean, I consider myself pretty well-read for someone who didn’t finish college, playing with language is one of my favourite pastimes, and we’re talking about this in the same thread where I defend my creativity by citing the (silly, simplistic) lyrics I wrote - but I can’t convincingly pass the garden path test. At least, I haven’t been able to yet.