Comment on Why you shouldn't believe the AI extinction lie
davidgro@lemmy.world 6 months ago
An interesting criterion - why does going back to edit (instead of correcting itself mid-stream) matter?
I suppose those would be equivalent, I just haven’t seen it done (at least not properly) - the sibling example you posted earlier, for instance, showed how it could only append more text and not actually produce corrections.
Couldn’t you perform this test on any animal with a discrete brain?
Oh, right. Animals do exist. It simply hadn’t occurred to me at that moment, even though there is one right next to me taking a nap. However, a lot of them are capable of more rational thought than LLMs are. Even bees can count reasonably well. Anyway, defining human-level intelligence is a hard problem. Determining it is even harder, but I still say it’s feasible to say some things aren’t it.
[Garden path sentences]
No good. The difference between a good garden path and simple ambiguity is that the ‘most likely’ interpretation when the reader is halfway through the sentence turns out to be ungrammatical or nonsense by the end. Because of the way LLMs work, they don’t like to put words together in an order they don’t usually occur in, even if there’s a way to interpret it that makes sense in the end.
The example it made with the keys is particularly bad because the two meanings are nearly identical anyway.
Just for fun I’ll try to make one here:
“After dealing with the asbestos, I was asked to lead paint removal.”
Might not work - the intended verb reading might be too obvious compared to the toxic-metal one - but it has the right structure.
voracitude@lemmy.world 6 months ago
Ah, well, I did already explain my view of what was happening there and why I found it so striking. It read to me as though it was trying to issue a correction, but its lower-level processes kept spitting back the wrong answer, so it could not. The same way I couldn’t get my hand to spit out an 8.
Aww. Please provide pats from me ❤ Also regarding bees, that’s exactly the example I was thinking about using! Great minds, I guess :P
Yeah, that’s about on the same level as I was getting from Llama 3 and even ChatGPT-4, to be honest. These are tough even for humans! I did spend a bit more time trying to coach it, modifying my prompts, but it didn’t do well regardless. “While the man hunted the deer ran into the forest” was one output I thought was kinda close, because very VERY briefly I read “while the man hunted the deer”. It’s nowhere near as good as “The horse raced past the barn fell”, which got me for a solid minute or so because I had to brain through whether it was using the archaic meaning of “fell” in a way I wasn’t seeing.
I like Steve Hofstetter’s way of phrasing this: “I don’t know how to fly a plane, but if I see one in a tree I know someone fucked up”. It’s a sentiment I generally agree with. That said, given how difficult it is to even define human-level intelligence, I don’t think it’s as easy to definitively say “this ain’t it” as you imply. We are, after all, resorting to tests that many humans can’t pass. I consider myself pretty well-read for someone who didn’t finish college, playing with language is one of my favourite pastimes, and we’re talking about this in the same thread where I defend my creativity by citing the (silly, simplistic) lyrics I wrote - yet I can’t convincingly pass the garden path test. At least, I haven’t been able to yet.
davidgro@lemmy.world 6 months ago
I gotta go for now, but one quick note:
The deer sentence actually looked too good to be an original creation from an LLM to me, and sure enough it’s not. (About halfway down the page.)
I was actually looking up the one about the horse when I found that page.
voracitude@lemmy.world 6 months ago
Oh you’re kidding! Haha well it tried.
I appreciate the discussion, this was nice. Catch ya around!