Comment on OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

lvxferre@mander.xyz 5 months ago

I don’t think that a different training scheme, or integrating it with existing algos, would be enough. You’d need a structural change.

I’ll use a silly illustration for that; it’s somewhat long, so I’ll put it inside spoilers. (Feel free to ignore it though - it’s just an illustration; the main claim is outside the spoiler tag.)

The Mad Librarian and the Good Boi

Let’s say that you’re a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books. You set up the training environment: a table with a sci-fi book and a geography book on it. And you give your dog a treat every time he puts the ball over the sci-fi book. At the start the dog doesn’t do it, but after some training he does it perfectly. Great! Does the dog now recognise sci-fi and geography books?

You test this out by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball over the geography book. Nope - he doesn’t know how to tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

Now you repeat the training with the books in random positions. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying the books by smell. To fix that you try again with new copies of the books. Now he’s identifying them by colour: the geography book has the same grey/purple hue as grass (from a dog’s PoV), and the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball over the neighbour’s cat and ask “where’s my treat, human???” if the cat allowed it.

Needs more books. You assemble a plethora of geo and sci-fi books. Since the sci-fi books typically tend to be dark and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and call the remaining 30% error the dog “hallucinating”.

We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves: the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won’t go much past that. And even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”), and spent most of that time on this training routine, his little brain won’t be able to create the associations necessary to actually identify a book by its topic, i.e. by its content.

I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they’re unable to speak.
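
The trap the dog keeps falling into has a name in ML circles: shortcut learning on spurious correlations. Here’s a minimal sketch of the same thing (the features “darkness” and “smell” are made up for illustration, and scikit-learn’s LogisticRegression is just a convenient stand-in classifier): the model looks perfect on the training books, then collapses the moment the cover colour stops tracking the topic, because the topic itself was never something it could see.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_books(n, colour_tracks_topic):
    """Each 'book' is just two numbers: cover darkness and smell.
    Label: 1 = sci-fi, 0 = geography. The topic itself is never a feature."""
    y = rng.integers(0, 2, n)
    if colour_tracks_topic:
        # Training table: dark cover <=> sci-fi, just like the smell/colour shortcuts above.
        darkness = y + rng.normal(0, 0.1, n)
    else:
        # New books: colour no longer tracks topic at all.
        darkness = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)
    smell = rng.normal(0, 1.0, n)  # irrelevant feature
    return np.column_stack([darkness, smell]), y

X_train, y_train = make_books(500, colour_tracks_topic=True)
X_test,  y_test  = make_books(500, colour_tracks_topic=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near 1.0: looks like it "knows" the topic
print("test accuracy: ", clf.score(X_test, y_test))    # near 0.5: it only ever learned the colour
```

Train accuracy should come out near 1.0 and test accuracy near chance - same dog, new books.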

At the end of the day, LLMs are complex algorithms that associate pieces of words based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show: they fail at simple logic even with pieces of info that they’re able to reliably output. Different training and/or a different algorithm might change which info they output, but it won’t “magically” take them past that.
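
To make “associating pieces of words based on statistical inference” a bit more concrete, here’s a deliberately tiny sketch - a bigram table in plain Python, nothing like a real transformer in scale or mechanism, but similar in what it actually stores: which tokens tend to follow which. It can produce fluent-looking continuations of text it has seen, yet there’s nothing in it that could chain “socrates is a man” with “all men are mortal” into “socrates is mortal”.

```python
from collections import defaultdict, Counter
import random

# Toy "training data": the model only ever sees sequences of tokens.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "socrates is a man . all men are mortal ."
).split()

# Count which token tends to follow which - pure co-occurrence statistics.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, length=10):
    """Sample a continuation, one token at a time, from the bigram counts."""
    out = [start]
    for _ in range(length):
        nxt = follows[out[-1]]
        if not nxt:
            break
        out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

print(generate("the"))        # fluent-looking, something like "the cat sat on the rug . ..."
print(generate("socrates"))   # parrots "socrates is a man ..." as seen in the corpus
# But nothing here can derive "socrates is mortal": the table only stores which
# word followed which, not what any of the words mean or imply.
```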
