Comment on “OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity”

lvxferre@mander.xyz 4 months ago

> Chinese room, called it. Just with a dog instead.

The Chinese room experiment is about the internal process of a machine that passes the Turing test: whether it thinks or not, whether it simulates or knows. My example clearly doesn’t bother with any of that; what matters here is the ability to perform the goal task.
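
For reference, the room’s rule-following can be caricatured in a few lines of Python (a hypothetical two-entry rulebook, nothing like Searle’s actual setup):

```python
# Toy illustration of Searle's point: a process can map inputs to
# plausible outputs by rule-following alone, with no understanding.
# The rulebook below is hypothetical and trivially small.

RULEBOOK = {
    "你好吗?": "我很好,谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def room(symbols: str) -> str:
    """Return whatever the rulebook dictates; 'understand' nothing."""
    return RULEBOOK.get(symbols, "对不起,我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗?"))  # From the outside, this looks like a competent answer.
```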

As such, no, my example is not the Chinese room. I’m highlighting something else: that the dog will keep making spurious associations, and those associations will affect the outcome. Is this clear now?
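
A minimal sketch of that failure mode, with made-up features (“treat”, “word_sit”) standing in for whatever the dog actually latches onto:

```python
# Toy sketch of a purely associative learner picking up a spurious cue.
# Feature names and data are invented for illustration.
# Label: should the dog sit? Spurious feature: the trainer held a treat.
# In the training data, "treat" co-occurs with "sit" 100% of the time.

train = [
    ({"word_sit": 1, "treat": 1}, 1),
    ({"word_sit": 1, "treat": 1}, 1),
    ({"word_sit": 0, "treat": 0}, 0),
    ({"word_sit": 0, "treat": 0}, 0),
]

# Crude associative learning: score each feature by how strongly it
# co-occurs with the positive label. No causal reasoning anywhere.
weights = {}
for features, label in train:
    for name, value in features.items():
        weights[name] = weights.get(name, 0) + value * (1 if label else -1)

def predict(features):
    score = sum(weights.get(n, 0) * v for n, v in features.items())
    return int(score > 0)

# Out of distribution: a treat appears with no "sit" command.
print(predict({"word_sit": 0, "treat": 1}))  # -> 1: the spurious cue wins.
```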

Why this matters: on the topic of existential threat, it’s pretty much irrelevant whether the AI in question “thinks” or not. What matters is its use in situations where it would “decide” something.

> I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as full as you could the algorithm we’re talking about, how would you do it?

Why don’t we do the following instead: I’ll play along with your inversion of the burden of proof once you show how it’s relevant to your implicit claim that AI [will|might] become an existential threat (from “[AI is] Not yet [an existential threat], anyway”)?


Also worth noting that you outright ignored the main claim outside the spoilers tag.
