Comment on [deleted]
Lixen@beehaw.org 6 months agoBesides, the AI doesn’t “think” about manipulating, it just does what it’s programming/training tells it to do, no ?
Not even that, it is a statistical model based on it’s dataset. If the dataset happens to contain mostly “visual impairment” as a reason for needing help to bypassing captchas (not really unexpected, because the dataset is actual text written by humans), and thus that’s the most likely route it takes. GPT indeed has no notion of concepts like a goal and using deception.
NeatNit@discuss.tchncs.de 6 months ago
Reasoning and “thinking” can arise as emergent properties of this system. Not everything the model says is backed up by direct data. As you surely know, you’ve heard of AI hallucinations.
I believe the researchers in that experiment allowed the model to write out its thoughts to a separate place where only they could read them.
By god, watch the video and not the crappy AI-generated summary. This man is one of the best AI safety explainers in the world. You don’t have to agree with everything he says, but I think you’ll agree with the vast majority of it.
Lixen@beehaw.org 6 months ago
Ascribing reasoning and thinking to an LLM starts to become a semantic discussion. Hallucinations are a consequence of parametrizing a model in a way to allow more freedom and introducing more randomness, but deep down, the results still come from a statistical derivation.
The vastness of the data makes the whole system just a big blackbox, impossible for anyone to really grasp, so of course it is nearly impossible for us to explain in detail all behaviors and show data to backup our hypotheses. That still doesn’t mean there’s any real logic or thinking going on.
But again, it is difficult to really discuss the topic without clear semantics that define what we mean when saying “thinking”. Your definition might differ from mine in a way that will never make us agree on the subject.