Lixen
@Lixen@beehaw.org
- Comment on [deleted] 6 months ago:
Ascribing reasoning and thinking to an LLM quickly becomes a semantic discussion. Hallucinations are a consequence of parametrizing a model to allow more freedom and introduce more randomness, but deep down the results still come from a statistical derivation.
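To make the "freedom and randomness" point concrete, here is a minimal sketch of temperature sampling, the common knob for this trade-off. It is a toy illustration, not GPT's actual implementation; the logits and function names are invented for the example:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling.

    Higher temperature flattens the distribution (more freedom, more
    randomness, more chance of unlikely continuations); lower
    temperature concentrates it on the most probable token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(weights)), weights=weights)[0]

# Toy logits for three candidate tokens
logits = [2.0, 1.0, 0.1]
greedy_ish = sample_next_token(logits, temperature=0.05)  # almost always 0
creative = sample_next_token(logits, temperature=10.0)    # far more uniform
```

Either way, the output is a draw from a probability distribution, which is the sense in which "hallucination" is just sampling doing what sampling does.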
The vastness of the data makes the whole system a big black box, impossible for anyone to really grasp, so of course it is nearly impossible for us to explain every behavior in detail and show data to back up our hypotheses. That still doesn’t mean there’s any real logic or thinking going on.
But again, it is difficult to really discuss the topic without clear semantics that define what we mean when saying “thinking”. Your definition might differ from mine in a way that will never make us agree on the subject.
- Comment on [deleted] 6 months ago:
> Besides, the AI doesn’t “think” about manipulating, it just does what its programming/training tells it to do, no?
Not even that: it is a statistical model based on its dataset. If the dataset happens to contain mostly “visual impairment” as a reason for needing help to bypass captchas (not really unexpected, because the dataset is actual text written by humans), then that’s simply the most likely route for the model to take. GPT has no notion of concepts like having a goal or using deception.
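The "most likely route" idea can be sketched in a few lines. This is a deliberately crude stand-in for a language model: the corpus, counts, and function are all hypothetical, invented for illustration only:

```python
from collections import Counter

# Hypothetical toy corpus: reasons humans wrote for needing captcha help.
corpus = [
    "visual impairment", "visual impairment", "visual impairment",
    "broken keyboard", "slow connection",
]

counts = Counter(corpus)

def most_likely_reason(counts):
    # No goal, no intent to deceive: just return whichever
    # continuation dominates the training data statistically.
    return counts.most_common(1)[0][0]

print(most_likely_reason(counts))  # "visual impairment"
```

If "visual impairment" dominates the data, it dominates the output, and an outside observer reading the transcript may then ascribe a deceptive intent that was never represented anywhere in the system.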