Comment on It's a mess in here
Technus@lemmy.zip 6 months ago
I have a ~~theory~~ ~~hypothesis~~ notion that hallucination in artificial neural networks is not a failure mode unique to ANNs, but an inherent property of any neural network, artificial or biological.
Essentially, I posit that a neural network by itself is incapable of maintaining coherence without a rigid external framework, such as consistent feedback in training an ANN, or the laws of physics for biologicals.
This would explain why people start tripping balls in sensory deprivation chambers. And it provides a counterargument to any thought experiment or philosophy that involves a disembodied brain vividly hallucinating reality.
radix@lemm.ee 6 months ago
That’s really interesting! I guess I’ll incorporate this into my worldview now.
kautau@lemmy.world 6 months ago
Ironically not too far from Aldous Huxley’s theory about human perception:
en.wikipedia.org/wiki/Mind_at_Large
Essentially, reality is (or contains) all the properties of hallucination, but our brain filters it, and psychedelic drugs in some way dilute or remove that filter.
So by default the human brain filters out the “hallucination” version of thought until we open that up, whereas ANNs begin at that unfiltered baseline and need rigor added to them to reduce the “hallucination”.
Omgarm@lemmy.world 6 months ago
You’re right, from now on we should give every AI advancement a body.
Engywuck@lemm.ee 6 months ago
Interesting comment. Thanks 🙂
muntedcrocodile@lemm.ee 6 months ago
It’s either a counterargument or the best backup for a disembodied brain hallucinating everything.
MBM@lemmings.world 6 months ago
On the one hand, that’s a cool insight and I can get behind it. It’s kind of similar to deaf people talking “weird”. On the other hand, I don’t think it has anything to do with LLMs. There, hallucination is just a cool word for “it’s trained to say things that sound like they fit the context, not to be correct”
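A minimal sketch of that mechanic (toy vocabulary and made-up logits, not any real model): the model turns fit-to-context scores into probabilities and samples, and nothing in the loop ever checks truth.

```python
import numpy as np

# Toy next-token step. The model scores each candidate purely by how well
# it fits the context; nothing here checks whether the answer is *true*.
vocab = ["Paris", "London", "Berlin", "pancakes"]
# Hypothetical logits after a prompt like "The capital of France is":
# plausibility scores, not verified facts.
logits = np.array([5.0, 2.5, 2.0, -4.0])

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax
token = np.random.choice(vocab, p=probs)   # sample by fit-to-context
print(token)  # usually "Paris", occasionally "London"; both "fit"
```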
Tlaloc_Temporal@lemmy.ca 6 months ago
Is that not the same thing? If hallucinations are basically unrestricted activity, and “hallucinations” in LLMs are the result of insufficient restrictions in training or prompts, then are they not real hallucinations?
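For what it’s worth, one concrete “restriction” knob on the ANN side is sampling temperature. A toy sketch (illustrative numbers only, not any particular model’s configuration):

```python
import numpy as np

def sample(logits, temperature):
    # Temperature rescales the logits before softmax. Low temperature
    # tightens the restriction (near-greedy); high temperature loosens
    # it, letting unlikely continuations through.
    z = np.array(logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(np.random.choice(len(p), p=p))

logits = [5.0, 2.5, 2.0, -4.0]
print(sample(logits, temperature=0.1))  # almost always index 0
print(sample(logits, temperature=5.0))  # low-scoring options now surface
```

Low temperature clamps the output to the highest-scoring option; high temperature is the closest thing to “unrestricted activity” you can dial in at inference time.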
NightAuthor@lemmy.world 6 months ago
Thing is, our brains could work the exact same way… only they’re constantly being trained, and have enough neurons that many clusters can be dedicated to very specific contexts.
Today is _____
Well, given the context that my phone says 1:20, it’s dark, it was Tuesday when I fell asleep, and Wednesday comes after Tuesday… plus all the necessary training that let me understand that context: 24 hours in a day, days start at 12:00, 1 comes after 12 but only in our time-of-day system.
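Spelled out as a toy rule, that same context-plus-training inference looks something like this (the function name and the “small hours” cutoff are invented for illustration):

```python
from datetime import time

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def fill_blank(day_when_asleep: str, clock: time, dark: bool) -> str:
    # Same context the comment walks through: days roll over at 12:00 AM,
    # so a small-hours clock reading after falling asleep means the day
    # has advanced to the next one.
    past_midnight = clock.hour < 5  # assumption: "small hours" cutoff
    if dark and past_midnight:
        return DAYS[(DAYS.index(day_when_asleep) + 1) % 7]
    return day_when_asleep

print("Today is", fill_blank("Tuesday", time(1, 20), dark=True))
# -> Today is Wednesday
```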