Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude)

Grail@multiverse.soulism.net ⁨20⁩ ⁨hours⁩ ago

Yeah, LLMs are prone to similar biases because they're also bad at conceptualising probability. They're not calculators, they're ANNs. They basically operate on dream-logic: whatever makes sense to you in a dream is likely to make sense to an LLM. That's because dreams are a time when you have intelligence without consciousness, like an LLM. You're super suggestible and you just go with whatever feels right based on your gut instincts. An LLM is a simulation of a person's gut instincts.
