Comment on Recent conversations between Dawkins and sentient chat-bot Claudia (Claude)

bbb@sh.itjust.works 15 hours ago

It’s interesting that you point to en.wikipedia.org/…/Hard_problem_of_consciousness, since the term was coined by David Chalmers, who also published “Could a Large Language Model be Conscious?”. From the abstract:

> I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

So are we all just arguing about how likely it is, or are you arguing that current AI systems are definitely not conscious? If the latter, what do you think about the not-too-distant future ones?

> But a neuroscientist will tell you it’s not simple at all. It’s not info in, info out.
>
> The system is changed, biologically, by the input.
>
> The same input given twice will result in a different output the 2nd time.
>
> And the 3rd. And how frequently the input is given, or its temporal relation to other stimuli, will also change its output.

I thought online learning was possible with current LLMs, just not worth the cost. I mean, you can at least fine-tune offline based on previous outputs and feedback, e.g. RLHF. I feel like maybe neither should count, but I can’t say why exactly. Not many end users bother with fine-tuning anymore because there are usually more effective alternatives like RAG.
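To be concrete about what I mean by “fine-tune offline based on previous outputs and feedback”: something like the toy sketch below, which is closer to rejection-sampling fine-tuning than to real RLHF (no reward model, no RL step). The interaction log and the tiny stand-in model are made up; the point is only that the weights are different after the loop than before it, which is the sense in which the system is changed by its input.

```python
# Toy sketch, not real RLHF: offline fine-tuning on logged outputs that got
# positive feedback. A tiny character-level model stands in for the LLM.
import torch
import torch.nn as nn

VOCAB = 128  # ASCII

class TinyLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)

def encode(text):
    return torch.tensor([[min(ord(c), VOCAB - 1) for c in text]])

# Hypothetical interaction log: (prompt, response, user feedback)
log = [
    ("2+2=", "4", +1),
    ("2+2=", "5", -1),
    ("capital of France?", "Paris", +1),
]

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Keep only responses the user liked, then do ordinary next-token training
# on prompt+response. The model after this loop computes a different
# function than the model before it -- the weights themselves changed.
for prompt, response, feedback in log:
    if feedback <= 0:
        continue
    tokens = encode(prompt + response)
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), targets.reshape(-1)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```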

What do you think about agentic systems, i.e. running an LLM in a loop with a scratchpad and tools? They just write their “memories” into text files, but if you consider those text files part of the system, then the input does technically change the system. Of course, you could argue that doesn’t count because it’s no different to changing the input. So to count, it would have to store neuralese or a LoRA or something?
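Here is a minimal sketch of the agent-with-scratchpad setup I mean. The `call_llm` function is a stub standing in for whatever model API you’d actually use, and the file name is just for illustration; nothing here is a real framework.

```python
# Minimal agent loop: an LLM called repeatedly, with its "memory" living in
# a plain text file that gets prepended to every prompt. The weights never
# change; the only state change is more text in the file.
from pathlib import Path

SCRATCHPAD = Path("scratchpad.txt")

def call_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to a model endpoint here.
    return f"(model output for a prompt of {len(prompt)} characters)"

def run_step(task: str) -> str:
    memory = SCRATCHPAD.read_text() if SCRATCHPAD.exists() else ""
    prompt = f"Notes so far:\n{memory}\n\nTask:\n{task}\n"
    answer = call_llm(prompt)
    # "Remember" by appending the model's own output to the scratchpad.
    with SCRATCHPAD.open("a") as f:
        f.write(answer + "\n")
    return answer

for step in range(3):
    print(run_step("keep working on the problem"))
```

Which is exactly the ambiguity: whether this counts as the system changing depends on whether you treat the scratchpad file as part of the system or as part of next turn’s input.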
