Comment on What is a good ELI5 analogy for GenAI not "knowing" what they say?
1rre@discuss.tchncs.de 6 months ago
Thing is, consciousness (and any emotions, and feelings in general) is just chemicals affecting electrical signals in the brain… If an ML model such as an LLM uses parameters to affect the signals passing through its nodes, then is it on us to say it can't have consciousness, or feel happy or sad, or even pain?
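For the sake of the analogy, a single "node" is just a weighted sum pushed through a nonlinearity; the learned parameters decide how strongly each incoming signal propagates. A minimal sketch, assuming a plain weighted-sum-plus-sigmoid unit (not any particular LLM's architecture):

```python
import math

def node(inputs, weights, bias):
    """One artificial neuron: the weights scale the incoming 'signals',
    and the sigmoid squashes the weighted sum into a firing strength."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# The same inputs produce a different output signal depending on the
# learned parameters - that's all "affecting the signal" means here.
print(node([1.0, 0.5], [0.8, -1.2], 0.1))
```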
Sure, the inputs and outputs are different, but when you have "real" inputs, it's possible that the training data for "weather = rain" is more downbeat than for "weather = sun", so is it reasonable to say that the model gets depressed when it's raining?
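A toy sketch of that learned association (the corpus and word lists here are invented for illustration, not taken from any real model): if text mentioning rain is mostly downbeat, the statistics a model learns will tie "rain" to negative words, and that association is the whole of its "mood".

```python
# Hypothetical miniature training corpus, skewed so that rain
# co-occurs with downbeat words and sun with upbeat ones.
corpus = [
    "rain today feels gloomy and dreary",
    "rain again so everything is miserable",
    "sun today feels bright and cheerful",
    "sun again so everything is wonderful",
    "rain makes the streets gloomy",
]
POSITIVE = {"bright", "cheerful", "wonderful"}
NEGATIVE = {"gloomy", "dreary", "miserable"}

def mood_association(keyword):
    """Count positive/negative words co-occurring with keyword."""
    pos = neg = 0
    for line in corpus:
        words = set(line.split())
        if keyword in words:
            pos += len(words & POSITIVE)
            neg += len(words & NEGATIVE)
    return pos, neg

print("rain:", mood_association("rain"))  # (0, 4): purely negative association
print("sun:", mood_association("sun"))    # (3, 0): purely positive association
```

Nothing in there "feels" anything; the skew in the output is just the skew in the data.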
Drummyralf@lemmy.world 6 months ago
Doesn’t that depend on your view of consciousness, and on whether you hold a naturalist view?
I thought science is increasingly finding that a 100% naturalistic worldview is hard to maintain.
I guess my initial question is more philosophical in nature and less deterministic.
huginn@feddit.it 6 months ago
I’m not positive I understand your term "naturalistic", but no neuroscientist would say "we are just neurons". Similarly, no neuroscientist would deny that neurons are a fundamental part of consciousness and thought.
You have plenty of complex chemical processes interacting with your brain constantly - the neurons there aren’t all of who you are.
But without the neurons there, you aren’t anyone anymore. You cease to live. Destroying some of those neurons will change you fundamentally.
There’s no disputing this.
Drummyralf@lemmy.world 6 months ago
I agree with you, and you worded what I was clumsily trying to say. Thank you :)
By naturalism I mean the philosophical idea that only natural laws and forces are present in this world, or, as an extension, the idea that there is only matter.