Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?

1rre@discuss.tchncs.de ⁨6⁩ ⁨months⁩ ago

Thing is, consciousness (and emotions, and feelings in general) is just chemicals affecting electrical signals in the brain… If an ML model such as an LLM uses parameters to affect electrical signals through its nodes, then is it really on us to say it can’t have consciousness, or feel happy or sad, or even pain?

Sure, the inputs and outputs are different, but when you have “real” inputs it’s possible that the training data for “weather = rain” is more downbeat than for “weather = sun”, so is it reasonable to say that the model gets depressed when it’s raining?
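The rain-vs-sun point can be made concrete with a toy counting model (a deliberately simplified sketch, nothing like a real LLM: the corpus, function name, and word pairs here are all made up for illustration). If the training text that mentions rain skews downbeat, the model's output distribution conditioned on "rain" skews downbeat too, without the model "feeling" anything:

```python
from collections import Counter

# Hypothetical toy "training data": pairs of (weather token, following word).
# Rain sentences skew downbeat, sun sentences skew upbeat, by construction.
corpus = [
    ("rain", "gloomy"), ("rain", "miserable"), ("rain", "dreary"), ("rain", "cozy"),
    ("sun", "happy"), ("sun", "bright"), ("sun", "cheerful"), ("sun", "warm"),
]

def next_word_distribution(weather):
    """Relative frequency of words seen after the given weather token."""
    counts = Counter(word for ctx, word in corpus if ctx == weather)
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

print(next_word_distribution("rain"))  # mostly downbeat words
print(next_word_distribution("sun"))   # mostly upbeat words
```

The "depression" here is just a statistical echo of the corpus, which is exactly the ambiguity the comment is pointing at: the behaviour looks mood-like from the outside either way.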
