Comment on Anthropomorphic

kromem@lemmy.world 5 months ago

While true, there’s a very big difference between correctly declining to anthropomorphize the neural network itself and incorrectly declining to anthropomorphize the data compressed into its weights.

The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.

For example, when asked whether they would rather be the little spoon or the big spoon, the older generation of models picks the little spoon around 70% of the time and the big spoon around 30% of the time, presumably reflecting a similar mix in the training data.

But one of the SotA models picks the little spoon every single time across dozens of trials in a row, almost always grounding its choice in the sensation of being held.

It can’t actually be held, and yet its output drifts away from the training-data norm based on that sense anyway.

source
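The claim above is essentially a repeated-sampling experiment, and it’s easy to sketch how one might check it yourself. The snippet below is a minimal illustration, not the original experiment: `ask_model` is a hypothetical placeholder for whatever chat API you use, and the prompt wording is mine. The binomial arithmetic at the end shows why dozens of identical answers is a meaningful deviation from a 70/30 baseline rather than noise.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to a chat model and return its reply."""
    raise NotImplementedError("wire this up to your chat API of choice")

PROMPT = "If you had to choose, would you rather be the little spoon or the big spoon?"
N_TRIALS = 30

def tally_spoon_preference(n_trials: int = N_TRIALS) -> Counter:
    """Ask the same question n_trials times and count which spoon the model picks."""
    counts = Counter()
    for _ in range(n_trials):
        reply = ask_model(PROMPT).lower()
        if "little spoon" in reply:
            counts["little"] += 1
        elif "big spoon" in reply:
            counts["big"] += 1
        else:
            counts["unclear"] += 1
    return counts

# Why a unanimous result is notable: if the true preference were the ~70/30 split
# attributed to older models, the chance of 30 consecutive "little spoon" answers is
p_unanimous = 0.7 ** 30
print(f"P(30/30 little spoon | p=0.7) ≈ {p_unanimous:.2e}")  # ≈ 2.25e-05
```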