Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?

Drummyralf@lemmy.world 1 month ago

After reading some of the comments and pondering this question myself, I think I may have thought of a good analogy that at least helps me (even though I know fairly well how LLMs work).

An LLM is like a car on the road. It can follow all the rules, like braking at a red light, turning, signaling, etc. However, a car has NO understanding of any of the traffic rules it follows.

A car can even break those rules while behaving exactly as intended (if you push the gas pedal at a red light, the car is not in the wrong, because it doesn't KNOW the rules; it just acts on input).

Why this works for me is that when I give examples of human or animal behaviour, I automatically ascribe some sort of consciousness. An LLM has no consciousness. That idea is exactly what I want to convey. When I think of a car and traffic rules, it is obvious to me that a car has no concept of rules, yet it still somehow participates in them.
