Comment on I wish I was as bold as these authors.

ignotum@lemmy.world 5 days ago

If I train an LLM to do math, for the training data I generate a+b=c statements, never showing it the same one twice.

It would be pointless for it to “memorize” every single question and answer, since it would never see the same question again. The only way it could generate correct answers would be if it gained a concept of what numbers are and of how the addition operation acts on them to produce a new number. Rather than memorizing and parroting, it would have to actually understand addition in order to generate responses.

It’s called generalization, and it’s why large amounts of data are required (if you show the model the same data again and again, memorizing becomes a viable strategy).
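
A minimal sketch of that training-data setup in Python (the helper name, value range, and seed are my own illustration, not from the comment): random a+b=c strings, with duplicates filtered out so memorizing a specific question can never pay off.

```python
import random

def make_addition_dataset(n_examples, max_val=10**6, seed=0):
    """Generate unique 'a+b=c' strings. Because no (a, b) pair repeats,
    a model trained on this data cannot succeed by memorizing answers;
    it has to generalize the addition operation itself."""
    rng = random.Random(seed)
    seen = set()
    data = []
    while len(data) < n_examples:
        a, b = rng.randrange(max_val), rng.randrange(max_val)
        if (a, b) in seen:
            continue  # never show the same statement twice
        seen.add((a, b))
        data.append(f"{a}+{b}={a + b}")
    return data

print(make_addition_dataset(3))
```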

> If I say “Two plus two is four” I am communicating my belief about mathematics.

That seems like a pointless distinction: you were told it, so you believe it to be the case. Why can’t we say the LLM outputs what it believes to be the correct answer? You’re both just making statements based on your prior experiences, which may or may not be true.
