Comment on Do LLM modelers maintain a list of manual corrections fed by humans?
null@piefed.au 11 hours ago
I don't know the answer, and I don't know anything about how LLMs are tuned, but I think the answer is probably partially yes.
My supposition is:
Instead of providing manual answers to specific questions, you modify the bot's approach to answering different types of questions.
For example, if you ask "what color are bananas", the bot answers by looking for discussions about the color of different fruits and selecting the word that comes up most often.
Alternatively, if you ask "what is two plus two", the bot parses the question, recognises that it's a math question, and instead of looking for text discussions of math, converts it to an equation and returns the solution.
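Something like this toy router, maybe. Everything here (the SNIPPETS table, the NUM lookup, the answer function) is invented purely to illustrate the idea, not how any real system works:

```python
import re
from collections import Counter

# Toy stand-in for "discussions the bot has seen" -- entirely made up.
SNIPPETS = {
    "what color are bananas": ["yellow", "yellow", "green", "yellow"],
}

NUM = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
       "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def answer(question: str) -> str:
    q = question.lower().rstrip("?")
    # Math path: recognise a simple arithmetic question and compute it.
    m = re.fullmatch(r"what is (\w+) (plus|minus|times) (\w+)", q)
    if m and m.group(1) in NUM and m.group(3) in NUM:
        a, b = NUM[m.group(1)], NUM[m.group(3)]
        return str({"plus": a + b, "minus": a - b, "times": a * b}[m.group(2)])
    # Text path: return whatever word comes up most often in the snippets.
    words = SNIPPETS.get(q, ["no idea"])
    return Counter(words).most_common(1)[0][0]

print(answer("what is two plus two"))    # 4
print(answer("what color are bananas"))  # yellow
```

The real mechanisms are obviously nothing like a dictionary lookup, but the routing idea is the same: classify the question type first, then pick the answering strategy.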
Previously, I guess bots were answering the "how many r's" question in the text-based kind of way, and the fix made the bot interpret it in a more mechanical/mathematical kind of way.
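The mechanical interpretation is trivial once something is actually looking at the characters (count_letter here is just my own toy example):

```python
def count_letter(word: str, letter: str) -> int:
    # The "mechanical" interpretation: count the actual characters.
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

My understanding is that the model sees tokens rather than individual letters, which is part of why the purely text-based path trips on this.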
It's a pretty salient demonstration of a bot's inability to reason. They're good at making sentences, but they can only emulate reasoning.
otter@lemmy.ca 10 hours ago
That would be the right way to do it, but I remember right after the "strawberry" issue was fixed, it would still mess up similar queries. They might have hard-coded something in for that one, at least initially.