Like the "how many r's in strawberry" question. It took off as an Internet meme and was fixed, but how did that fix happen?
Sadly there is no definitive answer available, because many of the processes around this are hidden.
I can only chime in from my own amateur experiments, and there the answer is a clear "it depends". Most adjustments are made via additional training data. This simply means that you take more data and feed it into an already trained LLM. The result is again an LLM black box with all its stochastic magic.
The other big way is system prompts. Those are simply instructions that get interpreted as part of the request and impose limitations.
These can get quite fancy by now, in the sense of: "when the following query asks you to count something, run this Python script with whatever you're supposed to count as input; the result will be a JSON that you can then take and do XYZ with."
Or more simply: you tell the model to use other programs and how to use them.
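As a toy illustration of the kind of tool such a system prompt might point the model at (the function name and the JSON shape are made up for this sketch, not from any real product):

```python
import json

def count_occurrences(text: str, target: str) -> str:
    """Hypothetical counting tool a system prompt could tell the model to call.
    Returns a JSON string the model can read back, as described above."""
    count = text.lower().count(target.lower())
    return json.dumps({"text": text, "target": target, "count": count})
```

So instead of "guessing" the letter count from training data, the model would be instructed to hand `count_occurrences("strawberry", "r")` off to real code and relay the result.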
For neither approach do I need to maintain a list of fixes: for the first one I have no way of knowing what it's doing in detail, and I just need to keep the documents themselves.
For the second one it's literally human-readable text.
null@piefed.au 3 hours ago
I don't know the answer, and I don't know anything about how LLMs are tuned, but I think the answer is probably partially yes.
My supposition is:
Instead of providing manual answers to specific questions, you modify the bot's approach to answering different types of questions.
For example, if you ask "what color are bananas" the bot answers this by looking for discussions about the color of different fruits and selects the word that seems to be provided most often.
Alternatively, if you ask "what is two plus two", when the bot parses the question it recognises that it's a math question, so instead of looking for text discussions of math, it converts it to an equation and returns the solution.
Previously, I guess bots were answering the "how many r's" question in the text-based kind of way, and the fix made the bot interpret it in a more mechanical / mathematical kind of way.
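The guessed fix above could be sketched as a toy router that sends counting questions to mechanical code instead of text lookup (the regex pattern and the `None` fallback are my invention, purely illustrative):

```python
import re

def answer(question: str):
    """Hypothetical router: handle "how many X's in Y" mechanically;
    everything else would fall through to normal text generation."""
    m = re.match(r"how many (\w)'s in (\w+)", question.lower())
    if m:
        letter, word = m.group(1), m.group(2)
        return word.count(letter)  # mechanical count, always correct
    return None  # placeholder for the usual text-based answer
```

Under this sketch, `answer("How many r's in strawberry")` is computed by actual counting, while a question like "what color are bananas" falls through to the old text-based path.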
It's a pretty salient demonstration of a bot's inability to reason. They're good at making sentences, but they can only emulate reasoning.
otter@lemmy.ca 3 hours ago
That would be the good way of doing this, but I remember that right after the strawberry issue was fixed it would still mess up similar queries. They might have hard-coded something in for that one, at least initially.