This is exactly what I was thinking. They aren’t programmed to follow the user’s instructions to begin with. Why is it a surprise when they deviate from them?
It’s a fundamental misunderstanding of the ML that goes into these LLMs. They are prediction machines. They might have “specialist” submodels or whatever that are better at predicting specific areas, but that’s about it.
XLE@piefed.social 1 day ago
I hope that goes without saying, but you’re correct. The humanizing language about AI in this article (freaking “schemes”?!) is cribbed straight from the companies making misleadingly positive claims about it. Bit disappointing to see The Guardian falling for it.
In addition to humanizing the chatbot, it implies the thing is getting better at doing things, not worse.