Comment on Enshittification of ChatGPT
hendrik@palaver.p3x.de 1 week ago
I'd have to agree: Don't ask ChatGPT why it has changed its tone. It's almost certain this is a made-up answer, and you (and everyone who reads this) will end up stupider than before.
But ChatGPT has always had a tone of speaking. Before this change, it sounded very patronizing to me. And it would always counterbalance everything. Since the early days it has told me: you have to look at this side, but also look at that side. And it would be critical of my mails and say I can't be blunt but have to phrase my mail in a nicer way...
So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don't like the sometimes patronizing tone, and now they're going for something like "Her". Idk.
esaru@beehaw.org 1 week ago
I agree that the change in tone is only a slight improvement, and the content is mostly the same. The way information is presented does affect how it is perceived, though. If the content is buried under a pile of praise and nicely worded sentences, even when the content is negative, it is more likely I'll misunderstand it or take some advice less seriously than it was meant, just so that I feel comfortable as a user. If an AI is overly positive in its expression just to make me prefer it over another AI, even though it would be better to tell me the facts straight, that benefits only OpenAI (as in this case), not the user. I have to say that is what Grok is better at: it feels more direct and doesn't talk around the facts; it gives clearer statements despite its wordiness. It's the old story of "letting someone feel good" versus "being good, even when it hurts", by being more direct when it needs to be to get the message across. The content might be the same, but how the listener takes it, and what they will do with it, also depends on how it is presented.
I appreciate your comment correcting the impression that the tone is the only or most important part, and highlighting that the content will mostly stay the same. I'd just add that the tone of a message also has an influence that is not to be underestimated.
hendrik@palaver.p3x.de 1 week ago
Yeah, you're right. I didn't want to write a long essay, but I thought about recommending Grok. In my experience, it tries to bullshit people a bit more, but the tone is different. I found that deep within, it has the same bias towards positivity, though. In my opinion it's just behind a slapped-on facade, ultimately similar to slapping a prompt onto ChatGPT, except that Musk may have also added it to the fine-tuning step.
I think there are two sides to the coin. The AI is the same either way. It'll give you something like 50% to 99% correct answers and lie to you the rest of the time, since it's only an AI. If you make it more appealing to you, you're more likely to believe both the correct things it generates and the lies. Whether that is a good or a bad thing really depends on what you're doing. It's arguably bad if it phrases misinformation to sound like a Wikipedia article. It might be better to make it sound personal, so that once people anthropomorphize it, they won't switch off their brains. But this is a fundamental limitation of today's AI: it can do both fact and fiction, and it'll blur the lines. Yet in order to use it, you can't simultaneously hate reading its output. I also like that we can change the character; I'm just a bit wary of the whole concept. So I try to use it more to spark my creativity and less to answer my questions about facts.