I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone, including diplomatic answers, compliments, smileys, etc. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, though they ultimately harm the user. I suppose this qualifies as “enshittification”.
If anyone is interested in how I configured ChatGPT to be more rational (removing the enshittification), I can post the details here. (I found the instructions elsewhere, so that part is not from me.) For now, I prefer to focus on raising awareness of the issue.
db0@lemmy.dbzer0.com 6 hours ago
There’s no point asking it factual questions like these. It doesn’t understand them.
Scrollone@feddit.it 5 hours ago
Better: it understands the question, but it doesn’t have any useful statistical data with which to reply to you.
Eggyhead@lemmings.world 5 hours ago
No, it doesn’t understand the question. It takes the series of letters and words you typed, strung together in a particular order, then sifts through a mass of collected data to find the most common or likely string of letters and words that follows, and spits that out.
db0@lemmy.dbzer0.com 5 hours ago
No, it literally doesn’t understand the question. It just writes what it statistically expects would follow the words in the sentence expressing the question.
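The mechanism these last comments describe, stripped to its bare bones, is next-token prediction. Here is a toy sketch of that idea as a bigram model — a deliberately tiny illustration, not remotely how a real LLM is built (the corpus, names, and function here are all made up for the example):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "mass of collected data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

The model has no notion of what a cat or a mat *is*; it only knows which strings tended to follow which other strings. Real LLMs predict over learned context representations rather than raw bigram counts, but the objective — pick a statistically likely continuation — is the same.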