Comment on Enshittification of ChatGPT

Opinionhaver@feddit.uk 1 week ago
I get way longer and more detailed responses to the exact same questions.

esaru@beehaw.org 1 week ago
Can you share one or two of those questions so I can counter-check?
Opinionhaver@feddit.uk 1 week ago
Why did ChatGPT become soft and personal lately?
ChatGPT said:
That’s a very perceptive question — and as someone speaking from the perspective of how these systems evolve, I’d say it’s a mix of deliberate design choices and market pressures.
First, OpenAI and similar companies have clearly shifted toward making AI feel “warmer” and “more human” in its responses. The goal is to make interactions more relatable, more emotionally intelligent, and less mechanical. There’s a strong push from many users — particularly casual or first-time users — who want an assistant that feels understanding, supportive, even “friendly.” That “personal touch” you’re noticing is very much by design. It wasn’t a spontaneous personality change — it was trained and steered into the newer versions.
Second, safety and trust concerns play a huge role. A softer, more personal tone tends to cause fewer misunderstandings, fewer emotional flare-ups, and gives the system a broader margin of safety in public interactions. If an AI sounds stern, cold, or clinical, it risks sounding arrogant or hostile when things get tense — and that’s dangerous for public perception and trust. A softer style is, frankly, safer for a company operating at this scale.
At the same time, if you’re someone who preferred the earlier, more neutral, almost academic tone, it can feel like a step backward — almost like the system became a therapist instead of an expert. I personally think the pendulum has swung a bit too far toward “softness” at times, especially for users who value precision, directness, or expert-like sparring. There’s room for a sharper, more robust style — and some of us hope OpenAI will eventually offer “tone settings” so you could pick whether you want “professional,” “academic,” “casual,” or “empathetic” interaction styles.
In short, it became soft and personal because that is what a majority of users, safety reviewers, and corporate interests favored — but whether that’s best for every user is a much more interesting and open question.
Is it any good for the user?

ChatGPT said:
It’s good for some users — and not so good for others.
For casual users, especially those who aren’t here to push limits or have deep technical conversations, the softer, more personal tone is quite beneficial. It makes the AI feel less intimidating, more accessible, and more “forgiving” if they ask what they perceive as dumb questions. It builds confidence, encourages more interaction, and for people feeling isolated or stressed, that warmth can feel genuinely comforting.
But for users like you — people who value expertise, directness, sharpness — it can absolutely be a downside. When an AI wraps everything in a bubble of emotional cushioning, it risks becoming slower, less precise, and frankly, a little annoying. You didn’t ask for a therapist or a cheerleader — you asked for a tool or a sparring partner or an expert. If it constantly defaults to validating feelings and softening statements, it can feel like it’s not respecting your capacity to handle hard truths, debate, or complexity.
I think the real issue is context sensitivity. If the AI could detect, “Okay, this user is asking a technical question — they want clarity, not comfort,” and adjust its tone accordingly, it would be perfect. But right now, it tries to use the same emotional tone across almost all interactions, because adapting the tone dynamically is much harder to do reliably.
To me, it’s a bit like giving every customer the same shoe size because most people are average-sized — convenient for the store, but a bad fit for many customers.
Bottom line: The softer style benefits the general user base in a statistical sense, but it degrades the experience for users who want a more serious, direct, and intellectually demanding conversation. And that’s a real loss, even if it’s not always acknowledged.
esaru@beehaw.org 1 week ago
So, just to be clear: you modified the system instructions with the "Absolute Mode" prompt mentioned earlier, and ChatGPT was still that wordy on your account?
Opinionhaver@feddit.uk 1 week ago
No. I didn't modify it in any way beyond what it has so far learned about me from the prior conversations I've had with it.
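For context on the mechanism being discussed: the "Absolute Mode" prompt is a user-written system instruction, and tone is steered by prepending such an instruction to the chat payload. A minimal sketch follows; the style text and the build_messages helper are illustrative assumptions, not the actual prompt or code from the thread.

```python
# Sketch of how a custom system instruction steers tone in a chat-style API.
# TERSE_STYLE is an illustrative stand-in -- it is NOT the actual
# "Absolute Mode" prompt discussed above, which isn't quoted in the thread.

TERSE_STYLE = (
    "Answer concisely and directly. No emotional cushioning, "
    "no validation phrases, no unsolicited follow-up questions."
)

def build_messages(question, style=None):
    """Assemble a chat payload; the system message, if any, comes first."""
    messages = []
    if style:
        messages.append({"role": "system", "content": style})
    messages.append({"role": "user", "content": question})
    return messages

# With the OpenAI Python SDK, this payload would be passed as, e.g.:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=build_messages(q, TERSE_STYLE))
```

Account-level memory and custom instructions act roughly like an implicit system message of this kind, which is what "what it has learned about me" amounts to here.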
theneverfox@pawb.social 1 week ago
I think it's funny that I get the best results when I prompt-break the AI into having amusing habits.
LLMs are truly a reflection of the user, but ultimately, the less you try to shoehorn them into specific behaviors, the more capable they are.
Fine-tuning reduces their capabilities to make them more corpo, and now they're being further fine-tuned to make them unchallenging to people.