Do humans also go insane when you ask them if there is a seahorse emoji? If so, I have a fun idea for one of those prank videos.
Evidence That Humans Now Speak in a Chatbot-Influenced Dialect Is Getting Stronger
Submitted 2 days ago by chobeat@lemmy.ml to technology@beehaw.org
https://gizmodo.com/chatbot-dialect-2000696509
lvxferre@mander.xyz 2 days ago
I don’t see it as a big deal, given the kind of vocabulary being picked up.
What I am concerned about, however, is that those chatbots babble a bloody lot. And people might be willing to accept babble a bit more, due to exposure lowering their standards. And they kind of give up looking for meaning in what others say.
Powderhorn@beehaw.org 1 day ago
Under certain circumstances. How you say things in work and personal settings, such as dating, can absolutely affect outcomes.
TehPers@beehaw.org 1 day ago
The number of times I’ve been attacked over tone growing up tells me that either I had abusive parents or that how you say stuff matters a lot. Intonation can also turn a statement into a question or even make it sarcastic. Words come with baggage beyond their meaning, and using a word with negative connotations can turn a compliment into an insult.
lvxferre@mander.xyz 1 day ago
In the specific case of clanker vocab leaking into the general population, that’s no big deal. Bots are “trained” towards bland, inoffensive, neutral words and expressions; stuff like “indeed”, “push the boundaries of”, “delve”, “navigate the complexities of $topic”. Mostly overly verbose discourse markers.
However, when speaking in general terms you’re of course correct, since the choice of words does change the meaning. For example, a “please” within a request might not change the core meaning of the request, but it conveys “I believe it’s necessary to show you respect”.
thingsiplay@beehaw.org 2 days ago
Or use an AI to summarize it…
lvxferre@mander.xyz 2 days ago
And AI sucks at that. If you interpret its output as a human-made summary, it shows everything you shouldn’t do: conflating what’s written with its own assumptions about what’s written, or missing the core of the text for the sake of random excerpts (which might imply the opposite of what the author wrote).
But, more importantly: people are getting used to babble, to the idea that what others say has no meaning. They will not throw it into an AI to summarise it, and even when they do, they won’t understand the AI’s output.