Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes
Submitted 1 month ago by Gaywallet@beehaw.org to technology@beehaw.org
Comments
miracleorange@beehaw.org 1 month ago
You mean the problems that experts said 10+ years ago would happen are happening?
orca@orcas.enjoying.yachts 1 month ago
Shit in, shit out. That’s AI. You can’t guarantee a single thing it says is true, and you have to play whack-a-mole forever to get it to behave. Imagine knowing this and still investing time and money in it. We could be investing that in education and making the human experience better, but instead we’re stuck watching capitalists harness it to replace people, and shoving half-baked ideas out the door as finished products.
Look, I love tech. I’ve worked in tech for 20 years. I’ve built apps that use AI. It’s the one tech that I despise watching capitalists have control of. It’s just chatbots all the way down that don’t know what they’re regurgitating, and eventually they’re going to be vacuuming up nothing but other AI content. That is going to be the future. Just bots talking to other bots. Everything completely devoid of humanity.
Malgas@beehaw.org 1 month ago
Shit in, shit out. That’s AI.
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
—Charles Babbage, on his Analytical Engine, 1864
AnarchistArtificer@slrpnk.net 1 month ago
I need to add that to my quotes book, it’s great
ContrarianTrail@lemm.ee 1 month ago
Replace “AI” with “humans” and this rant is still perfectly coherent.
stembolts@programming.dev 1 month ago
Sample input from a systematically racist society, get systematically racist output.
No shit.
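For anyone who wants to see the mechanism rather than take it on faith, here's a toy sketch (Python with numpy and scikit-learn; the data is entirely made up): train a model on historical decisions that penalized one group, and it learns the penalty from a proxy feature even though no demographic column ever appears.

```python
# Toy demo: a model trained on biased historical decisions learns the
# bias through a proxy feature. All data here is synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Proxy feature: 1 = applicant from a historically redlined zip code.
redlined = rng.integers(0, 2, n)
# Actual qualification, independent of zip code.
score = rng.normal(0.0, 1.0, n)

# Biased historical labels: past reviewers penalized redlined applicants
# regardless of qualification.
approved = (score - 1.5 * redlined + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([score, redlined])
model = LogisticRegression().fit(X, approved)

# Hold qualification fixed at the mean and vary only the proxy:
for group in (0, 1):
    p = model.predict_proba([[0.0, group]])[0, 1]
    print(f"P(approved | average score, redlined={group}) = {p:.2f}")
# The model reproduces the discrimination it was trained on.
```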
Gaywallet@beehaw.org 1 month ago
While it may be obvious to you, most people don’t have the data literacy to understand this, let alone use that understanding to decide where these systems can or should be implemented and how to counteract the baked-in bias. Unfortunately, as the article mentions, people believe the problem is going away when it is not.
leisesprecher@feddit.org 1 month ago
The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate when the system is presented to you as objective? And since AI models don’t really explain how they arrive at a decision, you can’t even audit them.
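The one thing you can still do is audit the outputs in aggregate, even when the model itself is a black box. A minimal sketch of that kind of check (Python with pandas; the decision log and its column names are assumptions, not a real system’s schema):

```python
# Black-box output audit: compare rejection rates across neighborhoods.
# Assumes a CSV log of decisions; file and column names are hypothetical.
import pandas as pd

decisions = pd.read_csv("decisions_log.csv")  # columns: neighborhood, rejected (0/1)

# Rejection rate per neighborhood, worst first.
rates = decisions.groupby("neighborhood")["rejected"].mean().sort_values(ascending=False)
print(rates)

# Four-fifths rule of thumb: flag any group whose selection (approval)
# rate falls below 80% of the best-off group's rate.
selection = 1 - rates
flagged = selection[selection < 0.8 * selection.max()]
print("Groups failing the four-fifths check:", list(flagged.index))
```

It won’t tell you why the system discriminates, but it will at least surface that it does.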
stembolts@programming.dev 1 month ago
True, and it upsets me because we can’t even get a baseline agreement from the masses to correct systemic inequality.
…yet, simultaneously, we’re investing academic effort into correcting symptoms spawned by a problem that many believe doesn’t exist.
To put this another way. Imagine you’re a car mechanic, someone brings you a 1980s vehicle, you diagnose that it is low on oil, and in response the customer says, “Oil isn’t real.” That’s an impasse, conversation not found, user too dumb to continue.
I suppose I can wrap up my whole message in one closing statement: people who deny systemic inequality are braindead, and for whatever reason they were on my mind while reading this article.
I’ll be curious what they find out.
ininewcrow@lemmy.ca 1 month ago
This is the thing I keep pointing out about AI:
We’re like teenaged trailer trash parents who just gave birth to a genius at the trailer park where we’re all dysfunctional alcoholics and meth addicts …
… now we’re acting surprised that our genius baby talks like an idiot after listening to us for ten years.