As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.
Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.
Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”
lvxferre@mander.xyz 5 months ago
For writers, that “no AI” is not just the equivalent of “100% organic”; it’s also the equivalent of saying “we don’t let the village idiot write our texts when he’s drunk”.
Because, even setting aside all the paranoia surrounding A"I", those text generators state things that are wrong without a single shadow of doubt.
Zaktor@sopuli.xyz 5 months ago
Sometimes. Sometimes it’s more accurate than anyone in the village. And it’ll reliably keep getting better. People relying on “AI is wrong sometimes” as the core plank of opposition aren’t going to have a lot of runway before it’s so much less error-prone than people that the complaint is irrelevant.
The jobs and the plagiarism aspects are real and damaging and won’t be solved with innovation. The “AI is dumb” complaint is already only selectively true, and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.
lvxferre@mander.xyz 5 months ago
So does the village idiot. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relied on the output of those three. Or of an LLM bot.
You’re distorting the discussion from “now” to “the future”, and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.
Ilandar@aussie.zone 5 months ago
Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on “haha Google made a funny” isn’t going to cut it forever.
CanadaPlus@lemmy.sdf.org 5 months ago
Occasionally. If you aren’t even proofreading it, that’s dumb, but it can do a lot of heavy lifting in collaboration with a real worker.
For coders, there’s actually hard data on that. You’re worth about a coder and a half using CoPilot or similar.