Kills the planet
Steals from artists
Widens inequality
Puts people out of work
Reinforces prejudices
Makes us stupid
Makes everything generic
Blows up the economy
Supports oligarchs
Can’t be trusted, hallucinates and lies
Overhyped
Submitted 3 weeks ago by rabiezaater@piefed.social to [deleted]
As a thought experiment I considered all of these points and here are my thoughts.
Kills the planet: Got me there. That stupid datacenter crap where they need 1000tb of ram and a zillion 5090RTXs and an entire nuclear power plant just to generate a chocolate chip cookie recipe needs to fucking go. Self-hosted ai isn’t that bad though. You can still argue that running a self-hosted koboldcpp on a 10 watt raspberry pi ALSO destroys the planet, but so does all technology. Imagine living with no A/C, no deodorant, no running water, no toilet paper, just to make the earth livable for an additional 100 years or whatever. Fuck that. I chose to not have kids, so I’m still doing my part, which is more than what the majority of the population can be arsed to do.
Steals from artists: I don’t really understand this argument despite it being the most common anti-ai argument. What type of art is ai really capable of replacing humans at? Hentai and video game 3d model textures? It’s useless at making 3d models even to the most fanatic of ai worshippers. I can watch porn on pornhub for free and would never and have never commissioned a human artist to make porn pictures for me. Am I stealing from hentai artists by not commissioning them for their work and choosing other means of looking at boobs?
Buying textures for your hand made 3d models only supports the corporation selling them and the original artists get very little if anything at all. Using ai to circumvent spammy price gouging for 3d model textures seems like a better way to fight back to me. Another point is that copyright trolls are always harassing random youtubers over bullshit claims which DOES destroy livelihoods. Using ai to create a unique illustration that isn’t registered in a copyright strike database when you REALLY weren’t going to pay a $20 license for some spammy corporate licensed art either way really seems like a legitimate use of ai to me.
Another thing is memes even. I would 100%, absolutely, positively, never ever in a million years commission a human artist for the hundreds of dollars it usually costs to make an illustration for a meme in a shitpost I was trying to make. Yet people get out their torches and pitchforks anytime someone uses ai in a shitpost. I just don’t get it. It’s the “pirating software STEALS money from developers” argument all over again. Is it REALLY stealing if you WEREN’T going to pay for whatever it was otherwise? In 2018 the average person online was practically up in arms over how unfair copyright law is, and everyone dropped it to hate ai instead. Seems a little too convenient if you ask me. I think a lot of people have been played.
Widens inequality: Employers using ai to screen out the applicants that aren’t desperate enough and are therefore less likely to submit to abnormally cruel or illegal terms could be an example of this. Employers in America generally have too many freedoms in the first place. We aren’t going to get out of this downward spiral of wages not keeping up with costs of living without doing some stuff that would be really unpopular with all the powerful people in charge of it all. I’m not sure that they need ai to continue colluding together to treat us all like trash. It will eventually devolve into all-out violence if no one forces them to stop, ai or not.
Also facial recognition cameras, more about that further down.
Puts people out of work: I don’t have any good supporting or opposing arguments for this one because I don’t know of any strong examples. Ai is 1000% shittier than a human at any given task for 0% of the cost, which is enough to keep an american corporation satisfied for most purposes, at least in theory.
Reinforces prejudices: I’m not going to be like “provide examples or it doesn’t count” because it’s lame and stupid when people do that, but my best guess for this one is it’s talking about how ai can be used to reinforce white nationalist ideology online in bot swarms and stuff. An ai can generate pro christo-fascist propaganda just as much as it can generate pro-democracy propaganda. I wish we could harass christian nationalist type people online with ai but it seems to be only the bad guys doing it. Go on reddit and say anything positive about marijuana in any context besides “my grandma is dying of cancer and marijuana allows her to not be in pain”. You will have people telling you to grow up and stop being a piece of shit. Meanwhile, you can speak out in support of bombing poor people in the middle east and no one bats an eye. Why can’t we harass the piece of shit people with ai? I guess you got me on this one. It only gets used for spreading christian nationalist ideology for some reason. But this COULD change.
Makes us stupid: A few days ago I used a self-hosted ai to help write a python script to run object recognition on the cctv cameras for my home network and it only took an afternoon. It would have taken longer to do this if I truly had to figure out and research every little detail and function name myself, but I still could have done it. Sure, there was some incorrect stuff in it, but fixing that was still faster than doing it from scratch. I used the time I saved to also program a graph that shows the temperature history on my weather station. Does this mean I am stupid?
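For a sense of what that kind of afternoon project involves: here is a minimal, hypothetical sketch of watching camera frames for change, using simple frame differencing with numpy rather than real object recognition (which would need a model library). All names and thresholds here are invented for illustration, not taken from the poster's actual script.

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray, threshold: int = 25) -> np.ndarray:
    """Return a boolean mask of pixels that changed between two grayscale frames."""
    # Cast to a signed type so the subtraction can't wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def has_motion(prev_frame: np.ndarray, frame: np.ndarray,
               min_changed_fraction: float = 0.01) -> bool:
    """Flag the frame if more than a small fraction of pixels changed."""
    mask = motion_mask(prev_frame, frame)
    return bool(mask.mean() > min_changed_fraction)

# Simulated frames: a static background, then one with a bright blob in it.
background = np.zeros((120, 160), dtype=np.uint8)
moved = background.copy()
moved[40:60, 50:70] = 255  # the "intruder": a 20x20 bright patch

print(has_motion(background, background))  # identical frames: no motion
print(has_motion(background, moved))       # blob covers >1% of the frame
```

A real version would read frames from the camera stream in a loop and hand flagged frames to a detection model, but the skeleton above is the shape of the task.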
Makes everything generic: 100% true. In 2014 or so, you could find anything you wanted on the internet. Now every single webpage is one big nothing-burger. Would corporate enshittification alone have brought things to this point even without ai? Maybe so, maybe not. The point stands.
Blows up the economy: It definitely provides a coverup excuse for the systematic price gouging of essential microchips and computer components, sure.
Supports oligarchs: This is true. Using non-self-hosted ai even without paying for it does support oligarchs. Look at Grok for example. It’s a blatant fascist ideology propaganda machine. The other bots probably do the same thing but more subtly. I bet if you asked chatgpt about marijuana, transgender rights or atheism it wouldn’t be supportive. Yet if you asked chatgpt to run an online bot harassment campaign to tell transgender people and marijuana users how big of a piece of shit they are, there would be little pushback and it would say things of suspiciously higher quality than the other way around. They’d probably quietly and temporarily switch it over to the paid model for that one to make it generate higher quality hate speech without charging you for it. I’m not going to try it though.
Can’t be trusted, hallucinates and lies: Sure. You can’t trust posts on the internet either. Sometimes I find it easier to do my research and differentiate between bad advice and not-bad advice than it is to just start from nothing, but most of the seriously potentially useful stuff is usually banned from ai models anyway.
Overhyped & overpromised: I guess. See “Puts people out of work”. 1000% worse for 0% of the cost is a no-brainer to an american corporation. To cut down on backlash they probably have to pretend replacing customer support roles with bots is “actually better”.
Can’t generate outside of its training data: Some self-hosted ai models are compatible with being connected to a websearch, which means all the non-self-hosted ones also have that. Then you have ai sifting through ai slop articles trying to guess which information is useful and which isn’t. The thought of making an ai sift through another ai bot’s poop is funny to me.
Is creating an obscene surveillance state: This is objectively the worst part about the advent of ai. Ai-powered facial recognition allows law enforcement to have an easier time tracking down and harassing the types of people that the dominant ideology (the christian nationalists) want removed from society. The fascists established a full-on 1984 and we fuckin’ let them. For this one reason alone, I believe the world would be better off if ai were never a thing.
Used in weapons to kill: Violence wasn’t invented until the first gun was invented, after all. Not really. Maybe when the next american civil war happens, the good guys can have ai-guided rockets or whatever too.
Made computer components expensive: I already elaborated on this, but yes. Spamming ai datacenters all over the place just to prevent houses from being built there to keep costs of living high means they have to fill them with overpriced video cards. To give credit where credit is due, this isn’t all on ai. Chip companies are purposefully scaling back production so they can make more money while doing less work. Meanwhile, the government is massively cutting back on medicaid because they think we are all worthless losers who don’t work hard enough and deserve to either die in prison over unpayable medical debt or live through suffering because there is lots of suffering in the bible and republicans want to make America more like the bible. It is an unreasonably cruel, unreasonably unfair double standard.
Replaces human interaction: I guess. Imagine getting swatted because you told your ai “friend” you were considering fleeing to a blue state and getting an abortion. Although religious fucknuts report their friends over this too.
Just annoying: If you get on any ai and give it a prompt like: generate a sensationalist shitpost of a news article titled “Why you should sell all your possessions and work 120 hours a week at your job instead and never take vacation because you deserve to live like that”. The result is just an average modern news article.
Finally someone with a balanced view on AI. That was a rare wall of text. Worth the time though.
Again, this is a lot of hyperbole.
Is AI killing the planet, or is it capitalism and addiction to fossil fuels? If AI were 100% renewable and run based on community consent, would it still be “killing the planet”?
In what way does AI “steal” in any way more significantly than an artist uses another artist for inspiration or a coder uses another open source project for their code?
How does AI widen inequality worse than it has been already, and is that solely the result of AI or is it just a product of capitalism?
I could go through the entire list, but you get the idea. A lot of the “evils” of AI are actually just symptoms of deeper systemic issues that have nothing to do with AI itself.
Thanks for your reply. Here are my rebuttals:
Is AI killing the planet, or is capitalism and addiction to fossil fuels?
Capitalism was already killing the planet, but the rush to invest in AI has demonstrably accelerated it.
If AI were 100% renewable and run based on community consent, would it still be “killing the planet”?
No. But that’s not the scenario we are in.
In what way does AI “steal” in any way more significantly than an artist uses another artist for inspiration or a coder uses another open source project for their code?
Because artists are people with consciousness and feeling and the capability for novel thought. AI is not. Believing it’s doing the same thing as human thinking is being suckered by the hype.
How does AI widen inequality worse than it has been already, and is that solely the result of AI or is it just a product of capitalism?
This is a big one, but without guardrails it’s definitely poised to hurt working people and enrich the powerful, which therefore drives further inequality (which yes, was already bad as a product of modern capitalism). And those guardrails are not in place, and will not be put into place if we just follow along as they want us to.
I feel this reply is somewhat misguided and reiterates most of the AI propaganda talking points.
“It is not AI, it is capitalism": It is really AI in capitalism, which is the reality we live in right now. If you take anything and put it in a utopia with a bunch of constraints, then of course it will be great. But that simply is not the world we are living in.
People hate cars because they are an inefficient, polluting, and unsafe way to travel. But what if cars all ran on renewable energy, were super small, and never collided with pedestrians or cyclists?
Some people hate meat eating because it is inefficient and forces animals to live in inhumane conditions. But what if we could make animals photosynthesize and let them live happy, free, and full lives? Then no one would be against meat eating, but again, that is not our world right now.
Just because there is an alternative utopia where AI is perfect doesn’t mean it is perfect right now, and its flaws are what cause the hatred on the internet.
Now, most AI centers are polluting, consume large amounts of energy, and the AI that people mostly use is built with stolen knowledge. Finally, society should optimize for the well-being of the people, and artists are people; AI is not. All the AI people use nowadays funnels the money to the richest few, while the majority of the population, even AI experts, don’t have the means to train useful AI models as of now.
I loathe AI for multiple, personal reasons:
When I need to contact customer support of some sort, there is an AI bot that is no use and there are no real humans, because the AI is cheaper. I won’t get the help I need or it’s too difficult to reach.
My native language is a bit of a difficult one, and many stores (especially online) are starting to translate everything with AI, which makes the text absolutely incomprehensible. It’s hard or even impossible to understand even the basic descriptions or the manuals.
Browsers have those forced AI-summaries when you try to look for something and those are often both wrong and impossible to turn off. (Or if it’s possible to turn off, they keep turning back on.)
People I know are literally believing everything from those summaries and such and are very confidently wrong/misunderstanding whatever basic thing. It’s very annoying. (“Let’s ask STSÄTKEEPEETEE!”)
Being parasocial online is becoming frustrating as I have been accused of being an AI bot on multiple occasions just because of the way I write in English. Knowing even basic grammar makes you a bot these days.
It’s burning the environment down, destroying the shambles of the global economy, and being constantly shoved down everyone’s throats even though it’s only impressive to people who don’t understand it
Imagine a shitty robot was just made available for free.
The shitty robot replaces you at work. It performs way faster with worse results, but the company hires a robot “expert” that fixes the results just enough that the product appears to be working. (It’s not.) You are now starving.
The shitty robot tells your kids that porn is a viable career path. And that they should kill themselves.
The shitty robot starts showing up everywhere, in advertising, TV shows, customer support lines, schools.
The shitty robot makes shitty art really fast, which people can sell or use how they want. Artists are now starving.
Think of how shitty and scam-filled the early internet was. Did we abandon it because of how shitty it was at first, or did we develop it and tweak it to its full potential?
I mean, even Linus Torvalds acknowledges the benefits ffs
Think of how shitty and scam filled the early internet was. Did we abandon it because of how shitty it was at first,
It wasn’t though… That was mid to late term internet
Think of how shitty and scam-filled the early internet was. Did we abandon it because of how shitty it was at first, or did we develop it and tweak it to its full potential?
I have been on the internet since 1992, and the internet today is by far the shittiest and most scam infested it has ever been in my time (and I doubt it was worse in the 80s)
Few things make me more depressed than thinking about the evolution of the internet, from where it started to where it is today.
I don’t doubt AI will follow a similar path, except somehow it is already starting in a much worse place than the internet ever did, and the downside potential is far greater and frightening.
We did exactly the opposite of what you say: we monetized the scams and turned the entire internet into a scam-ad-infested wasteland for the greed of 10 people, at the cost of driving the rest of humanity to mindless addictions, grooming, and manipulating the human psyche to eke out more money and mass-propagandize.
The majority of the internet is 10x worse than it was 10 or 20 years ago.
This clearly shows that you are willing to talk out of your ass. The early Internet was filled with some of the smartest people alive, the mass of shitty content did not arrive yet because it wasn’t accessible to the masses. At the beginning, the Internet was literally only scientists. A little later, it was only very open people not scared to try something new and excited about the future and about foreign cultures, with corresponding amazing content.
Only after this initial period, when the internet became commonly used, did it turn to shit. Stop trying to manufacture arguments for your position and truly do what you claimed to set out to do: understand other people’s concerns.
Reading through this thread and your responses gives the strong impression that you just want to argue while at the same time not being very well informed on the matter. Where you do respond, it’s mostly whataboutism rather than actually addressing the comment you are responding to.
Your post asks “Why do people hate AI?” and then goes on to validate many of the commonly heard reasons people have for hating AI. You end with a suggestion that if we could develop AI into something else in the future, it might be good.
So it seems you already understand why people hate AI and are promoting an agenda rather than asking a genuine question.
I gave positives alongside the negatives, which most people vigorously against AI (which seems to be all of the fediverse) refuse to acknowledge as positives. I do have an agenda, which is to try to understand why there is such a blind and vigorous hate for something I and a lot of people find quite useful, and which could be beneficial for productivity if people use it effectively.
I do have an agenda, which is to try to understand…
If your goal is to understand why people feel the way they do then why are you arguing with people and attempting to refute their responses instead of thoughtfully reflecting their concerns back to them to confirm if you have understood?
Everything you said is why people hate it, and that hate is justified.
My point is that there are downsides, but there are also upsides, just like anything else. The internet in general has dramatically increased electricity usage from what it was previously, but people are acting like AI is adding some unprecedented load on the grid, which in the vast majority of places it is not (despite what a lot of online discussion would have you believe). Any artist or coder uses the art or code of others for inspiration, and yet AI is evil for doing the same? It’s just a lot of negativity without acknowledging the benefit.
The only benefits you’ve mentioned are coding and helping you with D&D characters
Seems like you know the answer to your own question and you’re just looking to argue and tell people who dislike it they’re wrong.
You need to check your facts about the power consumption part
There are upsides. Specifically in the medical field, and that’s where it should stay. Everything else is a downside. Humans taking inspiration is one thing. Humans copying directly is called plagiarism, and the ai shit is plagiarism. It’s taking water from people. It’s using more electricity; that’s why they want to build nuclear plants to power them, and the taxpayer will ultimately foot the bill. It’s eating up all the consumer hardware. Driving up costs. Making people stupider. Shoved down our throats everywhere. It hallucinates like mad, and it’s costing people jobs. It’s all downsides, except for a niche use case.
It will change society. It won’t improve skills.
Studies already show the opposite at play. arxiv.org/pdf/2506.08872v1
If the LLM could teach you how to code, but couldn’t do the coding for you, it would be a tool for improvement. But it isn’t used that way. Instead of saying “teach me how to code this”, people are more inclined to say “code this for me”.
On top of that, they’re controlled by corporations who are not in the slightest bit interested in your welfare, privacy or economic success. They will invade your privacy, fuck over the environment, fuck over people and load their LLMs with propaganda and barriers that serve their political and social interests.
And as a bonus, they’re a nightmare for the environment.
Having said all of that. I agree, they are going to fundamentally reshape society. But it’s like the industrial revolution. Yeah, we ended up with a more efficient society, but it didn’t make people freer, it further entrenched wealth in the hands of the wealthy, whilst fucking up the environment. That’s what LLMs are going to do.
We could do them differently. That implementation isn’t inherent in their nature. But we won’t do them differently, because the people pushing it want the shitty outcome, because it’s not shitty for them.
AI makes children stupid. AI is mostly used for making AI slop. AI is being used by governments to manipulate public perception. AI is being used to engage in scams. People are using it to cheat. AI is being used to offset critical thinking.
AI is being used by corporations to engage in mass layoffs to save a buck. AI is being used by police stations and federal agencies to identify people, with minimal success (misidentification). AI is being used to deny health claims without review. AI customer service is dogshit.
I was a futurist like you once. I wanted AI based on how the movies presented it. However, the reality is LLMs are being used not for human improvement, but instead for the purpose of creating a permanent underclass with few at the top.
TL;DR: fuck AI.
Chatgpt, list all instances where OP is trying to subvert people’s points with logical fallacies, & burn a couple hundred extra Wh while you’re at it, thanks. I’m sure it’d take less energy for me to do it, but nah
This book is probably more worth ur time than this post: ia801605.us.archive.org/29/items/…/aiboba.pdf It’s An Illustrated Book of Bad Arguments by Ali Almossawi
I love when people dismiss your argument without actually addressing it in any way, instead choosing to focus on pedantic logical fallacy classifications in a theoretical and non-specific way that never explains which fallacies you have committed, or where. Good stuff, really convinced me of your side of the argument.
Because it sucks balls, just like anyone that approves of it.
Wow, good discussion. Very civil.
I don’t think people hate AI per se - they hate big tech, and what big tech is doing with it. That’s a legitimate gripe, but it’s not the same thing as the technology being bad.
AI used well can be genuinely useful. I’ve dropped a couple of examples in other threads I won’t rehash here, but the short version is: there are real world uses for this tech (world modelling, medicine, robotics).
Hell, I built a clinical notes pipeline that takes the tedium of charting from 15-20 mins down to about 3, with a policy gate that rejects LLM output before it ever reaches me if it fails criteria I defined. None of it looks anything like the slop-firehose corporate rollout most people are reacting to.
lemmy.world/post/42920187/22058968
lemmy.world/post/44188294/22635793
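For the curious, a "policy gate" in this sense is just deterministic checks run on the model's draft before a human sees it. The commenter's actual criteria aren't shown, so this is a hypothetical sketch: the forbidden phrases and the required SOAP note sections are invented examples, not their real rules.

```python
# Phrases that should never appear in a finished note draft (invented examples).
FORBIDDEN_PHRASES = ("as an ai", "i cannot", "consult your doctor")

def gate(draft: str, required_sections: tuple[str, ...]) -> tuple[bool, list[str]]:
    """Check an LLM draft against hard criteria.

    Returns (accepted, reasons): accepted is True only when no rule fired.
    """
    reasons = []
    lowered = draft.lower()
    for phrase in FORBIDDEN_PHRASES:
        if phrase in lowered:
            reasons.append(f"contains forbidden phrase: {phrase!r}")
    for section in required_sections:
        if section.lower() not in lowered:
            reasons.append(f"missing required section: {section!r}")
    return (not reasons, reasons)

draft = "Subjective: ...\nObjective: ...\nAssessment: ...\nPlan: ..."
accepted, reasons = gate(draft, ("Subjective", "Objective", "Assessment", "Plan"))
print(accepted)  # True: all sections present, nothing forbidden
```

The point of the pattern is that rejection is cheap and automatic, so a failed draft just gets regenerated or discarded instead of wasting the reviewer's time.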
Worth noting too: taking a black-and-white position on anything is just less cognitively expensive than arriving at a nuanced one. That’s not a character flaw, that’s called “being human”. But that doesn’t mean the nuanced position is wrong.
PS: The electricity/water data centre stuff is maybe more complicated than the headline takes suggest. This might be worth actually reading before treating it as settled.
blog.andymasley.com/…/a-cheat-sheet-for-conversat…
YMMV and ICBW
Good resource there on energy consumption, thanks for sharing. I had heard some things about the energy use being overstated, or over-focused on, but that is a very comprehensive outline of the overall impact.
Hope it helped.
Lot of people with legitimate complaints, lots more bandwagon people.
Mostly anti-intellectualism and ego, as far as I can tell. Also, conflating someone’s business practices with a technology.
I have split opinions on it. The massive data centers that use so much power, hardware, space, etc are problematic. They get used as a replacement for human ingenuity, scarf down every bit of data they can, and aggregate it together in a way that even the owners don’t fully understand. They get manipulated to give answers that suit the owners’ wishes and fuel divisions in public discourse.
I take significantly less issue with locally run small model systems that you can put on your own machine. They’re not continually running/training and are generally treated more as a hobby toy, not some replacement for human understanding.
Totally agree on the smaller-scale local models. But this is my exact point here: it’s not inherently evil, it’s the implementation and the systemic issues around it that are the problem. Yet people act like all AI is inherently evil and wrong.
@rabiezaater@piefed.social @nostupidquestions@lemmy.world
generating ideas
LLMs don’t generate ideas, stricto sensu. They do, however, output names and words unbeknownst to the user, and I find that useful for esoteric (gnosis through chaos magick) purposes (this is how I, as an ESL person, learned some words I didn’t know before).
learn to code
As someone who has coded since childhood, I wouldn’t suggest relying on LLMs for that. They could be used to output a descriptive text about some function or library, but you must know LLMs are statistical machines; the output text is a chain of “which token is the most probable next?”, an auto-complete only slightly “better” than, say, Gboard’s auto-complete. They “hallucinate” precisely because they rely on statistics and randomness.
d&d [...] I need a character [...] it makes it up quick
Yes, this is one of the use cases where LLMs can thrive, as a die with hundreds of billions of sides.
get upset about AI “stealing” work with regard to code or other stuff that people willingly put out there for free for others to consume
Totally agree with you in this regard. Throughout history, humans have relied on other humans’ “ideas”. Most novelty stemmed from “what if I were to take this flamey thing that consumed the tree I used to sit on, and put it under this food?”, mashing up existing things. If we really want to appeal to nature, evolution is exactly that: merging two genetic sequences in an approximate manner while trying to replicate, and still I don’t see humans accusing newborns of “stealing genetic work from their ancestors”.
definitely useful in a lot of ways, [..] if [...] developed on a more localized and decentralized scale
I totally agree in this regard, too.
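The "which token is the most probable next?" chain described above can be shown with a toy table. The probabilities here are entirely made up; a real model computes them with a neural network over a vocabulary of tens of thousands of tokens, but the sample-and-append loop is the same in spirit.

```python
import random

# Invented next-token probability table. In a real LLM these numbers come
# from the network, conditioned on the whole context, not just the last token.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "end": 0.3},
    "dog": {"ran": 0.6, "end": 0.4},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(start: str, rng: random.Random) -> list[str]:
    """Repeatedly sample the next token until the 'end' token is drawn."""
    tokens = [start]
    while tokens[-1] != "end":
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        # Weighted random choice: this sampling step is the source of both
        # the variety and the "hallucinations" the comment mentions.
        tokens.append(rng.choices(list(probs), weights=list(probs.values()))[0])
    return tokens

print(generate("the", random.Random(0)))
```

This is also why "dice with billions of sides" is an apt description: every output token is one weighted dice roll.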
Well, you dismissed the lack of ethics of it all. Just because you do open source doesn’t mean everyone else does. And open source often acknowledges contributors, unlike LLMs. You can’t consent for other people.
It’s hideously destructive. Wastes electricity, wastes water, plays merry hell with anywhere the damned data centers pop up.
It’s unregulated and has already killed people. Multiple stories have come out where an LLM has encouraged suicide. Plus various dangerous outputs like the bleach as cake ingredient thing. Because…
It isn’t intelligent, it’s just a parrot. I’ll start paying attention when it can successfully count letters in words. So would you trust a random parrot that told you about something you know nothing about?
It doesn’t do a quarter of what it says. Translation should be its bread and butter and it can’t really manage that. There’s a reason the tech bros that hyped crypto are hyping this. Because they don’t actually know what it can or can’t do.
It’s approaching max efficacy for current techniques. More data is better in machine learning, but it’s finding the limit and it’s way closer than the scammers want to admit.
It’s destroying jobs before it can handle them. I’ve tried to use it before. I spent as much if not more time fixing its output than if I had done it myself. It gets to do my boilerplate sometimes now.
It’s making worse workers. All that time agonizing over a problem was spent learning how to do it at all. Now it shits out worthless garbage that the person doesn’t know what it does or how to fix it. Job security for me I guess.
It could be a useful technology, but the delusion that it’s capable of becoming AGI distracts from all the things it could be capable of if big companies actually tried to use them instead of the lazy implementations they’re chasing.
Source: Data engineer
I don’t, not in general.
There are good and bad uses of AI. For example I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general the creation of art is one of the best uses of AI I can think of; it doesn’t have serious consequences if it goes wrong, and it can easily be reviewed by a human whether it looks as it should.
But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.
I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data” or “AI is bad because of the environmental impact” or “AI is bad because of RAM prices” or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)” or such things; I think all of these are nonsense.
I believe in general that AI gets too much attention in the media. It’s really not that impactful.
There has to be a liability standard though, otherwise it completely does away with any possibility of even nominal accountability. If harm is caused because of a human, there is liability (whether directly or to whoever is responsible for that person’s actions). The same should be true for whoever employs an LLM for some purpose that results in harm. The LLM can’t be jailed or “shutdown” really; it’s incumbent upon the handler to assume liability for the activities they are involved with.
whoever employs an LLM
incumbent upon the handler to assume liability
I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.
But I remember reading some news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where the chatbots encouraged that the users harm themselves (in some cases even commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible for that unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.
Glad to see some sanity for once on here. It’s definitely not all good, but it’s not all bad either, and when people attribute all the evils of the world to it, they are being disingenuous.
What do you mean when you say AI?
Are you talking about all the different areas of research or just LLMs?
Both. I think people don’t even realize that there are non-LLM AI applications, and that has done a disservice to the field in general.
LLMs are interesting, and there are some very promising applications, but I’m concerned that the hype is going to damage the reputation of the technology in a way that could interfere with those things.
Regarding all other AI, there’s a lot of good that has come from AI research, and most people don’t recognize it. We have a tendency to shift our definition of “intelligence” to always exclude things that someone figures out how to get a computer to do.
Every day we use software that would have been considered AI years ago.
I’m not against AI, but I’m against the capitalist impulse to squeeze money out of anything to the detriment of all of humanity and the world.
My hope is that the LLM bubble bursts, big companies suffer terribly, the “AI” tag becomes bad marketing, and they let AI quietly return to research, where people can do some good with it.
Judging by the comments, I would say that most Lemmy users are aware of the downsides of LLMs. The average GPT user probably hasn’t heard of half the points mentioned in these comments.
Judging by the downvotes, I would say that many Lemmy users are also very passionate about it. The average GPT user might think of LLMs like any other tool.
Unfortunately, I get the feeling that Lemmy isn’t a suitable place for having a serious conversation about AI in general (not just LLMs). I would love to have that conversation, but this just isn’t the place for it, as you can see. The people here seem to be too focused on LLMs, how they’re developed and how they’re forcibly implemented in places where they provide zero value etc. AI in general is such a broad category, and this kind of biased conversation misses 90% of it.
When you say AI, people hear LLM, and that’s a genuine problem. When people say they hate AI, they probably aren’t thinking of things like image search, optical character recognition, automatic categorization of the events of your bank account, signal processing in audio and video, image upscaling, frame generation, design of 3D structures, route planning etc. There’s so much you can do with AI, but Lemmy users rarely mention those.
Yea, I am really getting disillusioned with the discussions on the fediverse around a lot of important topics, not just AI. I could picture a response from someone in this thread as “good, fuck off AI shill”. Not a very productive or healthy place for a discussion, as much as I support the goals and motivations behind the fediverse. Apparently there is an anti-AI zealotry that makes real dialogue impossible.
People who hate AI already have their !fuck_AI@lemmy.world community, and it seems to be leaking absolutely everywhere. How about all the other conversations that aren’t centered around hating AI? Surely, there’s a place for that too.
AI is great, LLMs are a waste. This has been the case for years before LLMs.
LLMs, which the current hype calls AI, are the equivalent of a scammy car salesman. To your example of having AI teach you to code: AI is awful at coding. It produces code that is the average of a junior developer’s output. It will look awesome from the outside because it will often mostly work at first, but in reality it’s going to be an unmaintainable mess. An experienced engineer could use one and produce a good outcome, in some cases maybe faster than without and in others slower, but the experienced-engineer requirement is a must. What this means is your AI teacher is itself a junior engineer whose output can’t be trusted on its own. That’s the level you’ll reach, and you may even pick up terrible habits that’ll set you back.
It will do all that and consume a ridiculous amount of resources for it compared to following a YouTube course.
I imagine a similar case is true for most industries: people who work in the industry see the absolute garbage coming out of it in large quantities, and have to listen to people from the outside, who don’t know what good looks like in that context, keep saying “oh, you’re redundant now, cuz look how good AI is”.
Meanwhile, it is trained on data stolen from the very people who are now losing their jobs, because the idiotic decision makers believe in how good the output looks. And there’s more: it’s doing this while wasting a massive amount of resources, which drives up prices for everyone (think of all the electrical devices that need computers, and electricity prices). And what money are they using for it? Oh yes! The money generated out of thin air by the corporations inflating this massive AI bubble, which is most likely going to end with a crash that will decimate the market (and therefore people’s investments and pensions). And if the past is any indication, the government will prop the companies up with tax money, so people will pay for it twice.
TheAlbatross@lemmy.blahaj.zone 3 weeks ago
In my experience, people who use LLMs as educational tools… don’t actually learn very well. They think they are, but they don’t retain the knowledge, nor do they seem able to infer from or apply that knowledge very well. There are even some early studies showing that using LLMs decreases cognitive ability, and considering how many kids and young people are using it to get their way through school and even higher education… I think we’re using AI to raise a generation of stunted minds. That’s going to be a bigger issue as time goes on, and with the state of the world and who owns the LLMs… it looks like a grim, sad future thanks to this tech.
rabiezaater@piefed.social 3 weeks ago
I would definitely be curious to see the research on that. I do think there are dangers with regard to relying on AI too heavily, but as a complement to existing technologies, I don’t think it can hurt any more than a calculator hurts your ability to do math.
TheAlbatross@lemmy.blahaj.zone 3 weeks ago
Here ya go.
It’s important to remember that it doesn’t really matter how you, personally, use their product or think it should be used, it matters how it is used by large swaths of society. Don’t get fooled into promoting some billionaire’s tool to shift wealth further upward and further denigrate the working class in your quest to get out of spending 15 minutes searching for the right D&D character picture.
TheAlbatross@lemmy.blahaj.zone 3 weeks ago
Here’s another fun piece you can read.
The incoming AI apocalypse isn’t about Skynet drones or malicious AGI, it’s about creating generations of vastly less educated and cognitively deficient lower classes and restricting traditional education to the wealthier echelons of society, gatekeeping the poor out by cost alone.
givesomefucks@lemmy.world 3 weeks ago
Because using AI atrophies the part of your brain that handles critical thinking…
The more you use it, the less you notice how you can’t do things without it.
If AI worked, that would be normal. The problem is it’s just good at conning people into believing it works.
That’s why you don’t realize that if it ever takes off and people start relying on it, they’re going to make it shittier and more expensive.
But again, the people already relying on AI have lost the critical thinking to see that coming. It’s like a bus driver closing their eyes because a bridge is closed. The bridge is still closed; they didn’t solve any problem. They just don’t see it coming now.
What you’re doing is asking all the passengers why they’re still screaming if all they need to do is close their eyes…