The cat’s out of the bag. Focus your energy on stopping fascist oligarchs, then on regulating AI to be as green and democratic as possible. Or sit back and avoid it out of ethical concerns while the fascists use it to target and eliminate you.
Except for the ethical question of how the AI was trained, or the environmental aspect of using it.
ruuster13@lemmy.zip 1 day ago
iAmTheTot@sh.itjust.works 1 day ago
Holy false dichotomy. I can care about more than one thing at a time. The existence of fascists doesn’t mean I need to use and like AI lmao
MoogleMaestro@lemmy.zip 1 day ago
The cat’s out of the bag
That’s 👏 not 👏 an 👏 excuse 👏 to be 👏 SHITTY!
The number of people who think saying “the cat’s out of the bag” is somehow redeeming is completely bizarre. Would you have said this about slavery in the 1800s too? Just because people are doing it doesn’t mean it’s morally or ethically right to do it, nor that we should put up with it.
teawrecks@sopuli.xyz 1 day ago
No one 👏👏 is 👏👏 excusing 👏👏 being 👏👏 shitty.
The “cat” does not refer to unethical training of models. Tell me, if we somehow managed to delete every single unethically trained model in existence AND miraculously prevent another one from ever being made (ignoring the part where the AI bubble pops), what would happen? Do you think everyone would go “welp, no more AI I guess”? NO! People would immediately get to work making an “ethically trained” model (according to some regulatory definition of “ethical”), and by “people” I don’t mean just anyone, I mean the people who can afford to gather or license the most exclusive training data: the wealthy.
“Cat’s out of the bag” means the knowledge of what’s possible is out there and everyone knows it. The only thing you could gain by trying to put it “back in the bag” is to help the ultra wealthy capitalize on it.
NoForwardslashS@sopuli.xyz 1 day ago
The world is on fire, but if you don’t add fire to the fire, you might get burned.
NotASharkInAManSuit@lemmy.world 1 day ago
13igTyme@piefed.social 1 day ago
There’s more to AI than LLM.
Bronzebeard@lemmy.zip 1 day ago
No one [intelligent] is using an LLM for workflow organization. Despite what the media will try to convince you, not every AI is an LLM, or even an LLM trained on all the copyrighted shit you can find on the Internet.
Deceptichum@quokk.au 1 day ago
There is no ethics under capitalism, so that’s a moot point.
TankovayaDiviziya@lemmy.world 6 hours ago
You can try to be ethical under capitalism, but it will be almost impossible.
Hackworth@piefed.ca 1 day ago
There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.
utopiah@lemmy.world 1 day ago
Can you please share examples and criteria?
Fmstrat@lemmy.world 5 hours ago
www.swiss-ai.org/apertus
Fully open source, even the training data is provided for download. That being said, this is the only one I know of.
utopiah@lemmy.world 3 hours ago
Thanks, a friend recommended it a few days ago indeed, but unfortunately AFAICT they don’t provide the CO2eq in their model card, nor an analogy or equivalence that non-technical users could understand.
dogslayeggs@lemmy.world 1 day ago
Sure. My company has a database of all technical papers written by employees in the last 30-ish years. Nearly all of these contain proprietary information from other companies (we deal with tons of other companies and have access to their data), so we can’t build a public LLM nor use a public LLM. So we created an internal-only LLM that is only trained on our data.
Fmstrat@lemmy.world 5 hours ago
I’d bet my lunch this internal LLM is a trained open weight model, which has lots of public data in it. Not complaining about what your company has done, as I think that makes sense, just providing a counterpoint.
utopiah@lemmy.world 6 hours ago
Are you solely using your own data, or are you fine-tuning an existing LLM, or rather using RAG?
I’m not an expert, but AFAIK training an LLM requires, by definition, a vast amount of text, so I’m skeptical that ANY company publishes enough papers to do so. I understand if you can’t share more about the process. Maybe me saying “AI” was too broad.
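The distinction matters for the ethics question: with RAG, proprietary documents are retrieved at query time and pasted into the prompt as context, so they never end up baked into the model’s weights. A toy sketch of just the retrieval step, using plain TF-IDF over hypothetical documents (real deployments use embedding models and a vector store, not word-overlap scoring):

```python
import math
from collections import Counter

# Hypothetical internal "papers" standing in for a proprietary document store.
DOCS = {
    "turbine_stress": "fatigue analysis of turbine blade stress under cyclic load",
    "coolant_flow": "coolant flow simulation for the heat exchanger loop",
    "materials_report": "corrosion resistance of alloy coatings in marine environments",
}

def tokenize(text):
    return text.lower().split()

def build_index(corpus):
    """Compute document frequencies and TF-IDF vectors for the corpus."""
    n = len(corpus)
    df = Counter()
    for text in corpus.values():
        df.update(set(tokenize(text)))
    vecs = {
        name: {t: c * math.log(n / df[t]) for t, c in Counter(tokenize(text)).items()}
        for name, text in corpus.items()
    }
    return df, vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus):
    """Return the name of the most relevant document, the 'R' in RAG.

    In a real pipeline, this document's text would be inserted into the
    LLM prompt as context; the model weights stay untouched.
    """
    df, vecs = build_index(corpus)
    n = len(corpus)
    qvec = {
        t: c * math.log(n / df[t])
        for t, c in Counter(tokenize(query)).items() if t in df
    }
    return max(vecs, key=lambda name: cosine(qvec, vecs[name]))
```

So a company could answer questions over 30 years of internal papers without ever training on them, which is why “internal LLM” can mean RAG on top of an open-weight model rather than training from scratch.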
tb_@lemmy.world 8 hours ago
Completely from scratch?
oplkill@lemmy.world 1 day ago
It can use public-domain or openly licensed data
utopiah@lemmy.world 6 hours ago
Right, and to be clear I’m not saying it’s not possible. This isn’t a trick question, it’s a genuine request to hopefully be able to rely on such tools.
Hackworth@piefed.ca 23 hours ago
Adobe’s image generator (Firefly) is trained only on images from Adobe Stock.
utopiah@lemmy.world 6 hours ago
Does it only use that, or does it also use an LLM?
riskable@programming.dev 1 day ago
It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).
That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣
The AI hater extremists seem to be in two camps:
The data center haters are the strangest, to me. Because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other, personal hardware).
Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.
acosmichippo@lemmy.world 1 day ago
colloquially today most people mean genAI like LLMs when they say “AI” for brevity.
that’s not the point at all. the point is, even before AI, our energy needs have been outpacing our ability/willingness to switch to green energy. We are STILL using more fossil fuels than at any point in the history of the world. Now AI is just adding a whole other layer of energy demand on top of that.
riskable@programming.dev 1 day ago
The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.
AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.
Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.
dogslayeggs@lemmy.world 1 day ago
Power source is only one impact. Water for cooling is even bigger. There are data centers pumping out huge amounts of heat in places like AZ, TX, CA where water is scarce and temps are high.
lightnsfw@reddthat.com 23 hours ago
Is the water “consumed” when used for this purpose? I don’t know how data centers do it, but it doesn’t seem like they would need to be constantly drawing water from a local system. They could even source it from elsewhere if necessary.