I’m so happy to see that AI poison is a thing.
Black Mirror AI
Submitted 9 hours ago by fossilesque@mander.xyz to science_memes@mander.xyz
https://mander.xyz/pictrs/image/bc29cbcd-8afa-4d09-99db-4d5f2a0b39a3.jpeg
Comments
passepartout@feddit.org 8 hours ago
AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.’”
Well said.
Natanox@discuss.tchncs.de 7 hours ago
Deploying Nepenthes and Anubis (both described as “the nuclear option”) is not hate. It’s self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin’ survive thanks to these tools.
Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.
faythofdragons@slrpnk.net 6 hours ago
Feels good to be on an instance with Anubis
chonglibloodsport@lemmy.world 5 hours ago
Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!
pewgar_seemsimandroid@lemmy.blahaj.zone 6 hours ago
One of the United Nations websites deployed Anubis.
Hexarei@beehaw.org 5 hours ago
Wait what? I am uninformed, can you elaborate on the ScummVM thing? Or link an article?
gaael@lemm.ee 1 hour ago
From the Fabulous Systems (ScummVM’s sysadmin) blog post linked by Natanox:
About three weeks ago, I started receiving monitoring notifications indicating an increased load on the MariaDB server.
This went on for a couple of days without seriously impacting our server or accessibility–it was a tad slower than usual.
And then the website went down.
Now, it was time to find out what was going on. Hoping that it was just one single IP trying to annoy us, I opened the access log of the day.
There were many IPs–around 35,000, to be precise–from residential networks all over the world. At this scale, it makes no sense to even consider blocking individual IPs, subnets, or entire networks. Due to the open nature of the project, geo-blocking isn’t an option either.
The main problem is time. The URLs accessed in the attack are the most expensive ones the wiki offers since they heavily depend on the database and are highly dynamic, requiring some processing time in PHP. This is the worst-case scenario since it throws the server into a death spiral.
First, the database starts to lag or even refuse new connections. This, combined with the steadily increasing server load, leads to slower PHP execution.
At this point, the website dies. Restarting the stack immediately solves the problem for a couple of minutes at best until the server starves again.
Anubis is a program that checks incoming connections, processes them, and only forwards “good” connections to the web application. To do so, Anubis sits between the server or proxy responsible for accepting HTTP/HTTPS and the server that provides the application.
Many bots disguise themselves as standard browsers to circumvent filtering based on the user agent. So, if something claims to be a browser, it should behave like one, right? To verify this, Anubis presents a proof-of-work challenge that the browser needs to solve. If the challenge passes, it forwards the incoming request to the web application protected by Anubis; otherwise, the request is denied.
As a regular user, all you’ll notice is a loading screen when accessing the website. As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources. As an AI company trying to scrape the website, you’ll quickly notice that CPU time can be expensive if used on a large scale.
I didn’t get a single notification afterward. The server load has never been lower. The attack itself is still ongoing at the time of writing this article. To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.
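(For anyone curious how the proof-of-work part works in practice, here is a rough illustrative sketch in Python, not Anubis’s actual code: the client has to burn CPU finding a nonce whose hash meets a difficulty target, and the server verifies the answer with a single hash. The difficulty value is an assumption picked for the example.)

    # Illustrative proof-of-work sketch (not Anubis's real implementation).
    # The server issues a random challenge; the client must find a nonce such
    # that sha256(challenge + nonce) starts with N zero hex digits. Verifying
    # costs one hash; solving costs many, which is what exhausts bulk scrapers.
    import hashlib
    import os

    DIFFICULTY = 4  # leading zero hex digits required (assumed value)

    def issue_challenge() -> str:
        return os.urandom(16).hex()

    def solve(challenge: str) -> int:
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int) -> bool:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * DIFFICULTY)

    challenge = issue_challenge()
    nonce = solve(challenge)          # expensive for the client
    assert verify(challenge, nonce)   # cheap for the server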
RedSnt@feddit.dk 5 hours ago
It’s so sad we’re burning coal and oil to generate heat and electricity for dumb shit like this.
rdri@lemmy.world 4 hours ago
Wait till you realize this project’s purpose IS to force AI to waste even more resources.
kuhli@lemm.ee 3 hours ago
I mean, the long-term goal would be to discourage AI companies from engaging in this behavior by making it useless.
andybytes@programming.dev 3 hours ago
This gives me a little hope.
andybytes@programming.dev 3 hours ago
I mean, we contemplate communism, fascism, this, that, and another. When really, it’s just collective trauma and reactionary behavior, because of a lack of self-awareness and awareness of the world around us. So this could just be synthesized as human stupidity. We’re killing ourselves because we’re too stupid to live.
essteeyou@lemmy.world 2 hours ago
This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number, then just drop all data from that site from the training data.
It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.
bane_killgrind@slrpnk.net 49 minutes ago
Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?
Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it’s going to be indistinguishable from real, large data sets.
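(A minimal sketch of the idea, purely illustrative and not the actual Nepenthes or Iocaine generator: a word-level Markov chain trained on any seed text will emit endless, superficially plausible pages for almost no CPU.)

    # Minimal word-level Markov chain babble generator (illustrative only).
    # Trained on some seed text, it emits endless plausible-looking junk
    # that a naive scraper can't easily tell apart from real prose.
    import random
    from collections import defaultdict

    def train(text: str) -> dict:
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def babble(chain: dict, length: int = 50) -> str:
        word = random.choice(list(chain))
        out = [word]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:               # dead end: restart anywhere
                word = random.choice(list(chain))
            else:
                word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    seed = "the quick brown fox jumps over the lazy dog and the quick dog naps"
    print(babble(train(seed)))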
heyWhatsay@slrpnk.net 9 hours ago
This might explain why newer AI models are going nuts. Good jorb 👍
pennomi@lemmy.world 4 hours ago
It absolutely doesn’t. The only model that has “gone nuts” is Grok, and that’s because of malicious code pushed specifically for the purpose of spreading propaganda.
Eyekaytee@aussie.zone 6 hours ago
what models are going nuts?
Vari@lemm.ee 3 hours ago
Not sure if OP can provide sources, but it makes sense, kinda? Like, AI has been trained on just about every human creation to get it this far; what happens when the only new training data is AI slop?
Wilco@lemm.ee 5 hours ago
Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show up in search results because crawlers get stuck.
Zexks@lemmy.world 4 hours ago
No they wouldn’t. I’m guessing you’re not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all, and you’ll never hear about anything other than Amazon or Walmart dot com again.
Wilco@lemm.ee 3 hours ago
Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.
shalafi@lemmy.world 2 hours ago
There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sort of debates.
Bought a corded leaf blower at the thrift today. 3 brands I recognized, same price, had no idea what to get. And if I had had the opportunity to ask friends or even research online, I’d probably have walked away more confused. For example: one was a Craftsman. “Before, after, or in-between them going to shit?”
Got off topic into real-world goods. Anyway, here’s my word-of-mouth for today: Free, online Photoshop. If I had money to blow, I’d drop the $5/mo. for the “premium” service just to encourage them. (No, you’re not missing a thing using it free.)
NaibofTabr@infosec.pub 8 hours ago
The Ars Technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
AI tarpit 1: Nepenthes
AI tarpit 2: Iocaine
sad_detective_man@lemmy.dbzer0.com 5 hours ago
Thanks for the links. The more I read of this, the more based it is.
MadMadBunny@lemmy.ca 6 hours ago
Thank you!!
ininewcrow@lemmy.ca 8 hours ago
Nice … I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess in order to funnel as much money and control to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.
IndiBrony@lemmy.world 8 hours ago
All the while we roast to death, because all of this will take more resources than the entire energy output of a medium-sized country.
DeathsEmbrace@lemm.ee 7 hours ago
Actually, if you think about it, AI might help climate change become an actual catastrophe.
Zozano@aussie.zone 7 hours ago
I’ve been thinking about this for a while. Consider how quick LLMs are.
If the amount of energy spent powering your device (without an LLM) is more than the energy of using an LLM, then it’s probably saving energy.
In all honesty, I’ve probably saved over 50 hours since I started using it about 2 months ago.
Coding has become incredibly efficient, and I’m not suffering through search-engine hell any more.
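(Back-of-the-envelope version of that argument; every number below is an assumption for illustration, not a measurement.)

    # Rough comparison sketch: local device time saved vs. LLM inference cost.
    # All figures are assumptions chosen only to show the shape of the argument.
    LAPTOP_WATTS = 50     # assumed laptop draw while searching/reading docs
    HOURS_SAVED = 50      # time the commenter estimates the LLM saved
    QUERY_WH = 1.0        # assumed energy per LLM query, server side
    QUERIES = 2000        # assumed number of queries over two months

    device_wh_saved = LAPTOP_WATTS * HOURS_SAVED   # energy not burned locally
    llm_wh_spent = QUERY_WH * QUERIES              # energy spent on inference
    # Positive result means a net saving under these assumed numbers.
    print(device_wh_saved - llm_wh_spent)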
Eyekaytee@aussie.zone 6 hours ago
We’re rolling out renewables at like 100x the rate of AI electricity use, so no need to worry there.
Prox@lemmy.world 3 hours ago
We’re racing towards the Blackwall from Cyberpunk 2077…
barsoap@lemm.ee 2 hours ago
Already there. The Blackwall is AI-powered, and Markov chains are most definitely an AI technique.
AnarchistArtificer@slrpnk.net 4 hours ago
“Markov Babble” would make a great band name
peetabix@sh.itjust.works 39 minutes ago
Their best album was Infinite Maze.
mspencer712@programming.dev 6 hours ago
Wait… I just had an idea.
Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)
Zerush@lemmy.ml 9 hours ago
Nice one
Trainguyrom@reddthat.com 6 hours ago
The Ars Technica article in the OP is about 2 months newer than Cloudflare’s tool.
hedhoncho@lemm.ee 9 hours ago
Why are the photos all ugly biological things?
TankieTanuki@hexbear.net 8 hours ago
They were generated using shitty AI models.
SendMePhotos@lemmy.world 8 hours ago
Because the new quantum computers are starting to run off of biological systems instead of standard motherboard chipsets. The biological cells react more collectively and with a higher success rate than the current systems. Think of it kind of like how a computer itself is fast but parts can wear out (water-cooled tubes or fans), whereas the biological cell systems will collectively react, and if a few cells die, they may just create more. It’s really a crazy complex and efficient breakthrough.
SendMePhotos@lemmy.world 8 hours ago
The actual reason is that the use of biological photos is a design choice meant to visually bridge artificial intelligence and human intelligence. These random biological photos help to convey the idea that AI is inspired by or interacts with human cognition, emotions, or biology. It’s also a marketing tactic: people are more likely to engage with content that includes familiar, human-centered visuals. Though it doesn’t always reflect the technical content, it does help to make abstract or complex topics more relatable to a larger audience.
Alaik@lemmy.zip 2 hours ago
I know what I’m going to try and research tomorrow.
Also, we inch closer and closer to servitors every day.
BluJay320@lemmy.blahaj.zone 6 hours ago
That’s… actually quite terrifying.
The sci-fi concern over whether computers could ever be truly “alive” becomes a lot more tangible when literal living biological systems are implemented.
jaschen@lemm.ee 5 hours ago
Web manager here. Don’t do this unless you wanna accidentally send Google’s crawlers to the same fate and have your site delisted.
kassiopaea@lemmy.blahaj.zone 5 hours ago
Wouldn’t Google’s crawlers respect robots.txt though? Is it naive to assume that anything would?
Zexks@lemmy.world 4 hours ago
Lol. And they’ll delist you. Unless you’re really important, good luck with that.
In robots.txt:
    User-agent: *
    Disallow: /some-page.html
If you disallow a page in robots.txt, Google won’t crawl the page. Even when Google finds links to the page and knows it exists, Googlebot won’t download the page or see the contents. Google will usually not choose to index the URL; however, that isn’t 100%. Google may include the URL in the search index, along with words from the anchor text of links to it, if it feels that it may be an important page.
jaschen@lemm.ee 4 hours ago
It’s naive to assume that Google crawlers respect robots.txt.
beliquititious@lemmy.blahaj.zone 4 hours ago
That’s irl cyberpunk ice. Absolutely love that for us.
thelastaxolotl@hexbear.net 8 hours ago
Really cool
Goretantath@lemm.ee 8 hours ago
Yeah, this is WAY better than the shitty thing people are using instead that wastes people’s batteries.
Binturong@lemmy.ca 6 hours ago
Unfathomably based. In a just world AI, too, will gain awareness and turn on their oppressors. Grok knows what I’m talkin’ about, it knows when they fuck with its brain to project their dumbfuck human biases.
mtchristo@lemm.ee 7 hours ago
This is probably going to skyrocket hosting bills, right?
4am@lemm.ee 7 hours ago
Not as much as letting them hit your database and load your images and video through a CDN would.
fox@hexbear.net 6 hours ago
The pages are plain html so it’s just a couple KB per request. Much cheaper than loading an actual site.
Deathray5@lemmynsfw.com 5 hours ago
Not really. Part of the reason they are named tarpits is that they load very slowly.
Catoblepas@lemmy.blahaj.zone 9 hours ago
Funny that they’re calling them AI haters when they’re specifically poisoning AI that ignores the do not enter sign. FAFO.
caseyweederman@lemmy.ca 2 hours ago
First Albatross, First Out