A well-backed-as-usual piece by Benn Jordan on the basics of how misinformation farms work according to their own internal documentation, the goal of creating a post-truth world, and why a sizable percentage of Twitter users start talking about OpenAI’s terms of service every time OpenAI updates them.
I only have 2 followers, is it possible that I’m the bot? Wait, how would I know if I was? Is this the Matrix? Damnit! Maybe there’s a website to find out? BotOrNot? What even is a traffic light? Is a bus a car? What about a truck? Is a motorcycle a bike? A scooter? Ahhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhrgh….
ProdigalFrog@slrpnk.net 3 days ago
Absolutely incredible breakdown of the problem. In addition to Twitter, I strongly suspect Reddit is suffering a similar surge in bot accounts, which would explain how a sub I used to moderate there has some of the highest page visits it’s ever had, yet its actual user engagement hasn’t changed at all, if not gone down.
Corporate websites, which have a financial incentive to allow the bots, have become completely unusable. The difference in interaction on Lemmy is incredibly stark, which goes to show that the fediverse seems to be far more resilient against bots, since we can defederate from an instance that gets taken over, like cutting off an infected limb to stop the spread.
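For anyone curious what defederation amounts to in practice, here’s a minimal Python sketch of the idea: inbound activity from a blocked instance simply gets dropped. The names and the blocklist entry are made up for illustration; this isn’t Lemmy’s actual code.

```python
from urllib.parse import urlparse

# Instances the admins have defederated from (made-up example domain)
BLOCKED_INSTANCES = {"botfarm.example"}

def accept_activity(actor_url: str) -> bool:
    """Reject any federated activity whose actor lives on a blocked instance."""
    return urlparse(actor_url).hostname not in BLOCKED_INSTANCES

# accept_activity("https://botfarm.example/u/spammer")   -> False
# accept_activity("https://slrpnk.net/u/ProdigalFrog")   -> True
```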
sonori@beehaw.org 3 days ago
Hopefully, but I worry that no small part of it at the moment is just that we’re too small to be worth the bother. If the fediverse grows big enough to matter, I worry about what dedicated teams of people working it as a full-time job could do. One or two people can easily run a few dozen active accounts, which could dominate the conversation on an instance.
ProdigalFrog@slrpnk.net 3 days ago
Hmm… That could be an issue, you’re right.
If it does get that bad, we’d have to act more defensively by only federating with instances that have reviewed sign-ups and have received an endorsement on Fediseer.
That would make for a more isolated experience, but if that’s the only way to combat it, we’ll have to shift with the needs of the moment to keep our interactions mostly human and the moderation workload manageable.
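As a rough sketch of what that could look like, here’s a bit of Python that builds a federation allowlist from Fediseer endorsements. The endpoint path and response field names are assumptions, so check the current Fediseer API docs before relying on anything like this.

```python
# Hypothetical sketch: build an allowlist from Fediseer endorsements.
# The endpoint path and response fields below are assumptions.
import requests

FEDISEER = "https://fediseer.com/api/v1"

def endorsed_instances(min_endorsements: int = 1) -> list[str]:
    """Return domains of instances with at least `min_endorsements`
    endorsements from already-trusted instances (field names assumed)."""
    resp = requests.get(f"{FEDISEER}/whitelist", timeout=10)
    resp.raise_for_status()
    instances = resp.json().get("instances", [])
    return [
        inst["domain"]
        for inst in instances
        if inst.get("endorsements", 0) >= min_endorsements
    ]

if __name__ == "__main__":
    allowlist = endorsed_instances(min_endorsements=2)
    print(f"{len(allowlist)} instances meet the endorsement threshold")
```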
millie@beehaw.org 22 hours ago
We absolutely have this problem on Lemmy too. Even on Beehaw. Hell, there’s a particularly high-profile user here who posted constantly, focused squarely on spoiling potential Democrat votes, and utterly disappeared the moment staff told him to knock it off. All other engagement dropped off.
dan@upvote.au 3 days ago
It’s very easy to spin up a new instance, though, so I’m surprised there isn’t a lot of spam. AFAIK most servers still federate with any new server by default. That’s important to ensure equal treatment and that new servers aren’t disadvantaged, but it has its downsides.
ProdigalFrog@slrpnk.net 3 days ago
The Fediseer project from @db0@lemmy.dbzer0.com helps prevent bot farms from proliferating, as new servers require an endorsement from an already trusted instance to become ‘legit’. They can also be marked as untrustworthy, causing them to be defederated fairly quickly and limiting their reach.
We also have a MUCH higher moderator-to-user ratio compared to corpo sites, somewhere between 100 and 2,500 users per mod depending on the instance, vs. 250,000 users per mod on sites like Twitter, so we can more adequately spot and deal with spam on the network.
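To put those ratios in perspective, here’s a quick back-of-the-envelope calculation. The platform sizes below are assumptions picked for illustration; the users-per-mod ratios are the ones quoted above.

```python
# Back-of-the-envelope moderation math.

def mods_needed(users: int, users_per_mod: int) -> int:
    """Moderators required at a given user-to-moderator ratio (rounded up)."""
    return -(-users // users_per_mod)  # ceiling division

# A mid-sized instance at the loosest quoted fediverse ratio (2,500 users/mod):
print(mods_needed(10_000, 2_500))         # -> 4

# A Twitter-scale platform (250 million users assumed) at 250,000 users/mod:
print(mods_needed(250_000_000, 250_000))  # -> 1000
```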