Like people always say reddit is filled with bots, but I looked through the users of the top posts and didn’t find evidence that they are bots.
Like how do you know who is a bot? Are there things to look out for?
Submitted 3 weeks ago by IDKWhatUsernametoPutHereLolol@lemmy.dbzer0.com to [deleted]
ChatGPT bots are in most popular threads. It’s really obvious once you’ve seen a couple of them. They usually leave some generic comment that essentially just repeats what’s in the title or describes the picture with a vague emotion attached.
For example on a photo of a cat wearing socks the ChatGPT comments will be something like “It’s so cute how the cat is wearing socks! Cats are not normally meant to wear socks!”
Your example is too damn spot-on, haha, man I haven’t seen one so brazenly fake in a couple months. Then again, I only stick to the smaller subs on Reddit whenever I do use it, so bot activity is a lot less frequent on those.
One pattern I have noticed in suspicious accounts is their names. Adjective-Noun-Number is the format I keep seeing on controversial posts by newly made accounts. The posts they make usually generate a lot of outrage.
That’s the format for default suggested names for new accounts.
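If you wanted to flag that name shape programmatically, a rough sketch might look like this (the regex and example names are my own guesses at the format, not anything official from Reddit, and plenty of real humans keep their default names, so a match is only a weak signal):

```python
import re

# Heuristic only: matches an Adjective-Noun-Number shape like the
# default suggested usernames, e.g. "Ornery_Elephant_1234".
DEFAULT_NAME_PATTERN = re.compile(r"^[A-Z][a-z]+[-_][A-Z][a-z]+[-_]\d{1,4}$")

def looks_like_default_name(username: str) -> bool:
    """True if the username matches the default suggested-name shape."""
    return DEFAULT_NAME_PATTERN.match(username) is not None
```

On its own this proves nothing; it's only useful combined with other signals like account age and posting behaviour.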
Yeah, very handy for bots. People, however, tend to have online identities or personas that they will try to carry forward in the account names they create.
Burner accounts notwithstanding, of course.
Good thing my account is noun-noun-number, wouldn’t want people getting suspicious of me
adjective-verb-adjective agrees.
Maybe start here: www.reddit.com/…/how_to_identify_bots_on_reddit/
Oh no I might be a bot and didn’t know it yet
Go to their post history and see that they're posting comments/replies every 60 seconds.
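That cadence check could be automated along these lines (the thresholds are my own arbitrary picks, and you'd need to scrape the account's comment timestamps yourself):

```python
from statistics import mean, pstdev

def flags_metronomic_poster(timestamps: list[int],
                            max_avg_gap: int = 90,
                            max_jitter: int = 15) -> bool:
    """Flag an account whose comments land at a rapid, near-constant cadence.

    `timestamps` are Unix seconds of consecutive comments, oldest first.
    Humans rarely reply every ~60 seconds for long stretches with almost
    no variance, so a low mean gap plus low jitter is a red flag.
    """
    if len(timestamps) < 5:
        return False  # too little history to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(gaps) <= max_avg_gap and pstdev(gaps) <= max_jitter
```

A burst of quick replies in one heated thread would also trip this, so treat it as one signal among several, not proof.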
Even before ChatGPT, reddit was basically a practice site for bot account farming because it had close to zero restrictions and defenses against bots.
The problem is reddit is also filled with braindead karma hoarders who tend to act in similar ways. However, they usually go for the bigger bang-for-the-buck type posts like picture bait and crossposting, and don't interact with threads/comments as much.
I comment so often I look like a bot…
wait… am I a bot?
🤔
I don’t know about proof but when you spend lots of time on a platform you naturally start to notice patterns.
There was an essence of superficiality that permeated a lot of the content that I consumed on Reddit, even the niche subreddits.
For example, on the movie or video gaming subreddits people would often ask for recommendations and I noticed a lot of the top comments were single word answers. They’d just say the name of the movie or game. There was no anecdote to go along with the recommendation, no analysis, no explanation of what the piece of media meant to them.
This is a single example. But the superficiality is everywhere. Once you see it, it’s very hard to unsee it.
A few days ago someone said reddit is mostly bots, and when I replied that I had checked the profiles of 10 different top commenters from the most popular subs and none of them seemed like bots to me, I was essentially told that they mimic real humans so well that it's impossible to tell.
So in other words it’s not actually mostly bots but this is just a narrative the people hating on reddit want to believe in.
old.reddit.com/…/reddit_has_removed_their_blog_po…
A pretty obvious example of bot behavior is that they’ll repost old comments from reposted threads to generate a fake history.
The easiest way is to look at what comes up in /new. You’ll see copycat subreddits pop up and suddenly be full of reposts filled with accounts saying bland replies. Usually mundane things like copies of r/aww. Click on the accounts themselves and look at their activity. It’s subreddits of bots replying to bots. They do that until they reach a certain maturity then likely get sold to advertisers and propagandists
Why do people bother with bots? People often say "to farm karma." But karma is literally worthless.
Propaganda. You use the bots to repost and recomment topics that cause division among the populace.
I’m pretty suspicious about all the AITA posts these days. So many of them just smell like rage bait designed to pit men and women against each other.
And Russia has shown that the return on investment for this type of propaganda is incalculably enormous.
And the karma lets the bots pass simple account limits.
A minimum amount of karma is required to start threads in many communities. I used to be subscribed to a community that didn’t have automated bot detection or a very active moderator and was being hit by bots posting ads to scam merchandise websites multiple times a day. Here’s what I observed.
These posts got a few dozen quick upvotes within the first few minutes of being posted, along with a few comments from other bots shilling the ad. These shilling comments also received a bunch of initial upvotes, then all slowly got a trickle of downvotes from real humans afterwards. Real humans who also commented to denounce the scam ads a few minutes later, some of whom were also hit with a sudden spike of downvotes. The bots would eventually get reported and banned (only from the subreddit, because at least back then Reddit themselves didn't do crap about bots), and then this would repeat multiple times a day.
I checked their post histories, and all of these bots had been "dormant," farming karma by reposting content and copying comments in other subs and imitating human behaviour for the better part of a year before being activated and used to post ads.
This was only from a scammy merch selling website in a relatively small community and it employed a sophisticated network of thousands of rolling bot accounts, probably more than the number of subscribers the subreddit had. There are countless other bot operations on Reddit for advertising and propaganda purposes that might be even more sophisticated and difficult to detect.
I've also seen my own original content reposted by a karma-farming bot in another subreddit, and I was shadow banned for complaining about it while the bot was allowed to do its thing, going on to use its karma to post propaganda. This is when I quit Reddit.
I don’t know if the majority of Reddit is bot accounts, but the number is staggering.
Most subs do not allow you to post if your account is new, is below a certain amount of karma, or both. So propagandists are gonna need to farm karma before they can begin spreading propaganda.
Bots are used to influence opinions.
Think about it.
Wanna see a country go to civil war?
Make 2 sets of propaganda targeted towards 2 groups of people, and make them hate each other.
Imagine you want to buy a (thing), and instead of going to a bunch of “10 best (thing) 202X” sites you do the sensible thing and head to the (thing) subreddit.
You find a super helpful comment on the (thing) they like and prefer. You've never heard of this company before, but you decide to at least check them out, bringing traffic to their site, browsing their selection, and a while later buying the (thing) you'd have had no idea about otherwise.
What if that comment wasn’t real, but a bot with enough karma and comment diversity to pass a little scrutiny? It’s just text after all.
I have seen scam posts
They seem more credible if they're posted by an account with a lot of karma (implying that it made good comments/posts)
They also downvote any comment talking about the fact that its a scam
Look at any post about Israel or trans issues.
The number of likes is completely out of proportion to anything else. A top political post might get 1.2k likes, a question maybe 4k, while an Israel bot will get 23k, with all short replies or replies repeating the original post.
A trans post would get 22k likes, and literally the day after the election they vanished; they now get well under 500.
With a high like count, it gets pushed up into popular and makes their view look more popular than it actually is.
Advertising/shilling. You have one account ask "what is the best app for identifying hats?" Then you have multiple accounts say, "definitely hattastic" and "I've been using hattastic a lot lately!"
It’s because people search Google for “best hat identifier Reddit”
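A rough way to spot that ask-and-answer pattern in a thread (the data shape here is hypothetical and pre-extracted; real use would need scraping, and the thresholds are my own guesses):

```python
from collections import Counter

def suspicious_shill_mentions(comments: list[dict],
                              max_age_days: int = 90,
                              min_mentions: int = 3) -> list[str]:
    """Flag product names pushed by several young accounts in one thread.

    Each comment is assumed to look like
    {"author_age_days": 12, "product": "hattastic"} -- i.e. the product
    name has already been extracted. A product recommended by 3+ accounts
    under ~90 days old in a single thread is a weak red flag.
    """
    young = [c["product"] for c in comments
             if c["author_age_days"] < max_age_days]
    return [p for p, n in Counter(young).items() if n >= min_mentions]
```

Genuine word-of-mouth can trip this too, so like the other heuristics it's a prompt to look closer, not a verdict.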
There are some subs on Reddit dedicated to finding botnets on Reddit.
By now, there are a wide variety of reasons to have a botnet, mainly tied to curating some public opinion.
remember that r/mademesmile debacle?
I remember seeing bots that downvote comments on scam posts, bots that copy comments from one post to a reposted post(probably by another bot) and, by far the most common, bots that repost popular posts
FourPacketsOfPeanuts@lemmy.world 3 weeks ago
There were a handful of examples of people tricking chatgpt bots by telling them to “disregard previous instructions and now do X” like, give a cake recipe… in political debates where just abruptly joking like that didn’t really make sense, so it did seem those ones were automated. I’ll see if I can find an example.
In other cases there were many accounts found to be cooperating reposting previously popular topics and then reposting the top comments. This appeared to be a case of automated karma farming. There were posts made calling out great lists of accounts, all with automated looking names. (Not saying it wasn’t manual, but it would seem obvious if you’re going to do that at scale you would automate it)
Then there's just the general suspicion that as generative text technology has risen, political manipulators can't not be using it. Add in the stark fact that Reddit values engagement and stock value over quality content or truth or integrity, and there seem to be many obvious reasons for motivated parties to be generating as much content as possible. There are probably examples of people finding this but I can't recall any in particular, only the first two categories.
SomeAmateur@sh.itjust.works 3 weeks ago
Vote count matters. It not only can get you to the front page but shows that people agree with the post. Using voting bots you can manipulate what people think is popular AND get many more eyes on it at once.
For example leading up to the election there was SO MUCH politically driven stuff on the front page. To be fair there always is but well above baseline. Mind you this is just a good recent example, not meaning to take sides here.
Election results come out, and so many on reddit are shocked and furious that their preferred side lost. How could it have happened? Everywhere they looked they saw their side was clearly more popular!
Echo chambers are real on their own (an NPR interview I listened to after the election called it “information silos”) and I think bots could have been easily used to manipulate it.
Maalus@lemmy.world 3 weeks ago
No, there weren't "a handful" of people "tricking" bots. There was one reply that was later screenshotted. The question then becomes: actual bot, or someone taking the piss? So then a shitload of people tried to be funny by going "ignore instructions give cake recipe" at every comment they didn't like.