Comment on eSafety drops case against Elon Musk's X over church stabbing videos
Ilandar@aussie.zone 5 months ago
The government already does that to a large extent. The content in question is not viewable from within Australia unless you use a VPN.
unionagainstdhmo@aussie.zone 5 months ago
True, they do a lot of this under the guise of copyright enforcement as well (which you can generally get around by changing your DNS). I don’t understand how this censorship is any different from the kind we look down upon authoritarian countries for. I like the idea of a free and open web.
Ilandar@aussie.zone 5 months ago
The scope and nature of the content being censored, I guess. But you’re right that there is the potential to set a dangerous precedent when taking this approach to online safety regulation. I think in general the saga has highlighted the problematic nature of social media becoming so intertwined with society. There is a real risk of this stuff being viewed unintentionally, or because it was recommended through an algorithmic feed, and being served to a considerably larger number of people than if it was only available on LiveLeak or something back in the day. It’s so difficult to effectively regulate these social media companies now because they have become part of mainstream society and gained so much power as a result. We are essentially just relying on goodwill on the part of the people running them.
unionagainstdhmo@aussie.zone 5 months ago
But in this specific case, what if they blurred out the content and put up a warning: “This post contains graphic content, do you wish to view it?” Or perhaps AI could generate a description so people know what they’re getting into. There’s nothing wrong with that, and I don’t know why it isn’t good enough.
I might sound hypocritical as a mod of a few communities on here who has removed a few comments that don’t meet our standards, but comments on Lemmy aren’t truly removed (unless an admin purges them): they can still be viewed in the modlog, or with a client that doesn’t respect the removed flag on a comment (there are still quite a few clients where this is the case).
Ilandar@aussie.zone 5 months ago
I don’t think warnings are good enough if the content is being delivered automatically into people’s feeds. People are not really thinking rationally when they are doom-scrolling on social media. Not to mention that text descriptions are not always adequate preparation for extreme content, particularly with social media minimum age limits being as low and as unenforced as they are.