Comment on Social media: Why algorithms show violence to boys
lvxferre@mander.xyz 3 months ago
It gets worse when you remember that there’s no clean dividing line between harmful and healthy content. Some content is always harmful, and some is healthy by default, but there’s a huge gradient of content that needs to be consumed in small amounts - consuming none of it leads to alienation, while consuming too much leads to a cruel worldview.
This is doubly true for kids and adolescents. They need to know about the world, and that includes the nasty bits; but their worldviews are so malleable that, if all you show them is nasty bits, they normalise those bits inside their heads.
It’s all about temperance. And yet temperance is exactly the opposite of what those self-reinforcing algorithms do. If you engage too much with content showing nasty shit, the algo won’t show you cats being derps to “balance things out”. No, it’ll show you even more nasty shit.
It gets worse due to the profiling mentioned in the text. Splitting people into groups to dictate what they’re supposed to see breeds extremism.
In light of the above, I think both Kaung and Cai are missing the point.
Kaung believes that children and teens would be better off if they stopped using smartphones; sorry, but that’s stupid - it’s throwing the baby out with the dirty bathtub water.
Cai, on the other hand, is proposing nothing but a band-aid. We don’t need companies to listen to us about what we should be seeing; we need them to stop deciding what we should be seeing altogether.
ericjmorey@beehaw.org 3 months ago
It’s nice to see that others get it. Unfortunately, neither of us have any immediate influence on the largest social media platforms.
lvxferre@mander.xyz 3 months ago
To make it worse, decision-makers - regardless of country - are typically old and clueless about “this computer stuff”. As such, they literally don’t see the problem.