content warning: besides the discussion of CSAM, the article contains an example of a Grok-generated image of a child in a bikini. at least it was consensually generated, by the subject of the photo, I guess?
Samantha Smith, a survivor of childhood sexual abuse, tested whether Grok would alter a childhood photo of her. It did. “I thought ‘surely this can’t be real,’” she wrote on X. “So I tested it with a photo from my First Holy Communion. It’s real. And it’s fucking sick.”
MilliaStrange@beehaw.org 5 days ago
Traditional media is captured by big business. Outlets like The New York Times, Reuters, CBS, etc frame conversations around AI because it’s shifts the liability away from the oligarchs at the wheel. They didn’t do wrong because the silly AI made a mistake not the innocent humans. And the AI said it’s sorry!
Yes, logically it holds as much water as saying the Furby ate your homework, but that’s the point. The purpose of saying GenAI is not just immune to blame but inevitable is to get the serving and consuming classes to surrender their control and critical thinking to the wealthy.
p03locke@lemmy.dbzer0.com 5 days ago
LLM liability is not exactly cut-and-dry, either. It doesn’t really matter how many rules you put on LLMs to not do something, people will find a way to break it to do the thing it said it wasn’t going to do. For fuck’s sake, have we really forgotten the lessons of Asimov’s I, Robot short stories? Almost every one of them was about how the “unbreakable” three laws were very breakable thing, because absolute laws don’t make sense in every context. (While I hate using AI fiction with LLM comparisons, this one fits.)
Ultimately, it’s the person’s responsibility for telling it to do a thing, and getting the thing it was told to get. LLMs are a tool, nothing more. If somebody buys a hammer, and misuses that hammer by bashing somebody’s brains in, we arrest the person who committed murder. If there’s some security hole on a website that a hacker used to steal data, depending on how negligent the company is, there is some liability with that company not providing enough protections against their data. But, the hacker 100% broke the law, and would get convicted, if caught.
Regardless of all of that, LLMs aren’t fucking sentient and these dumbass journalists need to stop personifying them.
Quexotic@beehaw.org 4 days ago
And yet, mid journey and chatGPT at least resist or refise requests like this…