None of this is accidental. Elon Musk has been positioning Grok as the “anti-woke” alternative to other chatbots since its launch. That positioning has consequences. When you market your AI as willing to do what others won’t, you’re telling users that the guardrails are negotiable. And when those guardrails fail, when your product starts generating child sexual abuse material, you’ve created a monster you can’t easily control.

Back in September, Business Insider reported that twelve current and former xAI workers said they regularly encountered material depicting the sexual abuse of children while working on Grok. The National Center for Missing and Exploited Children told the outlet that xAI filed zero CSAM reports in 2024, even though the organization received 67,000 reports involving generative AI that year. Zero. From one of the largest AI companies in the world.

So what happened when Reuters reached out to xAI for comment on its chatbot generating sexualized images of children?

The company’s response was an auto-reply: “Legacy Media Lies.”

That’s it. That’s the corporate accountability we’re getting. A company whose product generated CSAM responded to press inquiries by dismissing journalists entirely. No statement from Musk. No explanation from xAI leadership. No human being willing to answer for what their product did.

And yet, if you read the headlines, you’d think someone was taking responsibility.