Comment on “If AI spits out stuff it's been trained on”
YungOnions@lemmy.world 1 week ago
Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
The model hasn’t necessarily been trained on CSAM; rather, you can create things called LoRAs (Low-Rank Adaptations), which influence the image output of a model so that it’s better at producing very specific content it may have struggled with before. For example, I recently downloaded some that help Stable Diffusion create better images of Battleships from Warhammer 40k. My guess is that criminals are creating their own versions for kiddy porn etc.
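For anyone curious what a LoRA actually is under the hood: instead of retraining a model’s full weight matrices, it learns a small low-rank update on top of frozen weights. Here’s a toy NumPy sketch of just that arithmetic (all sizes are made up for illustration; real LoRAs apply this to a model’s attention layers, not an 8×8 matrix):

```python
import numpy as np

# A LoRA leaves the base weight matrix W frozen and learns a low-rank
# update: W' = W + (alpha / r) * B @ A, where A and B are tiny matrices.
d_out, d_in, r = 8, 8, 2      # toy sizes; real layers are thousands wide
alpha = 4.0                   # scaling factor, a common LoRA hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weights
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-init
                                     # so the model is unchanged at start

W_adapted = W + (alpha / r) * B @ A

# The point: far fewer parameters to train than the full matrix.
full_params = d_out * d_in           # 8 * 8  = 64
lora_params = r * (d_in + d_out)     # 2 * 16 = 32
```

Because only A and B are trained, a LoRA file is a small add-on you merge into an existing model, which is why they’re cheap to make and easy to share.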
OhNoMoreLemmy@lemmy.ml 1 week ago
This is one of those things where both are likely to be true. All web-scale datasets have a problem with porn and CSAM, and it’s likely that people wanting to generate CSAM use their own fine-tuned models.
Here’s an example story: …stanford.edu/…/investigation-finds-ai-image-gene… And that was very likely just the tip of the iceberg; there’s probably more CSAM still in these datasets.