Comment on 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
teawrecks@sopuli.xyz 5 months ago
So this could go one of two ways, I think:
1. The “no AI” seal is self-ascribed on the honor system, and over time enough studios lie about it, or walk the line closely enough, that it loses all meaning and people disregard it entirely.
2. Getting such a seal requires third-party auditing, further increasing the cost of running a studio relative to the competition, on top of not leveraging AI, resulting in those studios going out of business.
lvxferre@mander.xyz 5 months ago
3. If you lie about it and get caught, people will correctly call you a liar, ridicule you, and stop trusting you. Given that trust is essential for content creators, you’re spelling your own doom. And if you don’t get caught, you probably aren’t part of the problem anyway.
teawrecks@sopuli.xyz 5 months ago
I think the first half of yours is the same as my first point. And I think a lot of artists aren’t against AI that produces worse art than theirs; they’re against AI art that was generated using stolen art. They wouldn’t be part of the problem if they could honestly say they trained only on ethically licensed content, or their own.
CanadaPlus@lemmy.sdf.org 5 months ago
I was about to disagree, but that’s actually really interesting. Could you expand on that?
lvxferre@mander.xyz 5 months ago
Do you mind if I address this comment alongside your other reply? Both are directly connected.
If you want to lie without getting caught, your public submission should have neither the hallucinations nor the stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.
In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” and increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs instead of one decent one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree) - not proofreading their output.
Regarding code, from your other comment: note that some Linux and *BSD distributions, like Gentoo and NetBSD, have banned AI submissions. I believe it’s the same deal as with news or art.
CanadaPlus@lemmy.sdf.org 5 months ago
Yes, sorry, I didn’t realise I was replying to the same user twice.
Exactly. I guess I’m conditioned to expect “AI is smoke and mirrors” type comments, and that’s not true. They’re genuinely quite impressive and can make intuitive leaps they weren’t directly trained for. What they’re not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that’s the only way we know how to train them.
TIL. They’re going to have trouble identifying rulebreakers if contributors use the tool correctly the way we’ve discussed, though.