Comment on 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
lvxferre@mander.xyz 5 months ago

3. If you lie about it and get caught, people will correctly call you a liar, ridicule you, and lose trust in you. Given that trust is essential for content creators, you’re spelling your own doom. And if you don’t get caught, you probably aren’t part of the problem anyway.
And if you find a way to lie without getting caught, you aren’t part of the problem anyway.
I was about to disagree, but that’s actually really interesting. Could you expand on that?
Do you mind if I address this comment alongside your other reply? Both are directly connected.
I was about to disagree, but that’s actually really interesting. Could you expand on that?
If you want to lie without getting caught, your public submission should have neither the hallucinations nor the stylistic issues associated with “made by AI”. To do that, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.
In other words, to lie without getting caught, you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” and increase their productivity by 50%; it was people increasing their output by 900% and submitting ten really shitty pics or paragraphs instead of one decent one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree): not proof-reading their output.
Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.
Yes, sorry, I didn’t realise I was replying to the same user twice.
The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one.
Exactly. I guess I’m conditioned to expect “AI is smoke and mirrors” type comments, but that’s not true. They’re genuinely quite impressive and can make intuitive leaps they weren’t directly trained for. What they’re not is aligned; they just want to create human-like output, regardless of truth, greater context, or morality, because that’s the only way we know how to train them.
Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.
TIL. They’re going to have trouble identifying rulebreakers if contributors use the tool correctly, the way we’ve discussed, though.
teawrecks@sopuli.xyz 5 months ago
I think the first half of yours is the same as my first point, and I think a lot of artists aren’t against AI art that’s worse than theirs; they’re against AI art that was generated using stolen art. AI users wouldn’t be part of the problem if they could honestly say their model was trained using only ethically licensed content or their own.