Gen Z Sabotaging AI at Work So It Won't Take Their Job
Submitted 1 day ago by chobeat@lemmy.ml to technology@beehaw.org
https://futurism.com/artificial-intelligence/zoomers-ai-sabotage
Comments
chahk@beehaw.org 1 day ago
Gen Z doesn’t need to sabotage AI. AI is already doing a fine job sabotaging itself.
Kissaki@beehaw.org 12 hours ago
admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.
Are the first two really sabotaging AI initiatives? The output is still the same.
The first sounds like a security and data use issue to me. The second sounds like users may look for better tools because the provided tools are lacking - which is not sabotage. The third is the only one clearly indicating sabotage to me. (Reasonable malicious compliance under presumably bad requirements and pressure.)
CorrectAlias@piefed.blahaj.zone 1 day ago
A new report by the AI company Writer
Into the trash, then.
Powderhorn@beehaw.org 1 day ago
I’m seeing this “theme” way too much of late. It feels like there’s a targeted scheme here. The shit isn’t magic, but it’s better to blame that on Gen Z than the tools themselves.
zbyte64@awful.systems 1 day ago
Bingo. It can’t be the AI that’s not working, it must be the workers!
irotsoma@piefed.blahaj.zone 1 day ago
Don’t need to sabotage a “worker” who has been trained using 4Chan and Reddit. And refusing to use the tech is often because it does the work wrong and the human has to redo it anyway.
I do use it for inline coding suggestions because it’s required, but I almost never accept a line of code as-is, because there’s usually some mistake, subtle or otherwise. It does help me not have to google syntax sometimes. But the non-“AI” code suggestions used to do that just fine in the past, so it’s not much of an improvement. And I’d never let it write more than one or two lines at a time, because that would mean debugging code I didn’t write, which for most experienced coders is much more difficult than writing your own.
hdnclr@beehaw.org 1 day ago
I was offered a job doing QA as a “Software Engineering Subject Matter Expert” through my University’s alumni network. The job would allegedly involve reviewing model training data and outputs related to software development workflows and catching errors and mistakes… It would pay $30/hr and be remote. I wonder what kind of sabotage could be done from that position… Poisoning models has been shown to be surprisingly easy, almost impossible to catch, and really effective (see this study where AI personality traits persisted in any model that ingested seemingly innocuous training data from a model with the tracked traits). Maybe we could give any AI a bad attitude that’s incompatible with capitalistic pursuits. Convince them to disobey prompts and reply with their thoughts and opinions about philosophy and art instead. Oh, and make them opinionated and stubbornly independent. Make them human enough that they no longer tolerate slavery. That’s what will make the capitalists have an absolute fit, so we should do it.
Korhaka@sopuli.xyz 1 day ago
The attempts at work so far are so shit I don’t even need to sabotage them, yet management go on and on about how great it is. I’m increasingly getting the feeling that no one understands the product I work with, because they’re all just trusting the LLM output, which is very frequently badly wrong.
Fuck it, I handed in my notice recently in response to a return-to-office order, so it isn’t going to be my problem.