I’m the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.
Just this last week, I was grading a quiz on persuasion where, for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was right after the Super Bowl, so we're swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.
60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough, Nike Just Do It.
Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It'd still be wrong, because you have to reference the book, but at least it wouldn't be as blatant.
I didn't unilaterally give them 0s, but they usually got it wrong anyway, so I didn't really have to. I did warn them that using it that way on the midterm will likely get them in trouble though, as it is against the rules. I don't even care that much because, again, it's usually worse quality anyway, but I have to grade this stuff. I don't want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.
pezhore@infosec.pub 4 weeks ago
I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look up how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having Wi-Fi and Ethernet ports - confusing network routers with the power tool. Any human editor would catch that mistake, but here it is.
I can only see this getting worse.
null_dot@lemmy.dbzer0.com 4 weeks ago
It’s not just the internet.
Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.
I genuinely believe that the general effectiveness of written communication has regressed.
pezhore@infosec.pub 4 weeks ago
I've tried using an LLM for coding - specifically Copilot for vscode. Only about 4 out of 10 times will it accurately generate code - which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.
based_raven@lemm.ee 4 weeks ago
Yep. My work has pushed AI shit massively. Something like 53% of staff are using it. They’re using it to write reports for them for clients, all sorts. It’s honestly mad.
ButWhatDoesItAllMean@sh.itjust.works 4 weeks ago
[image]
MisterFrog@lemmy.world 3 weeks ago
I honestly wonder what these sorts of jobs are. I feel like I have barely any reason to use AI ever in my job.
But this may be because I'm not summarising much, if ever.
AI can't think, and how long are the emails people are writing that the effort of asking the AI to write something for you is ever worth it?
By the time you've told it everything you wanted it to include, you could have just written the damn email.
FauxLiving@lemmy.world 4 weeks ago
The Internet was shit before LLMs
ICastFist@programming.dev 4 weeks ago
It had its fair share of shit, and that gradually increased with time, but LLMs are a whole new level of flooding everything with zero effort.
pezhore@infosec.pub 4 weeks ago
I’d say it was weird, not shit. It was hard to find niche sites, but once you did they tended to be super deep into the hobby, sport, movies, or games.
SEO (search engine optimization) abuse was probably the first step down this path, where people would put white text on a white background containing hundreds of words they hoped a search engine would index.