There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”
I propose a fourth: AI is about as good as it's going to get, which is neither as good nor as bad as its fans and haters believe. And you're still not going to get an A on your paper.
You see, now that people have been using AI for anything and everything, they're beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.
My take is that LLMs can speed up some work, like paraphrasing, but all the time saved gets diverted to verifying the output.
pglpm@lemmy.ca 3 weeks ago
They can be useful when used “in negative”. In a physics course at an institution near me, students are asked to check whether the answer an LLM/GPT gives to a physics question is correct or not, and why.
On the one hand, this puts the students with their backs against the wall, so to speak, because clearly they can't use the same or another LLM/GPT to answer, or they'd be going in circles.
But on the other hand, they actually feel empowered when they catch the LLM/GPT's errors; they really get a kick out of that :)
Megaman_EXE@beehaw.org 3 weeks ago
I've heard of this kind of AI usage a few times now, and it seems so smart. You're learning by teaching, but you're also being trained in AI literacy and the pitfalls of AI. It encourages critical thinking and genuine learning at the same time.
Powderhorn@beehaw.org 2 weeks ago
In addition to being a fucking brilliant idea for that course, this should be adapted more widely. I suspect, having once been young myself, that you're going to get far more buy-in from showing students how often it's wrong than from telling them not to use it.