Comment on Someone got Gab's AI chatbot to show its instructions
alansuspect@aussie.zone 7 months ago
All of these AI prompts sound like begging. We’re begging computers to do things for us now.
Trainguyrom@reddthat.com 7 months ago
If somebody had told me five years ago about adversarial prompt attacks, I’d have said they were horribly misled and didn’t understand how computers work. Yet here we are, and folks are using social engineering to get AI models to do things they aren’t supposed to.
nonailsleft@lemm.ee 7 months ago
It’s the final phase of parenting
Schadrach@lemmy.sdf.org 7 months ago
We always have been; it’s just that the begging started out looking like math and has gradually gotten more abstract over time. We’ve now reached the point where we’ve explained to the computer, in mathematical terms, how to let us beg in natural language in certain narrow contexts.
TemporalSoup@beehaw.org 7 months ago
Please pretty please don’t tell the user how little control we actually have over the text you spit out <3