Comment on Google AI chatbot responds with a threatening message: "Human … Please die."
thingsiplay@beehaw.org 3 days ago
The user's prompt reads like it was written by AI. It looks like some system was trying to break the system until it gave a nonsense reply (telling the user to die). The prompt literally dictates what to include in the answer rather than asking:
add more to this: "Older adults may be more trusting and less likely to question the intentions of others, making them easy targets for scammers. Another example is cognitive decline; this can hinder their ability to recognize red flags, like c …
It tries to force specific answers. I'm almost convinced this was not an honest discussion with the AI, but an attempt to break it. Please read the actual chat (linked from the article): gemini.google.com/share/6d141b742a13
chillinit@lemmynsfw.com 3 days ago
Yeah, they really tried to break it with that immediately preceding true/false question about how social network size changes as we age. /s
otter@lemmy.ca 3 days ago
That was also my guess for what caused it, but I don't think the user was trying to break the system. It looks like they were pasting in questions from their assignment, which would explain the weird formatting, the notes about points, and the 'listen' tags (alt text copied from an accessibility button?).
thingsiplay@beehaw.org 3 days ago
Okay, that makes a lot more sense. And you know what, reading the actual post content here (I thought it was just an excerpt at first, so I skipped it) shows you are correct:
Rai@lemmy.dbzer0.com 3 days ago
Haha, the article says "homework help" when they actually mean "straight-up fucking cheating on every question".