According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.
Producing inaccurate technical advice, in a confident tone, at scale.

If it were an employee, it would get a formal reprimand, then be demoted or fired if it kept doing it.
GregorGizeh@lemmy.zip 1 day ago
“Rogue AI” as if it’s some sentient evil thing, when it’s just an LLM with too many permissions… This timeline is so dystopian, but simultaneously incredibly lame. I hate it.
Hirom@beehaw.org 1 day ago
It shows an LLM can do significant harm without the capabilities of an AGI.