Someone been playing civ again?
AIs can’t stop recommending nuclear strikes in war game simulations | OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
Submitted 3 weeks ago by remington@beehaw.org to technology@beehaw.org
Comments
dylanmorgan@slrpnk.net 3 weeks ago
The training data contains writing that downplays the negative impact of the Hiroshima and Nagasaki bombings, probably along with a healthy dose of writing from people like Douglas MacArthur and Curtis “Bombs Away” LeMay, evangelists of the tactical use of nuclear weapons and the belief that sufficient bombing would “break” the will of an enemy (despite zero examples of that happening until the use of nukes on Japan).
ZC3rr0r@piefed.ca 3 weeks ago
Terror bombings don’t work, full stop. Even the nuking of Japan didn’t result in the populace giving up, and there’s ample evidence to suggest that it was at the very least the combined threat of the Soviets shifting focus to the eastern theatre along with the nukes that led Japanese high command to conclude their losses were unsustainable. And even that wasn’t without internal controversy and disagreement.
dylanmorgan@slrpnk.net 3 weeks ago
There’s at least still debate over whether the nukes significantly impacted the diplomatic process, unlike the firebombing of Tokyo, which killed more people and didn’t move the needle on Japan’s commitment to the war at all.
Battle_Masker@lemmy.blahaj.zone 3 weeks ago
We’ve had how many movies about why this isn’t a good idea? The only one I can think of rn is WarGames.
TehPers@beehaw.org 3 weeks ago
Terminator might be a little more popular.
It seems the only way to win is not to play.
Malgas@beehaw.org 3 weeks ago
Another that comes to mind is Colossus…well, not this exact scenario, but relevant.
Megaman_EXE@beehaw.org 3 weeks ago
I think the big issue is the personification of AI. They’re not people and don’t have the same capacity for reasoning as a person.
AI should really just be used for specific use cases, like a tool. A hammer is just for hammering.
I guess the main issue is that, at that point, AI should be used more like a dedicated computer application than a Swiss army knife. So the way it’s commonly used, in its chatbot format, isn’t necessarily a good or safe way to implement and use it.
KeenFlame@feddit.nu 3 weeks ago
The ones that do it most are the haters: the mansplaining “stochastic parrot” bros with a surface-level understanding, pushing down their closeted ML-curious thoughts by fantasising instead of going and reading about the actual operation and underlying formulas. I know this because if they had, they would be doing what every researcher in the field is doing: trying to figure out how the hell these things even work. What made spatial reasoning artifacts appear under backpropagation? Dissect this stuff: it does addition by estimating and then using remainders, but why does that work? Why can it solve novel riddles? Why can we not control it? Why does it go mad? I urge you to learn.
HubertManne@piefed.social 3 weeks ago
It’s the ultimate trump card! Why wouldn’t you use it? If you don’t do stupid, they will stupid you.
CanadaPlus@lemmy.sdf.org 3 weeks ago
Well that’s weird. I wonder why they have that specific bias?
t3rmit3@beehaw.org 3 weeks ago
Would not be surprised if it’s trained on the thousands of policy-debate “nuclear war terminal impact” arguments on OpenEv.
Assassassin@lemmy.dbzer0.com 3 weeks ago
Really happy that the shitheads in charge right now keep trying to shoehorn AI into military ops.
ZC3rr0r@piefed.ca 3 weeks ago
I can see absolutely no way this could not go wrong.
XLE@piefed.social 3 weeks ago
Just hooked a Mac Mini with Clotbot up to the local missile silo. Out to get a matcha at maralago. I told OpenCock to WhatsApp me asking for permission before nuking any towns.