Comment on Don’t believe the hype: AGI is far from inevitable
Aggravationstation@feddit.uk 1 month ago
Possible or not, I don’t think we’ll get to the point of AGI. I’m pretty sure at some point someone will do something monumentally stupid with AI that will wipe out humanity.
drwho@beehaw.org 1 month ago
Like wrecking the biosphere in its pursuit.
Aggravationstation@feddit.uk 1 month ago
Maybe. But I have a feeling it’ll be a single dumb mistake that’ll make someone say “ah, shit” just before we’re wiped out.
When the Soviets trained anti-tank dogs in WW2, they did so on stationary tanks to save fuel: “Their deployment revealed some serious problems… In the field, the dogs refused to dive under moving tanks.” https://en.m.wikipedia.org/wiki/Anti-tank_dog
History is littered with these kinds of mistakes. It would only take one military AI with access to autonomous weapons having a similar issue in its training data to potentially kill us all.
GiveMemes@jlai.lu 1 month ago
Why in God’s name would we put weapons that pose a legitimate threat to the whole of humanity under the control of an AI? I just don’t think this one sounds plausible.