“Lie”
It’s a theory. A plausible but unlikely one. Just like it was unlikely that the first atomic bomb would set the atmosphere on fire - but possible. I think that events with consequences of this magnitude deserve some consideration. I doubt humans are anywhere even near the far end of the intelligence spectrum, and only a human is stupid enough to think that something that is would not pose any potential danger to us.
lvxferre@mander.xyz 7 months ago
Interesting video. At the core it can be summed up as:
Andromxda@lemmy.dbzer0.com 7 months ago
I think that’s a good idea in general, not just because of AI
oDDmON@lemmy.world 7 months ago
Thanks for the TL;DR!
HopeOfTheGunblade@kbin.social 7 months ago
I've been concerned about AI as an x-risk for years, before big tech had a word to say on the matter. It is both possible for it to be a threat, and for large companies to be trying to take advantage of that.
lvxferre@mander.xyz 7 months ago
Those concerns mostly apply to artificial general intelligence, or “AGI”. What’s being developed is another can of worms entirely: a bunch of generative models. They’re far from intelligent; the concerns associated with them are 1) energy use and 2) human misuse, not that they’re going to go rogue.