Comment on After OpenAI's blowup, it seems pretty clear that 'AI safety' isn't a real thing

FunderPants@lemmy.ca 7 months ago

If the leaks/rumours/reports are true, and project Q* really is ahead in terms of AGI, then the world is in a really … I don’t know, interesting? Scary? Exciting place?

OpenAI uses an interesting definition of AGI. Rather than thinking about cognition, they say AGI occurs when AI is superior to humans at the majority of economically viable tasks. This definition is, to me, more frightening than the cognition version because it lays bare the intention behind developing AGI: to simply outcompete most people. In a world with AGI under this definition, the limiting factor to replacing most of the human workforce becomes access to compute and associated resources (power, chips, etc.).

So what are people going to do? In a world of AGI as defined here, most people do not have economically viable skills, which means no way to make money at all. Do we starve? Well, we won’t want to, so the displaced will change jobs at first, competing with tradespeople and those who work with their hands. That will devalue their work because, again, most people won’t be able to afford those now oversaturated, overly available services to begin with.

Would we riot? Demand techno-communism? Have a UBI? Descend into a violent hellscape where the poor kill each other for scraps while the rich live in fortresses? Or maybe some bright/terrible future I’m too limited to imagine. I’m not sure. If I knew, I doubt I’d be posting about it here.
