Comment on OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity
Floey@lemm.ee 6 months ago
This fear-mongering just benefits Altman. If his product is powerful enough to be a threat to humanity, then it is also powerful enough to do many useful things, things it has not proven itself capable of. Ironically, spreading fear about its capabilities will likely raise investment, so if you actually are afraid of OpenAI somehow arriving at a dangerous AGI, then you should really be trying to convince people of its lack of real utility.
tal@lemmy.today 6 months ago
The guy complaining left the company. I don’t think he stands to benefit.
He also didn’t say that OpenAI was on the brink of having something like this.
Floey@lemm.ee 6 months ago
I didn’t say it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like “What can this actually do?” He obviously stands to gain from people thinking they are on the verge of AGI. And someone looking for a new job in the field also stands to gain from it.
As for the software thing, if it’s done by anyone, it won’t be OpenAI or the megacorporations following in its footsteps. They seem insistent on throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models, hoping they’ll reach some kind of tipping point.