I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.
But what seems much more likely, given what we’ve seen already, is corporations pushing AI that they know isn’t really capable of what they say it is and everyone going along with it because of money and technological ignorance.
This is much more likely, and you can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AI that have no idea what they’re talking about, endless fake reviews and articles. It’s already hurt people, but so far only on a small scale.
But the profitability of pushing AI early, especially if you’re just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.
That’s what’s scary about it. It isn’t AI itself, it’s AI as a vector for corporate recklessness.
thingsiplay@beehaw.org 4 months ago
How did he calculate the 70% chance? Without an explanation, this opinion is no more important than a Reddit post. It’s just marketing fluff talk, so people talk about AI and in return a small percentage get converted into people interested in AI. Let’s call it clickbait talk.
First he talks about a high chance that humans get destroyed by AI. He follows with a prediction that it would achieve AGI in 2027 (only 3 years from now). No. Just no. There is a long way to go to get general intelligence. But isn’t he trying to sell you on why AI is great? He follows with:
Ah yes, he does.
LibertyLizard@slrpnk.net 4 months ago
Insider from OpenAI PR department speaks out!
joelfromaus@aussie.zone 4 months ago
Maybe they asked ChatGPT?
MagicShel@programming.dev 4 months ago
ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think it’s onto us…
(I kid. I attribute no sentience or intelligence to ChatGPT.)
eveninghere@beehaw.org 4 months ago
This is a horoscope trick. They can always say AI destroyed humanity.
Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!
Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!
Whatever. You can point at whatever catastrophe and there is always AI behind it, because AI has been a basic technology since at least 2014.
chicken@lemmy.dbzer0.com 4 months ago
The person who predicted a 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because it wasn’t taking this seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo; this is all explicit in the article. The idea that this guy is motivated by trying to do marketing for OpenAI is just wrong. The article links to some of his extensive commentary, where he is advocating for more government oversight specifically of OpenAI and other big companies, instead of the favorable regulations that company is pushing for. The idea that his belief in existential risk is disingenuous also doesn’t make sense; it’s clear that he and other people concerned about this take it very seriously.