MIT, barely two years out from a study saying there is no tangible business benefit to implementing AI, just released a study saying it is now capable of taking over more than 10% of jobs. Maybe that’s hyperbolic, but you can see that the implementation cost would have to be massive for that not to be worth it. And we’re still pretty much just starting out.
Jayjader@jlai.lu 1 day ago
I would love to read that study, as going off of your comment I could easily see it being a case of “more than 10% of jobs are bullshit jobs à la David Graeber so having an « AI » do them wouldn’t meaningfully change things” rather than “more than 10% of what can’t be done by previous automation now can be”.
CatsPajamas@lemmy.dbzer0.com 1 day ago
Summarized by Gemini
The study you are referring to was released in late November 2025. It is titled “The Iceberg Index: Measuring Workforce Exposure in the AI Economy.” It was conducted by researchers from MIT and Oak Ridge National Laboratory (ORNL). Here are the key details from the study regarding that “more than ten percent” figure:
…mit.edu/…/rethinking-ais-impact-mit-csail-study-…
Jayjader@jlai.lu 1 day ago
I’ll be honest, that “Iceberg Index” study doesn’t convince me just yet. It’s entirely built on using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of such an approach are in paid journals that I can’t access. I also can’t figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available “tools” (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor’s 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide which skills it covers, but they claim they manually reviewed this LLM’s output, so I guess that counts.
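To spell out the procedure I’m describing, here’s a rough hypothetical sketch of that mapping step as I read it; every name and the prompt wording are mine, nothing here is from the paper:

```python
# Rough, hypothetical sketch of the tool-to-skill mapping step as I understand
# it from the report; names and structure are mine, not the authors'.

def call_llm(prompt: str) -> list[str]:
    """Placeholder for whichever model/prompt the authors actually used."""
    return []  # stub so the sketch runs; the real step would return skill names


def skills_covered_by_tool(tool_description: str, bls_skills: list[str]) -> set[str]:
    """Ask the LLM which of the 923 listed skills a given tool can perform."""
    prompt = (
        "Which of these skills can the following tool perform?\n"
        f"Tool: {tool_description}\n"
        f"Skills: {', '.join(bls_skills)}"
    )
    return set(call_llm(prompt))


def build_coverage(tool_descriptions: list[str], bls_skills: list[str]) -> dict[str, set[str]]:
    """Map each of the ~13k tools (MCPs, Zapier, OpenTools) to covered skills.

    The report says this LLM output was then manually reviewed; how a job
    ends up counted as "exposed" once its skills are covered is the part
    I can't pin down.
    """
    return {tool: skills_covered_by_tool(tool, bls_skills) for tool in tool_descriptions}
```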
from iceberg.mit.edu/report.pdf
Large Population Models is arxiv.org/abs/2507.09901, which mostly references github.com/AgentTorch/AgentTorch, which gives the following as an example of use:
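I won’t reproduce their exact snippet here, but the general “LLM agents stand in for people” pattern looks roughly like this; to be clear, this is my own loose, hypothetical paraphrase, not AgentTorch’s actual API:

```python
# Loose, hypothetical illustration of LLM-backed agents standing in for a
# human population; none of these names come from AgentTorch.
import random
from dataclasses import dataclass


def llm_decide(persona: str, situation: str) -> str:
    """Stand-in for an LLM call that role-plays a persona's decision.

    Picks randomly here so the sketch runs; the real approach prompts a
    language model with the persona and the situation.
    """
    return random.choice(["adopts_tool", "ignores_tool"])


@dataclass
class SimulatedWorker:
    occupation: str
    persona: str  # short description fed to the "LLM" as its identity

    def step(self, situation: str) -> str:
        return llm_decide(self.persona, situation)


def run_population(workers: list[SimulatedWorker], situation: str) -> float:
    """Fraction of the simulated population that 'adopts' AI tooling."""
    decisions = [w.step(situation) for w in workers]
    return decisions.count("adopts_tool") / len(decisions)


if __name__ == "__main__":
    population = [
        SimulatedWorker("payroll clerk", "detail-oriented, mid-career, risk-averse")
        for _ in range(1000)
    ]
    share = run_population(population, "an AI tool now covers most of your listed skills")
    print(f"simulated adoption share: {share:.1%}")
```

The workforce-exposure numbers rest on aggregates read off simulations like this, which is exactly why I want to see the validation studies they cite.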
The whole thing perfectly straddles the line between bleeding-edge research and junk science for someone like me who hasn’t been near academia in 7 years. Most of the procedure looks like they know what they’re doing, but if the entire thing is built on a faulty premise then there’s no guarantee any of their results hold.
In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn’t necessarily a case of MIT as a whole changing its tune.
(The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)