tal@lemmy.today 1 day ago
Why is so much coverage of “AI” devoted to this belief that we’ve never had automation before (and that management even really wants it)?
I’m going to set aside the question of whether any given company, timeframe, or technology is actually effective. I don’t really think that’s what you’re aiming to address.
If it just comes down to “Why is AI special as a form of automation? Automation isn’t new!”, I think I’d give two reasons:
It’s a generalized form of automation
Automating a lot of farm labor via mechanization was a big deal, but it mostly contributed to, well, farming. It didn’t directly result in automating a lot of manufacturing or something like that.
That isn’t to say that we’ve never had technologies that offered efficiency improvements across a wide range of industries. Electric lighting, I think, might be a pretty good example of one. But technologies that do that are not that common.
kagis
en.wikipedia.org/…/Productivity-improving_technol…
This has some examples. Most of those aren’t all that generalized. They do list electric lighting in there. The integrated circuit is in there. Improved transportation. But other things, like mining machines, are not.
It has a lot of potential
If one can go produce increasingly-sophisticated AIs — and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations — there’s a pathway to automating darn near everything that humans do today using that technology. Electrical lighting could clearly help productivity, but it can only take things so far.
It’s ultimately frustrating to me that I suspect AI here. There are weird inconsistencies.
But, come on.
It has a lot of potential
Really? That’s what everyone says about their toddler while it pukes.
and let's assume, for the sake of discussion, that we don't run into any fundamental limitations
We already know there are massive fundamental limitations. All of the big-name AI companies are all in on LLMs, which can’t do anything that hasn’t been done before, unless it’s arbitrarily outputting something randomly mashed together, which is not what you want for anything important. It’s a dead end without humans doing things it can copy. When a new coding language is developed, an LLM can’t use it until lots and lots of people have written code in it for the model to suck up and vomit forth.
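To make that concrete with a deliberately crude sketch: a toy bigram Markov chain (nowhere near a real transformer, and the training text and the made-up token “framelang” are purely illustrative) can only ever continue with tokens it has already seen in training:

```python
import random
from collections import defaultdict

# Train a toy bigram model: record which token follows which.
def train(corpus):
    model = defaultdict(list)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

# Generate by sampling only from transitions seen in training.
def generate(model, start, length=8):
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # token never seen in training: dead end
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran off")
print(generate(model, "the"))        # remixes the training text
print(generate(model, "framelang"))  # hypothetical brand-new token:
                                     # the model has nothing to offer
```

A real LLM smooths this over with subword tokens and probabilities instead of hard dead ends, but the underlying constraint is the same: the distribution it samples from comes entirely from what people have already written.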
LLMs, which is what all of the general-purpose AIs are, cannot be a long-term solution to anything unless we just pause technology and society at whatever point they can handle ‘everything’. And LLMs have already peaked, yet that is supposedly the road to general AI.
TehPers@beehaw.org 1 day ago
There is a fundamental limitation of all LLMs that prevents them from doing as much as you might think, regardless of how accurate they are (and they are not):
LLMs cannot take liability. When they make mistakes, they cannot take responsibility for those mistakes. The person who used the LLM will always be liable instead.
So any automation where an LLM replaces a job will just punt that liability to the next person up the chain. Management will literally have nobody to blame but themselves, and that’s their worst nightmare.
Anyway, this is of course assuming capabilities that don’t exist.
lvxferre@mander.xyz 1 day ago
Interestingly enough, not even making them actually intelligent would be enough to make them liable - because you can’t punish or reward them.
TehPers@beehaw.org 1 day ago
Yep! You would need not only an AI superintelligence capable of reflecting and adapting, but legislation which holds those superintelligences liable and grants them the rights and obligations of a human. Because there is no concept of reward or punishment to an LLM, they can never be replacements for people.
lvxferre@mander.xyz 1 day ago
It’s more than that: they’d need to have desires, aversions, goals. That is not automatically granted by intelligence; in our case it comes from our instincts as animals. So perhaps you’d need to actually evolve the AGI systems you develop, Darwin-style, and that would be way more massive than a single AGI, let alone the “put glue on pizza lol” systems we’re frying the planet for.
Powderhorn@beehaw.org 1 day ago
I mean, corporations are people. How is this less reasonable?