There is a fundamental limitation of all LLMs that prevents them from doing as much as you might think, regardless of how accurate they are (and they are not):
LLMs cannot take liability. When they make mistakes, they cannot take responsibility for those mistakes. The person who used the LLM will always be liable instead.
So any automation that comes from LLMs replacing jobs will end up punting that liability to the next person up the chain. Management will literally have nobody to blame but themselves, and that’s their worst nightmare.
Anyway, this is of course assuming capabilities that don’t exist.
lvxferre@mander.xyz 1 day ago
Interestingly enough, not even making them actually intelligent would be enough to make them liable - because you can’t punish or reward them.
TehPers@beehaw.org 1 day ago
Yep! You would need not only an AI superintelligence capable of reflecting and adapting, but legislation which holds those superintelligences liable and grants them the rights and obligations of a human. Because there is no concept of reward or punishment to an LLM, they can never be replacements for people.
lvxferre@mander.xyz 1 day ago
It’s more than that: they’d need to have desires, aversions, goals. That is not automatically granted by intelligence; in our case it comes from our instincts as animals. So perhaps you’d need to actually evolve the AGI systems you develop, Darwin-style, and that would be a way more massive undertaking than a single AGI, let alone the “put glue on pizza lol” systems we’re frying the planet for.
Powderhorn@beehaw.org 1 day ago
I mean, corporations are people. How is this less reasonable?
TehPers@beehaw.org 1 day ago
It’s not really (which says more about corporations than anything).