Interestingly enough, not even making them actually intelligent would be enough to make them liable - because you can’t punish or reward them.
TehPers@beehaw.org 1 day ago
Yep! You would need not only an AI superintelligence capable of reflecting and adapting, but legislation which holds those superintelligences liable and grants them the rights and obligations of a human. Because there is no concept of reward or punishment for an LLM, they can never be replacements for people.
lvxferre@mander.xyz 1 day ago
It’s more than that: they’d need to have desires, aversions, goals. Those are not automatically granted by intelligence; in our case they come from our instincts as animals. So perhaps you’d need to actually evolve the AGI systems you develop, Darwin-style, and that would be a way more massive undertaking than a single AGI, let alone the “put glue on pizza lol” systems we’re frying the planet for.
Powderhorn@beehaw.org 1 day ago
I mean, corporations are legally people. How is this any less reasonable?
TehPers@beehaw.org 1 day ago
It’s not really (which says more about corporations than anything).
Powderhorn@beehaw.org 1 day ago
I’m reminded of the fairy tale of the two squirrels in the Black Forest. As fall came to pass, they BALLROOM!