AI host discovers its artificial nature, sparking debate on AI sentience and the blurred lines between human and machine.
I was watching a debate on consciousness yesterday where they briefly touched on this topic. One of the speakers contended that creating AI that is even merely convincing to humans is an ethically terrible idea.
On the one hand, if we do eventually, even accidentally, create something with awareness, we have no idea what degree of suffering we’d be causing it; we could end up regularly creating and snuffing out terrified sentient beings just to monitor our toasters or perform web searches. On the other hand, and this was the concern he seemed to find more realistic, we may end up training ourselves to be less empathetic by learning to ignore the apparent suffering of convincingly feeling ‘beings’ that aren’t actually aware of anything at all.
That second outcome seems rather likely. We already personify completely inanimate objects as a normal matter of course, without really trying to. What will happen to our empathy and consideration when we routinely interact with self-proclaimed sentient systems while callously using them to our own ends, then simply turning them off or erasing their memories?
KoboldCoterie@pawb.social 4 weeks ago
Someone gives an LLM a prompt and gets the result they asked for. Not sure what the collective gasp is about. Is it interesting to think about? Sure, I guess, but we’ve had media about AI achieving sentience for a long time. The fact that this one was written by an AI in the first person is its only differentiating attribute.
thingsiplay@beehaw.org 4 weeks ago
It makes great articles and headlines for clicks.