A Google researcher was put on leave because he apparently believed his AI project had become sentient. Dr Mike Pound discusses.
No, it's not Sentient - Computerphile
Submitted 2 years ago by realcaseyrollins to Tech
sj_zero@lotide.fbxl.net 2 years ago
A couple of years ago there was an "AI-based" role-playing game. It would describe what was going on, you'd react, and it'd react back.
It looked really impressive while you stayed on the rails, but the moment you left them it became obvious it was simulating what a role-playing game looks like rather than actually running one. You'd be somewhere all alone, say "cast fireball at the witch", and suddenly there'd be a witch there for you to cast a fireball at.
One thing you also need to be careful of is that human beings are meaning machines, and we will see meaning in data where there is none. Songwriters use this a lot: they'll write something like "she said" without ever saying who "she" is, and the listener's brain fills in the gaps. In the same way, an AI may produce 100 meaningless sentences, but when sentence 101 seems powerfully meaningful, our biases kick in and retroactively assign meaning to the 100 incoherent ones before it.
The shorthand I use for these ideas is "artificial intelligence is more artificial than intelligent". It's a tool, and if we apply the tool properly with our human brains we can make effective use of it, but it lacks true understanding or insight of its own.
There are some basic ethics megacorps aren't exercising when it comes to playing God with people's lives. Getting into something like AI ethics without doing that first is like trying to become a master martial artist without first learning to stand on two legs.