Comment on Baldur's Gate 3 actors reveal the darker side of success fuelled by AI voice cloning
Megaman_EXE@beehaw.org 8 months ago
And we thought identity theft was shitty before. I hope that we’ll have better tools to identify AI voices in the future. In some cases right now I have a hard time telling the difference between an actual person and a faked voice.
DdCno1@beehaw.org 8 months ago
This problem cannot be solved by detection tools, because those same tools can be used to train generators to produce more convincing output (adversarial training).
localhost@beehaw.org 8 months ago
I’d honestly go one step further and say that the problem cannot be fully solved period.
The uses for voice cloning fall into a few categories: commercial (voice acting), malicious (impersonation), accessibility (TTS readers), and entertainment (porn, non-commercial voice acting, etc.).
Out of all of these, only commercial use can really be regulated away, as corporations tend to be risk averse. Accessibility use is mostly not an issue, since it usually doesn’t matter whose voice is being used as long as it’s clear and understandable.

Then there’s entertainment. This one is both the most visible and arguably the least likely to disappear. Long story short, convincing voice cloning is easy: there are cutting-edge projects for it on GitHub, written by a single person, trained on a single PC, and capable of running locally on average hardware. People are going to keep using it, just like they used Photoshop to swap faces and manual audio editing software to mimic voices in the past. We’re probably better off accepting that this usage is here to stay.
And lastly, malicious usage - in courts, in scam calls, in defamation campaigns, etc. There’s a strong incentive for malicious actors to develop and improve these technologies. We should absolutely try to find ways to limit their use, but this will be an eternal cat-and-mouse game. Our best bet is to minimize how much we trust voice recordings as a society and, for legal contexts, to develop some kind of cryptographic signature that confirms whether or not a recording was taken on a certified device. These are bound to be tampered with, especially in high-profile cases, but should hopefully somewhat limit the damage.
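To make the certified-device idea concrete, here’s a minimal sketch of the sign-then-verify flow. Everything in it is an assumption for illustration (the key handling, the function names); a real provenance scheme such as C2PA would use asymmetric signatures so verifiers never hold the signing secret, but HMAC keeps this runnable with only the standard library:

```python
import hashlib
import hmac
import os

# Hypothetical per-device secret, provisioned at manufacture time.
# (Real schemes would embed an asymmetric key pair in secure hardware.)
DEVICE_KEY = os.urandom(32)

def sign_recording(audio_bytes: bytes, key: bytes) -> str:
    """Device side: compute an authentication tag over the raw audio."""
    return hmac.new(key, audio_bytes, hashlib.sha256).hexdigest()

def verify_recording(audio_bytes: bytes, tag: str, key: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, audio_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

recording = b"\x00\x01\x02\x03"  # stand-in for captured audio samples
tag = sign_recording(recording, DEVICE_KEY)

print(verify_recording(recording, tag, DEVICE_KEY))            # untouched audio verifies
print(verify_recording(recording + b"\xff", tag, DEVICE_KEY))  # any edit breaks the tag
```

The point of the design is that the tag is bound to the exact audio bytes: a cloned or edited recording can’t produce a valid tag without the device key, which is exactly the property the legal use case needs.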
Megaman_EXE@beehaw.org 8 months ago
Welp…we’re boned I guess
DdCno1@beehaw.org 8 months ago
The only way to limit the damage is the tedious, old-fashioned way: an honest debate and thorough public education, followed by laws and regulations backed up by international treaties. This takes a long time, however; the tech is evolving very quickly, too quickly, self-regulation isn’t working, and there are lots of bad actors, from pervy individuals to certain nation states (the likes of Russia, Iran and China have used generative AI to manipulate public opinion), all of which need to be contained.