
SoftBank plans to cancel out angry customer voices using AI

58 likes

Submitted 10 months ago by sabreW4K3@lazysoci.al to technology@beehaw.org

https://arstechnica.com/information-technology/2024/06/new-emotion-canceling-ai-tech-aims-to-shield-call-workers-from-angry-customers/

Comments

  • kibiz0r@midwest.social 10 months ago

    Interacting with people whose tone doesn’t match their words may induce anxiety as well.

    Have they actually proven this is a good idea, or is this a “so preoccupied with whether or not they could” scenario?

    • sabreW4K3@lazysoci.al 10 months ago

      It’s probably the Jurassic Park effect

  • Nath@aussie.zone 10 months ago

    The biggest problem I see with this is the scenario where calls are recorded. They’re recorded in case we hit a “he said, she said” situation. If some issue were to be escalated as far as a courtroom, the value of the recording to the business is greatly diminished.

    Even if the words the call agent gets are 100% verbatim, a lawyer can easily argue that a significant percentage of the message is in tone of voice. If that’s lost and the agent misses a nuance of the customer’s intent, they’ll have a solid case against the business.

    • sneezycat@sopuli.xyz 10 months ago

      I see no problem: they can record the original call and post-process it with AI live for the operators. The recordings would be the original audio (see the sketch after this thread).

      • geissi@feddit.de 10 months ago

        Besides providing verbatim records of who said what, there is a second can of worms: forming any sort of binding agreement when the two sides are effectively having two different conversations.

        I think this is what the part about the missed nuance means.

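A minimal sketch of the split sneezycat describes: raw caller audio goes untouched into the archived recording, while a transformed copy feeds the operator's headset. Everything here is hypothetical; `soften_voice()` is a stand-in for SoftBank's actual model, and the stream objects are simplified for illustration.

```python
# Hypothetical sketch: archive the caller's raw audio for the legal record
# while the operator hears an AI-softened version in real time.
import wave

SAMPLE_RATE = 16000   # 16 kHz mono, 16-bit PCM (assumed telephony format)

def soften_voice(frame: bytes) -> bytes:
    """Placeholder transform; the real system adjusts pitch/inflection."""
    return frame  # identity here, just to show where the model would sit

def handle_call(mic_frames, operator_out, archive_path):
    """Tee each incoming audio frame: original bytes to the recording,
    transformed bytes to the operator's live feed."""
    with wave.open(archive_path, "wb") as archive:
        archive.setnchannels(1)
        archive.setsampwidth(2)        # 16-bit samples
        archive.setframerate(SAMPLE_RATE)
        for frame in mic_frames:       # raw frames from the caller
            archive.writeframes(frame)                # untouched original
            operator_out.write(soften_voice(frame))   # softened live copy
```

The point is only that the legal record and the operator feed are separate sinks; the transformation never touches what gets archived.
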
  • Cybrpwca@beehaw.org 10 months ago

    I think I get what the article is saying, but all I can imagine is Siri calmly reading to me the vilest insults ever written.

  • perishthethought@lemm.ee 10 months ago

    Am I crazy or is 10,000 samples nowhere near enough for training people’s voices?

    • eveninghere@beehaw.org 10 months ago

      If you have a pre-trained model or a classical voice-matching algorithm as the basis, a few samples might suffice.

    • Kissaki@beehaw.org 10 months ago

      I don’t think that’s too few samples for it to work.

      What they train for is rather specific: identifying anger and hostility characteristics, and adjusting pitch and inflection (see the sketch after this thread).

      Dunno if you meant it like that when you said “training people’s voices”, but they’re not replicating voices or interpreting meaning.

      From the article: “…learned to recognize and modify the vocal characteristics associated with anger and hostility. When a customer speaks to a call center operator, the model processes the incoming audio and adjusts the pitch and inflection of the customer’s voice to make it sound calmer and less threatening.”

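For a rough feel of the “adjust pitch and inflection” step quoted above, here is a toy example using librosa's off-the-shelf pitch shifting. This is not SoftBank's method, just a generic illustration; the anger score and the shift mapping are invented for the example.

```python
# Toy illustration of pitch adjustment, NOT SoftBank's actual model.
# The "anger score" is a stand-in for whatever classifier the real
# system learned from its labeled voice samples.
import librosa

def soften(path: str, anger_score: float):
    """Load a clip and shift pitch down in proportion to how angry
    an (assumed, external) classifier rated it."""
    y, sr = librosa.load(path, sr=None)        # keep native sample rate
    # Hypothetical mapping: angrier input -> larger downward shift,
    # capped at two semitones so the voice stays recognizable.
    n_steps = -2.0 * min(max(anger_score, 0.0), 1.0)
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps), sr
```

A real-time system would presumably also smooth inflection continuously rather than apply one static shift per clip, but the basic operation is the same.
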
  • blindsight@beehaw.org 10 months ago

    This seems like it might work really well. We’ve evolved to be social creatures, and internalizing the emotions of others is literally baked into our DNA (mirror neurons), so filtering out the emotional “noise” from customers seems, to me, like a brilliant way to improve the working conditions for call centre workers.

    It’s not like you can’t also tell the emotional tone of the caller based on the words they’re saying, and the call centre employees will know that voices are being changed.

    Also, I’m not so sure about using anonymous Redditor comments as the basis for journalism. I know why it’s done, but I’d rather hear what a trained psychologist has to say about this, y’know?

  • bitwolf@lemmy.one 10 months ago

    Dang, swearing was one of my strategies to get the bot to forward me to a representative

  • Xirup@yiffit.net 10 months ago

    In my country, 99% of the time you contact technical support, a poorly made bot (actually a while loop) responds with ambiguous, pre-written answers. The only way to talk to a human is to go to the branch in question in person, so there’s nothing to worry about here.

    • Kissaki@beehaw.org 10 months ago

      So what you’re saying is that we need AI to interface in-store as well? /s

  • autotldr@lemmings.world [bot] 10 months ago

    🤖 I’m a bot that provides automatic summaries for articles:

    According to a report from the Japanese news site The Asahi Shimbun, SoftBank’s project relies on an AI model to alter the tone and pitch of a customer’s voice in real-time during a phone call. SoftBank’s developers, led by employee Toshiyuki Nakatani, trained the system using a dataset of over 10,000 voice samples, which were performed by 10 Japanese actors expressing more than 100 phrases with various emotions, including yelling and accusatory tones.

    By analyzing the voice samples, SoftBank’s AI model has reportedly learned to recognize and modify the vocal characteristics associated with anger and hostility. In a Reddit thread on SoftBank’s AI plans, call center operators from other regions related many stories about the stress of dealing with customer harassment.

    Harassment of call center workers is a very real problem, but given the introduction of AI as a possible solution, some people wonder whether it’s a good idea to essentially filter emotional reality on demand through voice synthesis. By reducing the psychological burden on call center operators, SoftBank says it hopes to create a safer work environment that enables employees to provide even better services to customers.

    — Saved 78% of original text.

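As a rough sense of why ~10,000 labeled clips could be workable for the narrow “recognize anger characteristics” half described in the summary above, here is a generic sketch of training a small anger-vs-neutral classifier on averaged MFCC features. None of this reflects SoftBank's actual architecture; the folder layout, labels, and model choice are all assumptions for illustration.

```python
# Generic sketch of the "recognize anger characteristics" step.
# Assumptions: clips live in angry/ and neutral/ folders as .wav files;
# averaged MFCCs + logistic regression stand in for SoftBank's model.
from pathlib import Path

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: Path) -> np.ndarray:
    """Average MFCCs over time: a crude fixed-length summary of one clip."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def train(data_dir: str) -> LogisticRegression:
    """Fit a binary anger/neutral classifier from labeled clip folders."""
    X, labels = [], []
    for label, folder in enumerate(["neutral", "angry"]):
        for wav in Path(data_dir, folder).glob("*.wav"):
            X.append(clip_features(wav))
            labels.append(label)
    model = LogisticRegression(max_iter=1000)
    model.fit(np.array(X), np.array(labels))
    return model
```

With only two classes and a constrained acting script (10 actors, 100+ phrases), a dataset of this size is far more plausible than it would be for open-ended voice replication, which is the distinction Kissaki draws in the thread above.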