The cat is out of the bag: despite many years of warning before this and similar technology became widely available, nobody was really prepared for it, and everyone is now acting solely in their own best interests (or what they believe those interests to be). The biggest failure, I think, is that despite the warning signs appearing long in advance, every single country failed to enact legislation that could meaningfully protect people, their identities and their work, while still leaving enough room for research and the beneficial use of generative AI (or at least for finding beneficial use cases).

In a way, this is the flip side of giving everyone easy access to cutting-edge technology like machine learning. I don’t want technology itself to become the target of censorship, but where it is used in ways that harm people, as in the examples from the article and many more, there should be mechanisms, legal and otherwise, for victims to fight back effectively.

  • towerful@programming.dev · 7 months ago
    nasty things people do with AI [trigger warning]

    “I went on to this stream because somebody gave me a heads up and I went on and heard my own voice reading rape porn. That’s the level of stuff we’ve had to deal with since this game came out and it’s been horrible, honestly.”

    Amelia Tyler.

    I cannot imagine going into a stream of someone playing a game you have poured your heart and soul into for years and hearing your own voice reading stuff like that.

    Edit: fixing spoiler tag.

  • Megaman_EXE@beehaw.org · 7 months ago

    And we thought identity theft was shitty before. I hope we’ll have better tools to identify AI voices in the future; right now, in some cases, I have a hard time telling an actual person from a faked voice.

    • DdCno1@beehaw.orgOP · 7 months ago

      This problem cannot be solved by tools alone, because any detection tool can in turn be used to make AI-generated content more realistic (adversarial training).

      • localhost@beehaw.org · 7 months ago

        I’d honestly go one step further and say that the problem cannot be fully solved period.

        Voice cloning has a limited set of use cases: commercial (voice acting), malicious (impersonation), accessibility (TTS readers), and entertainment (porn, non-commercial voice acting, etc.).

        Out of all of these, only commercial use can really be regulated away, as corporations tend to be risk-averse. Accessibility use is mostly not an issue, since it usually doesn’t matter whose voice is being used as long as it’s clear and understandable. Then there’s entertainment. This one is both the most visible and arguably the least likely to disappear. Long story short, convincing voice cloning is easy - there are cutting-edge projects for it on GitHub, written by a single person, trained on a single PC and capable of running locally on average hardware. People are going to keep using it, just like they used Photoshop to swap faces and manual audio editing software to mimic voices in the past. We’re probably better off accepting that this usage is here to stay.

        And lastly, malicious usage - in courts, in scam calls, in defamation campaigns, etc. There’s a strong incentive for malicious actors to develop and improve these technologies. We should absolutely try to find ways to limit their usage, but it will be an eternal cat-and-mouse game. Our best bet is to minimize how much we trust voice recordings as a society and, for legal purposes, to develop some kind of cryptographic signature that confirms whether or not a recording was taken with a certified device. These schemes are bound to be tampered with, especially in high-profile cases, but should hopefully limit the damage somewhat.
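The certified-device idea above could look roughly like this. A minimal sketch, assuming a symmetric HMAC key for simplicity; a real scheme would use an asymmetric key pair kept in secure hardware with an attestation chain, and every name and key here is hypothetical:

```python
import hashlib
import hmac

# Hypothetical: a key provisioned into the recorder at manufacture.
# Real devices would hold a private key in secure hardware instead of
# sharing a secret with the verifier.
DEVICE_KEY = b"secret-key-provisioned-at-manufacture"

def sign_recording(audio_bytes: bytes) -> str:
    """Return a tag binding the recording's hash to the device key."""
    digest = hashlib.sha256(audio_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_recording(audio_bytes: bytes, tag: str) -> bool:
    """Check that the recording was tagged by a device holding the key."""
    expected = sign_recording(audio_bytes)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, tag)

recording = b"raw audio samples..."
tag = sign_recording(recording)
assert verify_recording(recording, tag)
assert not verify_recording(recording + b"tampered", tag)
```

Any edit to the audio invalidates the tag, which is the point: a verifier can tell a recording straight off a certified device from one that was altered or synthesized afterwards (though, as noted above, the device key itself becomes the weak point).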

        • DdCno1@beehaw.orgOP · 7 months ago

          The only way to limit the damage is the tedious, old-fashioned way: an honest debate and thorough public education, followed by laws and regulations backed up by international treaties. This takes a long time, however, and the tech is evolving very quickly - too quickly. Self-regulation isn’t working, and there are lots of bad actors, from pervy individuals to certain nation states (the likes of Russia, Iran and China have used generative AI to manipulate public opinion), which need to be contained.

  • HatchetHaro · 7 months ago

    I feel there needs to be more nuance in how this AI is used.

    For commercial settings (including streaming), permission from the voice actors must be given first, or, at the very least, they must be monetarily compensated at their full rates for the amount of time those voice lines are used.

    However, if I want to mod Baldur’s Gate 3 for fun and add a new companion to the game without any expectation of profit, then as long as my use of the Narrator’s and other companions’ voice lines doesn’t stray from the established style of the game, I should be allowed to use AI to create those voice lines until I secure funding (through donations or Patreon) to actually hire the voice actors themselves.

    • Melmi · 7 months ago

      I disagree. It would be better to set a precedent that using people’s voices without permission is not okay. Even in your example, you’re suggesting that you would run a Patreon while publishing mods that contain AI-made voice clips. In that scenario, you’ve made money from these unauthorized voice recreations. It doesn’t matter if you’re hoping to one day hire the VAs themselves; in the interim, you’re profiting off their work.

      Ultimately, though, I don’t think it matters whether you’re making money or not. I got caught up in the tech excitement when voice AI first appeared, but after the strike, and after more VAs and other actors shared their opinions on it, I’ve been reminded of just how important consent is.

      In the OP article, Amelia Tyler isn’t saying anything about making money off her voice, she said “to actually take my voice and use it to train something without my permission, I think that should be illegal”. I think that’s a good line to draw.

      • TehPers@beehaw.org · 7 months ago

        From the quotes in the article, I have to agree with drawing that line. On the one hand, making a non-profit mod with AI-generated voices has no opportunity cost for the actors, since they wouldn’t have been hired for it anyway. On the other hand - and this is why I’m leaning against training AI on people’s voices at all without permission - it can cause real harm for an actor to hear themselves saying things they would find offensive and would never say in reality. In other words, AI voices can directly harm people (and already have, at least according to the article).

        • DdCno1@beehaw.orgOP · 7 months ago

          It’s not as if quality mods even need fake voice acting. There’s a vibrant modding scene around the Gothic series - and several modders have managed to convince the original German voice actors to lend their voices.