• maegul (he/they)@lemmy.ml · ↑242 · 7 months ago

    The moment word got out that Reddit (and now Stack Overflow) were tightening APIs to then sell our conversations to AI was when the game was given away. And I’m sure there were moments or clues before that.

    This was when the “you’re the product if it’s free” arrangement metastasised into “you’re a data farming serf for a feudal digital overlord whether you pay or not”.

    Google search transitioning from good search engine for the internet -> bad search engine serving SEO crap and ads -> “just use our AI and forget about the internet” is more of the same. That their search engine is dominated by SEO and ads is part of it … the internet, i.e. other people’s content, isn’t valuable any more, not with any sovereignty or dignity, least of all the kind envisioned in the ideals of the internet.

    The goal now is to be the new internet, where you can bet your ass that there will not be any Tim Berners-Lee open sourcing this. Instead, the internet that we all made is now a feudal landscape on which we all technically “live” and in which we all technically produce content, but which is now all owned, governed and consumed by big tech for their own profits.


    I recall back around the start of YouTube, which IIRC was the first hype moment for the internet after the dotcom crash, there was talk about what structures would emerge on the internet … whether new structures would be created or whether older economic structures would impose themselves and colonise the space. I wasn’t thinking too hard at the time, but it seemed intuitive to me that older structures would at least try very hard to impose themselves.

    But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.

    • classic@fedia.io · ↑46 · 7 months ago

      Well that’s a happy note on which to end this day

      (Well written though, thank you)

    • erwan@lemmy.ml · ↑27 · 7 months ago

      Especially coming from Google, who was one of the good guys pushing open standards and interoperability.

      • lanolinoil@lemmy.world · ↑10 · 7 months ago

        We ruined the world by painting certain men or groups as bad. The centralization of power is the bad thing. That’s the whole purpose of all republics, as I understand it. Something we used to know and have almost completely forgotten.

      • gh0stcassette · ↑9 · 7 months ago

        Eh, open-sourcing is just good business, the only reason every big tech company doesn’t is that loads of executives are stuck in the past. Of course having random people on the internet do labor for you for free is something Google would want. They get the advantage of tens of thousands of extra eyes on their code pointing out potential security vulnerabilities and they can just put all the really shady shit in proprietary blobs like Google Play Services, they’re getting the best of both worlds as far as they’re concerned.

        Large publicly-traded companies do not do anything for the good of anyone but themselves, they are literally Legally Obligated to make the most profitable decisions for themselves at all times. If they’re open-sourcing things it’s to make money, not because they were “good guys”.

    • Hoxton@lemmy.world · ↑20 · 7 months ago

      Well said! I’m still wondering what happens when the inevitable ouroboros of AI content referencing AI content referencing AI content makes the whole internet a self-perpetuating mess of unreadable content, and renders anything of value these companies once gained basically useless.

      Would that eventually result in fresh, actual human created content only coming from social media? I guess clauses about using your likeness will be popping up in TikTok at some point (if they aren’t already)

      • maegul (he/they)@lemmy.ml · ↑10 · edited · 7 months ago

        I dunno, my feeling is that even if the hype dies down we’re not going back. Like a real transition has happened just like when Facebook took off.

        Humans will still be in the loop through their prompts and various other bits and pieces and platforms (Reddit is still huge) … while we may just adjust to the new standard in the same way that many reported an inability to do deep reading after becoming regular internet users.

        • gh0stcassette · ↑7 · 7 months ago

          I think it’ll end up like Facebook (the social media platform, not the company). Eventually you’ll hit model collapse for new models trained off uncurated internet data once a critical portion of all online posts are made by AI, and it’ll become Much more expensive to create quality, up-to-date datasets for new models. Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda and everyone else will move to newer, less enshittified platforms until the cycle repeats.

          Maybe we’ll see an increase in discord/matrix style chatroom type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human. I also think most current fediverse platforms are marginally more resistant to AI bots, because individual servers can have an application process that verifies your humanity, and can then defederate from instances that don’t do that.

          Basically, anything that can segment the Unceasing Firehose of traffic on the big social media platforms into smaller chunks that can be more effectively moderated, ideally by volunteers, because a large tech company would probably just automate moderation and then you’re back at square one.
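
          The “models trained on uncurated AI output” feedback loop can be sketched with a toy simulation (purely illustrative, not any real training pipeline): fit a trivial Gaussian “model” to data, generate synthetic data from it, fit the next generation only to that synthetic data, and repeat.

```python
import random
import statistics

# Toy "model collapse" sketch (hypothetical): each generation fits a
# Gaussian "model" (mean, stdev) to samples produced by the previous
# generation's model instead of to real data. Sampling error compounds,
# and the fitted spread tends to drift away from the real distribution.

random.seed(0)

def fit(samples):
    """'Train' a model: estimate mean and stdev of the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    """'Generate' synthetic data from the fitted model."""
    return [random.gauss(mean, stdev) for _ in range(n)]

# Generation 0 trains on real data drawn from N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(200)]

stdevs = []
for gen in range(20):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    # Next generation trains ONLY on the previous model's output.
    data = generate(mean, stdev, 200)

print(f"gen 0 stdev: {stdevs[0]:.3f}, gen 19 stdev: {stdevs[-1]:.3f}")
```

          Curation breaks the loop: if each generation's training set were re-anchored to real (human) data, the estimates would stay pinned near the true distribution instead of random-walking on their own output.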

          • Hoxton@lemmy.world · ↑2 · 7 months ago

            Honestly, that sounds like the most realistic outcome. If the history of the internet is anything to go by, the bubble will reach critical mass and not so much pop, as slowly deflate when something else begins to grow and take its place of hype.

          • maegul (he/they)@lemmy.ml · ↑1 · 7 months ago

            Great take.

            Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda

            Ooof … great way of putting it … “brain melting AI propaganda” … I can almost see a sci-fi short film premised on this image … with the main scene being when a normal-ish person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they’ve become.

            Maybe we’ll see an increase in discord/matrix style chatroom type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human.

            Yep. This is a pretty vital project in the social media space right now that, IMO, isn’t getting enough attention, in part I suspect because a lot of the current movements in alternative social media are driven by millennials and X-gen nostalgic for the internet of 2014 without wanting to make something new. And so the idea of an AI-protected space doesn’t really register in their minds. The problems they’re solving are platform dominance, moderation and lock-in.

            Worthwhile, but in all seriousness about 10 years too late, and after the damage has been done (surely our society would be different if social media hadn’t gone down the path it did from 2010 onward). Now what’s likely at stake is the enshittification or en-slop-ification (slop = unwanted AI-generated garbage) of internet content and the obscuring of quality human-made content, especially for niche interests. Algorithms started this, which alt-social is combating, which is great.

            But good community-building platforms with strong privacy or “enclosing” and AI/bot-protecting mechanisms are needed now. Unfortunately, all of these clones of big-social platforms (Lemmy included) are not optimised for community building and fostering. In fact, I’m not sure I see community hosting as a strength of any social media platform at the moment apart from Discord, which says a lot I think. Lemmy’s private and local-only communities (on the roadmap, apparently) are a start, but still only a modification of the Reddit model.

            • afraid_of_zombies@lemmy.world · ↑2 · 7 months ago

              person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they’ve become.

              I see you have met my Fox News watching parents.

              • maegul (he/they)@lemmy.ml · ↑1 · 7 months ago

                LOL (I haven’t actually met someone like that, in part because I’m not a USian and generally not subject to that type ATM … but I am morbidly curious TBH).

        • Hoxton@lemmy.world · ↑2 · 7 months ago

          You’re absolutely right about not going back. Web 3.0 I guess. I want to be optimistic that a distinction between all the garbage and actual useful or real information will be visible to people, but like you said, general tech and media literacy isn’t encouraging, hey?

          Slightly related, but I’ve actually noticed a government awareness campaign where I live about identifying digital scams. Be nice if that could be extended to incorrect or misleading AI content too.

      • assassin_aragorn@lemmy.world · ↑6 · 7 months ago

        It should end up self-regulating once AI is training on AI-generated material. That’s the downfall of the companies not bothering to clearly identify AI-produced material. It’ll spiral into a hilarious mess.

        • Hoxton@lemmy.world · ↑3 · 7 months ago

          I’m legit looking forward to when Google returns completely garbled and unreadable search results because someone is running an automated ads campaign that sources another automated campaign and so on, with the only reason it rises to the top being that they put in the highest bid.

          I doubt Google will do shit about it, but at least the memes will be good!

        • afraid_of_zombies@lemmy.world · ↑1 · 7 months ago

          Hasn’t it already happened? All culture is derivative, yes, all of it. And look at how much of it is awful, yet we navigate fine. I keep hearing stats like every second YouTube gets 4 more hours of content, and yet I use YouTube daily, despite being very, very confident that only a fraction of a percent of what it has is of any value to me.

          Same for books, magazines, news, podcasts, radio programs, music, art, comics, recipes, articles…

          We already live in the post information explosion. Where the same stuff gets churned over and over again. All I am seeing AI doing is speeding this up. Now instead of a million YouTube vids I won’t watch getting added next week it will be ten million.

      • afraid_of_zombies@lemmy.world · ↑3 · 7 months ago

        Tik Tok was banned so it ain’t coming from there. Can’t get universal healthcare but we can make sure to protect kids from the latest dance craze.

      • Gnome Kat · ↑2 · 7 months ago

        That’s a technical issue that likely can be solved. I doubt some feedback loop of training data will be the downfall of AI… The way to stop it is to refuse to use it (let’s be real, the regulators aren’t gonna do shit).

    • Rolando@lemmy.world · ↑2 · 7 months ago

      But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.

      I didn’t think so either. The funny thing is, Blade Runner, The Matrix, and the whole cyberpunk genre were warning us…

      • maegul (he/they)@lemmy.ml · ↑1 · 7 months ago

        Yea, but this feels quicker than anyone expected. It’s easy to forget, but AlphaGo beating the best in the world was shocking at the time, and no one saw it coming. We hadn’t sorted out what to do with big monopoly corps yet; we weren’t ready for a whole new technology.

  • Elias Griffin@lemmy.world · ↑82 · 7 months ago

    Quote from the subtitle of the article

    and you can’t stop it.

    Don’t ever let life-deprived, perspective-bubble-wearing, uncompassionate, power-hungry manipulators, “News” people, tell you what you can and cannot do. Doesn’t even pass the smell test.

    My advice, if a Media Outlet tries to Groom you to think that nothing you do matters, don’t ever read it again.

    • fukurthumz420@lemmy.world · ↑16 · 7 months ago

      god, i love this statement. it’s so true. people have to understand our collective power. even if the only tool we have is a hammer, we can still beat their doors down and crush them with it. all it takes is organization and willingness.

    • Chaotic Entropy@feddit.uk · ↑8 · 7 months ago

      The implication being that this is the deal that the AI boom is offering, it’s not necessarily an endorsement of that philosophy by the writer.

      • Elias Griffin@lemmy.world · ↑7 · 7 months ago

        I don’t care what the implication was; I didn’t read past the slight/insult to my character, morality and intelligence. Who is some MSM empty suit to play cognitive narrative shaping with me? Absolutely no one.

    • VerticaGG · ↑2 · 7 months ago

      The Atlantic, huh? Alright then, The Atlantic, I’ll remember your name and that you published a piece concluding people are powerless to effect change.

      Now, to steelman: can I square this with the sentiment from Propagandhi’s “A People’s History of the World”?

      …we’ll have to teach ourselves to analyze and understand
      the systems of thought-control.
      And share it with each other,
      never swayed by brass rings or the threat of penalty.
      I’ll promise you - you promise me - not to sell each
      other out to murderers, to thieves
      …who’ve manufactured our delusion that you and me
      participate meaningfully in the process of running
      our own lives. Yeah, you can vote however the fuck
      you want, but power still calls all the shots.
      And believe it or not, even if
      (real) democracy broke loose,
      power could/would just “make the economy scream” until we vote responsibly.

      Does this apply here? The song is talking about ballot boxes and corporate exploitation on a nation-state, imperialist level. The topic at hand is corporate exploitation on a worldwide, colonization-of-attention level.

      So i think the way I best square this question, do we have the ability to do something about it, is this:

      Yes. You can do something. Not in the way that popular media depicts the French Revolution. Revolution will instead be boring. In fact, it already IS: change minds. Change your own mind about whatever forms of domination you have accepted as just. Demand to know who made OpenAI king. While you’re at it, demand to know why imperialist campaigns by “superpowers” were considered just when they backed the Contras. It’s a history lesson we can learn from, believe it or not.

      Will you stay down on your knees, or does power still call all the shots?

  • fukurthumz420@lemmy.world · ↑68 · 7 months ago

    our collective time would be better spent destroying capitalism than trying to stop AI. AI is wonderful in the right social system.

    • jj4211@lemmy.world · ↑13 · 7 months ago

      On the other hand, assuming the social system isn’t the right one, AI fully realized could hypothetically make it more entrenched and more tightly stuck the way it is.

      • TheFriar@lemm.ee · ↑7 · 7 months ago

        Not to mention, any other, more just social system wouldn’t be fucking decimating the environment, ultimately hurting the poorer nations first, for money. And AI is accelerating our CO2 output when we need to be drastically cutting it back. This is very much a pacifying tool as we barrel toward oblivion.

          • TheFriar@lemm.ee · ↑2 · edited · 7 months ago

            https://www.ft.com/content/61bd45d9-2c0f-479a-8b24-605d5e72f1ab

            https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/

            https://hai.stanford.edu/news/ais-carbon-footprint-problem

            When the world needs to be drastically altering our way of life to avert the worst of climate change, these companies are getting away with accelerating their output and generating tons of investment and revenue because “that’s what the market dictates.” Just like with crypto/blockchain a few years ago, adding “AI” into any business pitch/model is basically printing money. So companies are more inclined to incorporate this machine learning tech into their business, and this is all happening while the energy demand for increased usage and the constant “updates” and advancements in the field gobble up way more energy than we can honestly afford, or really even conceive of, because they’re trying to hide this fact, given, yknow, the world fuckin ending. Basically, the market and the entire system of media are encouraging and fawning over this “leap” in tech when we can’t realistically afford to continue even the habits we had before this market existed. So they are accelerating CO2 output, everyone cheers, and we all ride merrily to the edge of our doom.

            It’s capitalism once again destroying us and the planet for profit. And everyone mindlessly jumps on board, ooh’ing and aww’ing at the stupid new shit they’re doing (while they infringe upon the work of all artists without compensation, driving human creativity out of the job market in favor of saving corporations some scratch by firing their artists and using AI instead). I genuinely can’t really conceive of how people seem so on board with this concept.

            • fukurthumz420@lemmy.world · ↑4 · 7 months ago

              “Cutting-edge technology doesn’t have to harm the planet, and research like this is very important in helping us get concrete numbers about emissions. It will also help people understand that the cloud we think AI models live on is actually very tangible,” says Sasha Luccioni, an AI researcher at Hugging Face who led the work.

              “Once we have those numbers, we can start thinking about when using powerful models is actually necessary and when smaller, more nimble models might be more appropriate,” she says.

              that’s a shame and i’m not surprised at all to see that corporations are using AI for completely unimportant things.

              But one thing to consider is that AI could also lead to solutions that help save the planet, like solving problems with fusion technology. I still believe in science, and I still believe that capitalism is the root of the problem, not the technology itself.

              • TheFriar@lemm.ee · ↑1 · 7 months ago

                I mean, sure, I agree with you. Capitalism is the problem, no question. I would love a job-replacing tech so people could live lives of leisure and art. But… this system is being built for capitalist ends. It’s built by, funded by, and being put in the hands of the exact people causing the problem.

                I agree that in a hypothetical world, machine learning technology could very well help humanity. But the code and money is in the hands of people who aren’t interested in helping humanity.

                I’m no fan of forced labor for basic necessities, and I’m not advocating for that system by any means, but this tech, in this world, will drive the cost of labor down and drive people from the jobs they’ve been forced to rely upon. And it’s taking one of the few job fields where people actually got to express their humanity for their wages: art. Creative writing and design/visual art were among the few fields people actually dreamt of working in, because they offered us a living for creating, for being human. And that tiny outlet of humanity in the vast contrivance of capitalism is being devoured by this tech.

                That’s just one small part of my distrust of “AI.” But the underlying problem is as I stated first, which is that this tech, existing in this world at this point in time, isn’t going to free us. It’s another tool by the ownership class to cut costs, decimate the environment, and drive profit. While also killing the small little sliver of human creativity that was allowed to exist under capitalism.

                So again, hypothetically, yes, the tech could be a force for good and for human liberation from meaningless work. But it’s actually making our work even more meaningless, while sequestering another huge chunk of power for the ruling class. It would be great if it could reach its potential as a force for good. But given everything, that is not how it’s being implemented.

                • fukurthumz420@lemmy.world · ↑3 · 7 months ago

                  your points are completely valid, which is why we really need to start banding together to dismantle the ownership class

                  by

                  any

                  means

                  necessary

                  for the sake of humanity (and all other living things on the planet)

  • pixxelkick@lemmy.world · ↑65 · 7 months ago

    I mean, that’s just how it has always worked, this isn’t actually special to AI.

    Tom Hanks does the voice for Woody in the Toy Story movies, but his brother Jim Hanks has a very similar voice; since Jim isn’t Tom Hanks, he commands a lower salary.

    So many video games and whatnot use Jim’s voice for Woody instead to save a bunch of money, and/or because Tom is typically busy filming movies.

    This isn’t an abnormal situation, voice actors constantly have “sound alikes” that impersonate them and get paid literally because they sound similar.

    OpenAI clearly did this.

    It’s hilarious, because normally fans are foaming at the mouth if a studio hires a new actor who sounds even a little bit different from the prior actor, and no one bats an eye at studios’ efforts to find a new actor who sounds as close as possible.

    Scarlett declined the offer and now she’s malding that OpenAI went and found some other woman who sounds similar.

    Thems the breaks, that’s an incredibly common thing that happens in voice acting across the board in video games, tv shows, movies, you name it.

    OpenAI almost certainly would have won the court case if they were able to produce who they actually hired and said person could demo that their voice sounds the same as Gippity’s.

    If they did that, Scarlett wouldn’t have a leg to stand on in court; she can’t sue someone for having a similar voice to hers, lol.

    • Xhieron@lemmy.world · ↑57 · 7 months ago

      She sure can’t. Sounds like all OpenAI has to do is produce the voice actor they used.

      So where is she? …

      Right.

    • dwindling7373@feddit.it · ↑25 · edited · 7 months ago

      Yes but also no: the whole appeal is tied to her brand (her public image × the character in Her), unlike Woody, who is an original creation.

      It’s like doing a commercial using a lookalike dressed like the original guy and pretending that’s a completely different actor.

      • Chee_Koala@lemmy.world · ↑12 · 7 months ago

        I get that she is grappling with identity and it’s not a clear cut case, but if the precedent is set that similar voices (and I didn’t even think it was that similar in this case) are infringement, that would be a pretty big blow to commercial creativity projects.

        Maybe it’s more a brand problem than an infringement problem.

      • Glowstick@lemmy.world · ↑7 · edited · 7 months ago

        I agreed with OP; then I read your astute response, and now I don’t know which position is correct.

        Thinking it through as i type… If you photoshopped an image of Tom Hanks giving a thumbs up to your product, that would clearly be illegal, but if you hired an exact flawless lookalike impersonator of Tom Hanks and had him pose for a picture with a thumbs up to your product, would that be illegal? I think it might still be illegal, because you purposely hired a lookalike impersonator to gain the benefit of Tom Hanks’ brand.

        I think the law on AI should match what the law says about impersonators. If hiring an indistinguishable celebrity impersonator to use in media is legal, then ai soundalikes should be legal too, and vice versa.

        • lanolinoil@lemmy.world · ↑9 · 7 months ago

          when you get into these nitty gritty copyright/ip arguments you realize it’s all just a house of cards to make capital king and the main ism

        • assassin_aragorn@lemmy.world · ↑5 · 7 months ago

          I think what it comes down to is intention. Are you intending to mimic someone else’s likeness without that person’s permission? That’s wrong. But if you just like someone’s voice and want to use them, and they happen to have a similar likeness, that’s fine.

          Where OpenAI gloriously fucked up is asking Johansson first. If they hadn’t, they would have plausible deniability that they just liked the voice actor’s voice. If it reminds them of Johansson, that’s even fine. What’s wrong is that they specifically wanted her likeness, even after she turned them down.

    • catloaf@lemm.ee · ↑14 · 7 months ago

      The difference is that apparently they asked ScarJo first and she said no. When they ask Tom Hanks (or really his agent, I assume) the answer is “he’s too busy with movies, try Jim”.

          • gaylord_fartmaster@lemmy.world · ↑3 · 7 months ago

            Having a talking woman in your phone is not stealing Scarlett Johansson’s likeness, even if they sound somewhat similar. US copyright law is already ridiculous, and you want to make it even more bullshit?

            By that logic her role in Her was already stealing the voice actor for Siri’s likeness, and she should have sued for that too.

        • afraid_of_zombies@lemmy.world · ↑1 · 7 months ago

          If you don’t own your image what do you own?

          Also, you know, scale. There is a difference between an Elvis impersonation in Vegas and a huge-ass corporation.

          • gaylord_fartmaster@lemmy.world · ↑1 · 7 months ago

            You own the pile of money you earned for the role you played in someone else’s creative project.

            This isn’t Back to the Future Part II making a Crispin Glover face mask and putting it on an extra; it’s using a woman for a voice acting role for an AI speaking from your phone. And somehow that’s stealing from a movie with the same concept, but not stealing from the actual phone AIs voiced by women that existed before the movie.

            • afraid_of_zombies@lemmy.world · ↑1 · 7 months ago

              How would you feel if I made wheelbarrows of money off your face or voice without your consent and without paying you a penny? What about your family? Got any relatives you care about who would look great in my AI-generated porno?

              The world is schizophrenic about this. On one hand we know that data is king and knowing about a person and having access to what they produce is a super important very lucrative field. The biggest companies on earth buy and sell data about people. On the other hand we argue that your image and data has no value and anyone can do what they want with it.

              • gaylord_fartmaster@lemmy.world · ↑1 · 7 months ago

                Then I’d have grounds to sue you for stealing my likeness, just like Crispin Glover did in the example I just gave.

                Are you under the impression that’s what happened here? It isn’t. The voice is clearly not Scarlett Johansson’s, and she doesn’t have any kind of ownership over the concept of an AI in your phone using an upbeat woman’s voice to speak to you.

    • BrianTheeBiscuiteer@lemmy.world · ↑10 · 7 months ago

      Well, in the “soundalike” situation you describe, people were getting paid to voice things. Now it’s just an AI model that’s not getting paid, and the people that made the model probably got paid even less than a soundalike voice actor would. It’s just more money going to the top.

    • athairmor@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      7 months ago

      Scarlett actually would have a good case if she can show the court that people think it’s her. Tom Waits won a case against Frito Lay for “voice misappropriation” when they had someone imitate his voice for a commercial.

    • PrincessLeiasCat@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      7 months ago

      Wouldn’t the difference here wrt Tom/Woody be that Tom had already played the role before so there is some expectation that a similar voice would be used for future versions of Woody if Tom wasn’t available?

      Serious question, I never thought about the point you made so now I’m curious.

  • Chaotic Entropy@feddit.uk
    link
    fedilink
    English
    arrow-up
    58
    ·
    7 months ago

    “We need you to reconsider… because we already did it and we’re just looking for your stamp of approval after the fact.”

  • Alpha71@lemmy.world
    link
    fedilink
    English
    arrow-up
    36
    ·
    7 months ago

    “Yeah, let’s go up against the woman who sued Disney and won. What could go wrong!?”

  • Optional@lemmy.world
    link
    fedilink
    English
    arrow-up
    27
    ·
    7 months ago

    The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.

    It’s just so fitting that Microsoft is the company most fervently wallowing in it.

  • Flying Squid@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    7 months ago

    I hate that I have to keep saying this: no one seems to be talking about the fact that by giving their AI a human-like voice with simulated emotions, it inherently seems more trustworthy and will get more people to believe its hallucinations are true. And then there will be the people convinced it’s really alive. This is fucking dangerous.

  • Rolando@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    7 months ago

    OpenAI should have given some money to the people who own the movie “Her”. Then they could have claimed they were just mimicking the character.

    • k_rol@lemmy.ca
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      7 months ago

      Well it does have some resemblance but other people have voices like her. Are they not allowed to use their voice anymore?

      Edit: I guess not

    • BertramDitore@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      7 months ago

      Knowing people like him, he would probably take the obvious literary warnings from a book like that and use them as inspiration for how to build an even more dystopian nightmare.

      • frezik@midwest.social
        link
        fedilink
        English
        arrow-up
        9
        ·
        7 months ago

        Which this very story proves. The AI voice that they generated was specifically based on “Her”, a movie about a guy who falls in love with an AI voice assistant. I haven’t seen the movie, but I’m going out on a limb to guess this is another “don’t create the torment nexus” situation.

        • aesthelete@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          ·
          7 months ago

          The movie is actually pretty non-dystopian and kind of sweet. It’s basically a romcom, just one with a very creative premise.

  • Cringe2793@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    ·
    7 months ago

    Scarlett Johansson is a troublemaker. “Sounds eerily similar”? It’s not like she has such a unique voice, after all.