People Are Increasingly Worried AI Will Make Daily Life Worse

A Pew survey finds that a majority of Americans are more concerned than excited about the impact of artificial intelligence, adding weight to calls for more regulation.

  • afraid_of_zombies@lemmy.world · 8 points · 1 year ago

    How can having more tools to solve problems make things worse? I can’t think of any problem in my life that more tools and methods would work against solving it.

    • FireTower@lemmy.world · 18 points · 1 year ago

      “[Eli] Whitney believed that his cotton gin would reduce the demand for enslaved labor and would help hasten the end of southern slavery. Paradoxically, the cotton gin, a labor-saving device, helped preserve and prolong slavery in the United States for another 70 years.”

    • Sharkwellington@lemmy.one · 18 points · 1 year ago

      How can having more tools to solve problems make things worse?

      Depends on who’s using it and for what purpose.

    • captainlezbian@lemmy.world · 18 points · 1 year ago

      What problems will get solved? “Our ads aren’t effective enough? We have to pay people to do things when we could be putting it into profits? We’re charging less rent than we think people will pay, but we don’t know how much? People have gotten savvy to my latest scam.”

      The capital-holding class will be the ones using AI to their benefit. The dishonest will join them. We may get some concessions here and there, but they own them.

        • captainlezbian@lemmy.world · 3 points · 1 year ago

          Not really. It hallucinates so much I don’t use it for factual information. It has massive glaring issues in applications like driverless cars. I suppose that applications like a driverless train would be nice but it’s not something I expect anytime soon. I suspect I’ll be told to like it when it tries to get me to consume more.

          Maybe better AI in video games will be nice.

          Maybe I’ve just become a cranky old lady, but while I can acknowledge actual theoretical value in it, when I hear AI hype it feels at worst like listening to crypto bros, and at best like listening to an executive telling me I need to implement lean manufacturing while plugging their ears when I want to discuss the costs and risks.

            • wizardbeard@lemmy.dbzer0.com · 3 points · 1 year ago

              If there were evidence that AI was heading in that direction at all, that the direction was where society wanted to take AI, and that there was an understanding that we absolutely aren’t there yet… I’d be significantly more optimistic.

              My problem is that currently, Machine Learning and Expert Systems are being quietly implemented by a number of companies, at best to improve their own commercial offerings and at worst to cut their human-staffed support teams to ribbons. Nearly everyone can relate to the frustration of seeking support from an automated system instead of a human. Those situations have continued to get worse, instead of better, as this tech has grown.

              Additionally, thanks to how convincing LLMs are at appearing intelligent, they’ve become a fad rather than being evaluated and appreciated for what they actually are. There are countless startups now trying to cash in on the hype by using the ChatGPT API to offer products that shove GPT at all sorts of entirely unsuitable use cases.

              Lastly, there are a good number of issues with the currently most popular AI tech, LLMs, that the industry appears to have no intention of addressing in good faith: the complete disdain for copyright, IP, or even fair use when it comes to the data the models have been trained on; the recent articles stating that removing material from a dataset would require effectively rebuilding the LLM; the lack of any way to get true sources for the data used in responses; the lack of reproducibility of responses; and the lack of any auditability of these systems, because that would jeopardize the “secret sauce” or is simply impossible on a technical level. And when most people discuss this they get shouted down by the “true believers” as just not understanding the technology, rather than getting any attempt at discussion in good faith. If you have concerns you’re either stupid or against technological advancement. Don’t you see all the good this could potentially do in the future, even if it isn’t doing it yet?

              I would love to have the type of trustworthy, helpful digital assistant it sounds like you’re describing. I’ve wanted that technology for well over a decade. We’re just not there yet.

            • captainlezbian@lemmy.world · 2 points · 1 year ago

              That sounds really nice, and it gets to the root of my issue here: I don’t think that’s what will happen. I’m not saying to ban the stuff or anything, but when I see how it’s being sold to investors, I’m not seeing reasonable and achievable plans of action that benefit everyone. I’m seeing gimmicks, ads, and moonshots, all while the dishonest are getting a lot out of it. At its most effective, I see it being a means to increase the power of the capital-holding class, because that’s who’s investing in it, and I don’t think training such things will get cheaper.

              And I expect them to improve, yes, but I’m also concerned with methodological failures. And I’m not saying that it’ll never make life better, but right now in 2023 I’m not impressed by what I’m seeing. And that’s before I get into the tendency for trends like this to blind policymakers and business leaders. Hyperloop was sold as being for autonomous vehicles and was specifically made to not be cheaply convertible to a known better solution. The whole fucking cloud computing craze comes to mind as well.

              I will cede one thing here though. I do think it has a lot of room for use as one of many engineering tools to help with the design process. Being able to directly compare to known optimization methods is always going to be useful and if it can automatically plug a layout or process into a model it would be nice. Idk if I expect that to happen as well as anyone seems to think though.

              I guess I just don’t trust the tech industry anymore. When I see something like LLMs it seems gimmicky as hell, and a lot of early adoption is either minor or harmful. I see driverless cars getting priority over public transit over and over, despite the fact that they’ve been 5 years away since I was a kid. I see people talking about using AI to help the fight against climate change from the same people who won’t quit meat. Meanwhile surveillance increases, wages stay stagnant, and the world keeps getting hotter. Contrary to how I sound, I love technology. I’m an engineer for a reason. But there are just so many reasons to feel skeptical of it. So yeah, enjoy your hype. If it winds up useful for someone like me, I’ll try it. But I’m not buying into the hype, and I’ll be skeptical of it until I start seeing actual results.

        • captainlezbian@lemmy.world · 7 points · 1 year ago

          You aren’t the only one with access to these tools. Yeah, if I and I alone had AI, that wouldn’t be bad. But the people who used to run Nigerian Prince scams now have AI. Advertisers have AI. The bosses who want to cut jobs have AI. The cops who want to ensure there are no revolts from the folks getting fucked by the system have AI. So I don’t think I can get nearly as much out of it as the people who want to use it in ways that could negatively impact me will. So I’m not excited for it or happy about it, and I’m terrified, because the people who seem really excited about it seem blind to its weaknesses.

    • Kalothar@lemmy.ca · 8 points · 1 year ago

      Well, for one, this clearly creates more mechanisms to exploit the poor. Especially if we choose to regulate it as slowly as we have other tech in the past.

    • BrianTheeBiscuiteer@lemmy.world · 4 points · 1 year ago

      If you manage to keep your job then sure, you’ll be way more efficient. I guess AI will help you with your job search and resume if you’re laid off, but maybe companies won’t need as many people as they used to. 🤷

      • coheedcollapse@lemmy.world · 2 points · edited · 1 year ago

        I don’t know if it’s just me or what, but I don’t think AI, and eventually androids, replacing humans doing awful grunt work is really the bad part. The bad part is a system that refuses to figure out a way to tax corporations using AI in order to support those displaced workers.

        • BrianTheeBiscuiteer@lemmy.world · 1 point · 1 year ago

          For decades it’s been the grunt work that was automated and outmoded. Suddenly it’s highly educated individuals that are nearing the chopping block.

        • dustyData@lemmy.world · 2 points · 1 year ago

          No, the point of AI is not that you work better, faster and more efficiently. The point of AI is that you will not be necessary anymore.

            • dustyData@lemmy.world · 2 points · 1 year ago

              Ask the thousands of information workers, some of whom might’ve thought the same as you, who no longer have a job because they were laid off when their manager got swindled by OpenAI’s marketing.