People Are Increasingly Worried AI Will Make Daily Life Worse: A Pew survey finds that a majority of Americans are more concerned than excited about the impact of artificial intelligence, adding weight to calls for more regulation.

    • @captainlezbian@lemmy.world
3 • 10 months ago

Not really. It hallucinates so much that I don’t use it for factual information, and it has massive glaring issues in applications like driverless cars. I suppose something like a driverless train would be nice, but that’s not something I expect anytime soon. I suspect I’ll be told to like it when it’s used to try to get me to consume more.

Maybe better AI in video games will be nice.

Maybe I’ve just become a cranky old lady, but while I can acknowledge actual theoretical value in it, when I hear AI hype it feels at worst like listening to crypto bros, and at best like listening to an executive telling me I need to implement lean manufacturing while plugging their ears when I want to discuss the costs and risks.

        • @wizardbeard@lemmy.dbzer0.com
3 • 10 months ago

If there were evidence that AI was actually heading in that direction, that this direction was where society wanted AI to go, and that there was a shared understanding that we absolutely aren’t there yet… I’d be significantly more optimistic.

My problem is that currently, machine learning and expert systems are being quietly implemented by a number of companies, at best to improve their own commercial offerings and at worst to cut their human-staffed support teams to ribbons. Nearly everyone can relate to the frustration of seeking support from an automated system instead of a human. Those situations have continued to get worse, not better, as this tech has grown.

Additionally, thanks to how convincing LLMs are at appearing intelligent, they’ve become a fad rather than being evaluated and appreciated for what they actually are. There are countless startups now just trying to cash in on the hype by using the ChatGPT API to offer products that shove GPT at all sorts of entirely unsuitable use cases.

Lastly, there are a good many issues with the currently most popular AI tech, LLMs, that the industry appears to have no intention of addressing in good faith: the complete disdain for copyright, IP, or even fair use when it comes to the data the models have been trained on; the recent articles stating that removing material from a dataset would effectively require rebuilding the LLM; the lack of any method to get true sources for the data used in responses; the lack of reproducibility of responses; and the lack of any auditability of these systems, because that would jeopardize the “secret sauce” or is simply impossible on a technical level. And when most people raise these points, they get shouted down by the “true believers” as just not understanding the technology, rather than engaged with in good faith. If you have concerns, you’re either stupid or against technological advancement. Don’t you see all the good this could potentially do in the future, even though it isn’t doing it yet?

          I would love to have the type of trustworthy, helpful digital assistant it sounds like you’re describing. I’ve wanted that technology for well over a decade. We’re just not there yet.

        • @captainlezbian@lemmy.world
2 • 10 months ago

That sounds really nice, and we get to the root of my issue here: I don’t think that’s what will happen. I’m not saying to ban the stuff or anything, but when I see how it’s being sold to investors, I’m not seeing reasonable, achievable plans of action that benefit everyone. I’m seeing gimmicks, ads, and moonshots, all while the dishonest are getting a lot out of it. At its most effective, I see it being a means to increase the power of the capital-holding class, because that’s who’s investing in it, and I don’t think training such things will get cheaper.

And yes, I expect them to improve, but I’m also concerned about methodological failures. I’m not saying it’ll never make life better, but right now, in 2023, I’m not impressed by what I’m seeing. And that’s before I get into the tendency for trends like this to blind policymakers and business leaders. Hyperloop was sold as being for autonomous vehicles and was specifically designed not to be cheaply convertible to a known better solution. The whole fucking cloud computing craze comes to mind as well.

I will cede one thing here, though: I do think it has a lot of room for use as one of many engineering tools to help with the design process. Being able to directly compare against known optimization methods will always be useful, and if it can automatically plug a layout or process into a model, that would be nice. Idk if I expect that to work as well as everyone seems to think, though.

I guess I just don’t trust the tech industry anymore. When I see something like LLMs, it seems gimmicky as hell, and a lot of the early adoption is either minor or harmful. I see driverless cars getting priority over public transit over and over, despite the fact that they’ve been “five years away” since I was a kid. I see people talking about using AI to help fight climate change, from the same people who won’t quit meat. Meanwhile surveillance increases, wages stay stagnant, and the world keeps getting hotter. Contrary to how I sound, I love technology; I’m an engineer for a reason. But there are just so many reasons to be skeptical of it. So yeah, enjoy your hype. If it winds up useful for someone like me, I’ll try it. But I’m not buying into the hype, and I’ll stay skeptical until I start seeing actual results.