• the_q@lemmy.zip · 89 · 3 months ago

    That’s the point, though. When data means nothing, truth is lost. It’s far more sinister than people are aware. Why do you think it’s literally being shoved into every little thing?

    • Kogasa@programming.dev · 14 · 3 months ago

      Capitalizing on a highly marketable hype bubble, because the technology is specifically designed to deceive people into thinking it’s more capable than it is.

    • breecher@sh.itjust.works · 8 · 3 months ago

      It is already making pictorial evidence worthless, which is a scary prospect no justice system has even begun to consider, even though it is literally already happening. Criminals all over the world rejoice: they can be caught in the act on video, and it will be worthless. Of course this applies even more to large-scale criminals like dictators. It will all be “fake news” from now on.

  • Sadbutdru@sopuli.xyz · 47 · 3 months ago

    Right, I’m no expert (and very far from an AI fanboi), but not all “AI” are LLMs. I’ve heard there are good use cases in protein folding and in recognising diagnostic patterns in medical images.

    It fits with my understanding that you could train a similar model on more constrained datasets than ‘all the English language text on the Internet’ and it might be good at certain jobs.

    Am I wrong?
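
    (To illustrate the point: a tiny, self-contained sketch of a “specialized” model trained on a narrow, made-up dataset. All data and labels below are hypothetical — real diagnostic models are far more sophisticated — but the shape of the idea is the same: constrained data in, one narrow job done well, and no language ability at all.)

```python
# Toy nearest-centroid classifier: a "specialized model" trained only on a
# small, constrained set of feature vectors. It can do exactly one job
# (assign a new measurement to the closest class) and nothing else.

def train_centroids(samples):
    """samples: dict mapping label -> list of feature vectors (equal length)."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        # Per-dimension mean of all training vectors for this label.
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, x):
    """Return the label whose centroid is nearest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Entirely made-up "measurements" for two classes.
training = {
    "healthy": [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]],
    "anomalous": [[3.0, 2.8], [2.9, 3.1], [3.2, 3.0]],
}

model = train_centroids(training)
print(classify(model, [1.05, 0.95]))  # → healthy
```

    Swap in real features (pixel statistics, lab values, whatever the narrow task needs) and you have the flavour of “good” ML: no internet-scale text corpus, no generation, just pattern matching within a constrained domain.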

    • alk · 73 · 3 months ago

      You are correct. However, more often than not it’s just like the image describes, and people are actually applying LLMs en masse to random problems.

    • IO 😇 (OP) · 37 · 3 months ago

      what AI, apart from language generators, “makes up studies”?

    • jonne@infosec.pub · 28 · 3 months ago

      Hallucinating studies is, however, very on brand for LLMs as opposed to other types of machine learning.

    • jaredwhite@piefed.social · 15 · 3 months ago

      Technically, LLMs as used in Generative AI fall under the umbrella term “machine learning”…except that until recently machine learning was mostly known for “the good stuff” you’re referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.

      • peoplebeproblems@midwest.social · 4 · 3 months ago

        There is no generative AI. It’s just progressively more complicated chatbots. The goal is to fool the human into believing it’s real.

        It’s what Frank Herbert was warning us all about in 1965.

        • fushuan [he/him]@piefed.blahaj.zone · 1 · 3 months ago

          Chatbots are genAI. Any artificial intelligence like NPCs, autopilot, playing games against the machine, playing chess against the machine… all of those have been called AI.

          GenAI is a subset where the AI generates text or images instead of taking a deterministic option. The name describes pretty well what it does: generate a text or image output, no matter the accuracy. The AI is optimised to generate output that looks like what you would expect for the given input, and generally it does exactly that, even if it hallucinates facts to fit the shape of the response it is supposed to give.

    • baggachipz@sh.itjust.works · 14 · 3 months ago

      That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.

    • Sadbutdru@sopuli.xyz · 6 · 3 months ago

      Obviously that should be in an advisory capacity, and not making decisions (like approving drugs for human use [which I heavily doubt was actually happening]).

    • minnow@lemmy.world · 6 · 3 months ago

      Right. You’re talking about specialized AI that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.

      LLMs are generalized AI which can’t do any of those things. The problem is that what they’re good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialized AI does.

    • Tomassci@sh.itjust.works · 6 · 3 months ago

      The problems with AI we talk of here are mostly with generative AI. Protein folding, diagnostic pattern recognition and weather prediction work a bit differently than image-making or text-writing services.

    • takeda@lemmy.dbzer0.com · 1 · 3 months ago

      Yeah, AI (not LLMs) can be a very useful tool in doing research, but this is about deciding if a drug should be approved or not.

  • skisnow@lemmy.ca · 40 · 3 months ago

    I’m constantly mystified at the huge gap between all these “new model obliterates all benchmarks/passes the bar exam/writes PhD thesis” stories and my actual experience with said model.

    • CheeseNoodle@lemmy.world · 15 · 3 months ago

      Likely those new models are variants trained specifically on the exact material needed to perform those tasks, essentially passing the bar exam as if it were an open-book test.

      • Tomassci@sh.itjust.works · 6 · 3 months ago

        Reminds me of a video that starts with the fact you can’t convince image-generating AI to draw a wine glass filled to the brim. AI is great at replicating the patterns it has seen and been trained on, like full wine glasses, but it doesn’t actually know why or how anything works. It doesn’t know the things we humans know intuitively, like “filled to the brim means more liquid than full”. It knows the what but doesn’t get the why.

        The same could apply to testing: AI knows how to solve test pages, but that wouldn’t carry over exactly if you tried to adapt it to real life.

  • ssillyssadass@lemmy.world · 29 · 3 months ago

    This reminds me of how like a hundred or so years ago people found “miracle substances” and just put them in everything.

    “Uranium piles can level or power a whole city through the power of Radiation, just imagine what good this radium will do inside your jawbone!”

  • FosterMolasses@leminal.space · 21 · 3 months ago

    Literal… I cannot stress this enough… Literal Idiocracy.

    This is literally what happens in the film. Like the first 10 minutes.

    Fuck.

  • Red_October@lemmy.world · 20 · 3 months ago

    I’m pretty sure that undermining confidence in drug approvals is a feature, not a bug. The same people who were screeching about mRNA vaccines being secret poison that was rushed through approval are the ones doing this now, so when (not if) it does actually lead to dangerous drugs being approved and a collapse in confidence in the FDA, they’ll be the ones saying “We told you so” and getting their anti-medical way.

    It’s the exact same playbook Republicans use in the rest of the government: Say Government doesn’t work, cry about government spending, and insist government regulation is crushing personal freedoms, then they actually do all of those things and when the next administration comes around they pass on the blame and say “I Told You So.”

    • Echo Dot@feddit.uk · 2 · 3 months ago

      The FDA needs to get out of the way anyway. So much of what could be done isn’t done because they take their sweet time with decisions.

      The average approval time for a new drug is about a decade, mostly because the FDA just doesn’t do anything for the first 9 and a half years. The covid vaccines were approved in a hot minute, though, and there were absolutely no issues with them despite what the conspiracy theorists thought. In fact they primarily based their conspiracy theory on the fact that normally the FDA takes forever and a day to approve anything. Proving only that it doesn’t need to take that long in the first place.

  • Avicenna@lemmy.world · 12 · edited · 3 months ago

    Yeah, I can say I called it. Instead of using graph neural networks trained for such a purpose (which have some actual chance of making novel drug discoveries), these idiots went and asked ChatGPT.

  • Oxysis/Oxy · 6 · 3 months ago

    I was talking to some friends earlier about LLMs so I’ll just copy what I said and paste it here:

    It really is like a 3D printer in a lot of ways. Marketed as a catch-all solution when in reality there are only a few things it’s actually useful for. Still useful, but not where you’d expect given what it was hyped up to be.

    • garbagebagel@lemmy.world · 2 · 3 months ago

      From what I’ve seen, 3D printers are best at oversaturating local markets with a bunch of useless trinkets. (Just kidding though I know they have legit medical uses but oh my god)

      • Oxysis/Oxy · 2 · 3 months ago

        Yeah, they do create a lot of weird trinkets, but they have plenty of good uses too. They can be used to make props and pieces of an outfit, which is really, really good for cosplayers. They can also have a limited role in engineering, mostly for cheap, easily replaced parts in low-heat, low-stress environments. And like you said, they have medical applications.

        It’s not an insanely useful tool for most people in most conditions, but it’s still a great tool to have in some cases.

      • Obi@sopuli.xyz · 2 · 3 months ago

        I feel that’s even worse with laser cutters and the cricut stuff, just useless trinkets wasting resources and ending up in drawers/gathering dust/in the bin.

  • MDCCCLV@lemmy.ca · 4 · 3 months ago

    For specific things like protein folding, “AI” has been useful, but that’s not just an LLM.

    • boonhet@sopuli.xyz · 4 · 3 months ago

      Yes, machine learning models trained to solve a specific problem can be very good at solving that problem. It’s artificial “general” intelligence we haven’t achieved but are trying to sell.

  • BetaBlake@lemmy.world · 2 · 3 months ago

    So is this a situation where it’s kinda like asking ChatGPT to make you drugs, so it will go to any means necessary (making up studies) to complete the task instead of reaching a wall and saying “I can’t do that because there isn’t enough data”?

    I hope I’m wrong, but if that’s the case then that is next-level stupid.

    • IO 😇 (OP) · 3 · 3 months ago

      No, it’s supposed to help the drug approval workers, but everything it says has to be double-checked, so it ends up wasting time. I put the article into the post now.