Users of OpenAI’s GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.

    • conditional_soup@lemm.ee · 17 points · 1 year ago

      This isn’t sustainable. They’re banking on nobody else being able to achieve GPT-4-like quality, and with us basically sitting near the bottom of the vertical part of the growth curve, that’s a bit like betting that nobody will ever build a car that beats the Model T’s performance. Meta is trying to tackle very large language models the same way they got React to be so good and widely supported: by taking them open source. Google, on the other hand, is working on running LLMs natively on phones and tablets. And that’s not to speak of the fully open source models. Yeah, running a 1.6-trillion-parameter GPT-based LLM is fucking expensive and difficult to replicate, but newer, more efficient techniques are popping up around LLMs at a dizzying pace. It’s only a matter of time before someone comes up with something at least as good as GPT-4.

      • jrs100000@lemmy.world · 9 points · 1 year ago

        A popular venture capital backed tech project with an unsustainable business model? Now I’ve heard everything. /s

        • conditional_soup@lemm.ee · 4 points · 1 year ago

          Yeah, that’s just crazy talk. Next you’re going to tell me that they’re going to start hand crafting bills and spending millions in advertising to get them passed.

    • Quokka@quokk.au · 14 points · 1 year ago

      Good, they should be separate.

      You don’t want a medical llm trained on Internet memes or a coding llm trained to write poetry. Specialisation exists for a reason.

      • brsrklf@compuverse.uk · 4 points · 1 year ago

        Honest question, why would you want a medical LLM anyway? Other kinds of AI, sure, like diagnosis help through pattern learning on medical imaging, etc, that I can understand.

        How is a language based approach that completely abstracts away actual knowledge, and just tries to sound “good enough” any kind of useful in a medical workflow?

        • Muehe@lemmy.ml · 1 point · 1 year ago

          How is a language based approach that completely abstracts away actual knowledge, and just tries to sound “good enough” any kind of useful in a medical workflow?

          An LLM cross-referencing a list of symptoms against papers and books could be helpful, for example. There is so much medical literature available these days, and in so many languages, that no one person can hope to gain even a rough overview, much less keep up with all the new work coming out.

          Of course this should only be used to assist a trained medical professional, as all neural networks are prone to hallucinations. You should also double-check the results of NNs that interpret medical images; they may straight-up hallucinate, or just pick up on correlation instead of causation (say, all the cancer images in your training set carrying a watermark from the same lab or equipment manufacturer).
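
The watermark pitfall described in that comment can be shown with a toy sketch. All feature names and data below are synthetic and purely illustrative, assuming a naive model that simply keys on whichever feature correlates most with the label:

```python
# Toy illustration of spurious correlation: a "classifier" that picks the
# single feature most correlated with the label will latch onto a dataset
# artifact (here, a lab watermark present on every positive training image)
# instead of the genuine, noisier signal. Synthetic data only.
import random

random.seed(0)

def make_sample(cancer: bool) -> dict:
    # "tissue_density" is the real (noisy) signal; "watermark" is an
    # artifact: every cancer image in this set came from the same lab.
    return {
        "tissue_density": (0.7 if cancer else 0.3) + random.uniform(-0.25, 0.25),
        "watermark": 1.0 if cancer else 0.0,  # perfectly correlated artifact
        "label": 1 if cancer else 0,
    }

train = [make_sample(i % 2 == 0) for i in range(200)]

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

labels = [s["label"] for s in train]
scores = {
    feat: abs(correlation([s[feat] for s in train], labels))
    for feat in ("tissue_density", "watermark")
}
# The artifact wins: the watermark correlates perfectly with the label,
# beating the noisy real signal, so a correlation-chasing model keys on it.
best_feature = max(scores, key=scores.get)
```

On held-out images from other labs, a model built this way would fail completely, which is why the comment's advice to double-check such systems holds.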

        • Cheers@sh.itjust.works · 0 points · 1 year ago

          I work in the assisted living field. There’s frequently one nurse tending 40+ beds for 8 hours. If the next nurse is late, that’s one nurse for 8+ hours until the next one shows. You can bet your ass that nurse isn’t providing high-quality medical advice 12 hours into a shift. An AI can take an impartial perspective and output a baseline level of advice to help keep the wheels moving.

      • Snapz@lemmy.world · 1 point · 1 year ago

        This isn’t a person, it’s a machine. It doesn’t have the same limitations. Higher compute cost, but it can do multiple things at once.

        It’s not good if it’s creating artificial demand and leading to less accessibility and higher costs.

    • btaf45@lemmy.world · 8 points · 1 year ago

      A lot of people in the media are routinely confused about the difference between AI and ordinary software. They have started to call all software “AI” now.

      • joshinya@lemmy.world · 4 points · edited · 1 year ago

        Can you quantify the difference? As far as I can tell, there’s just an imaginary line where software becomes AI once the logic it depends on to operate is sufficiently complex. The term doesn’t really seem to be a useful categorization either, given e.g. the fundamentally different approaches of diffusion models and transformer models.

    • RIotingPacifist@lemmy.world · 6 points · 1 year ago

      But the only thing they’re actually good at is generating language; when they try to pretend to knowledge in other fields, they’re quickly exposed as frauds.

      • MercuryUprising@lemmy.world · 4 points · 1 year ago

        It doesn’t even “know” language. Every time I see it write a poem, it reads like something a 3rd grader would come up with. At the end of the day, language is a way to explain your experience, and an LLM doesn’t have experiences.

      • vacuumflower@lemmy.sdf.org · 2 points · 1 year ago

        Ah, yes, when I was a kid, I would try to read big texts I understood nothing of and imitate something similar. I thought it made me smarter.

        In some sense it did - probabilities of certain words being connected in a certain way, if you make some connection between them and real entities, are useful.

        I mean, it did work at school: just pour out some filler without turning on your brain. I sometimes start talking like this when I panic after a question.

      • Hello Hotel@lemmy.world · 1 point · edited · 1 year ago

        I can’t express my disappointment with ChatGPT. They let loose a bot that makes content farms shriek with joy, but it messes up basic things when there is no well-trodden answer, won’t give you non-mainstream answers (you’ve likely already seen whatever it tells you is “really obscure anime”), and genuinely has no tolerance for error, from you or from itself.

        • Chaotic Entropy@feddit.uk · 3 points · edited · 1 year ago

          I think the fact that they are sitting on that sweet, sweet first-to-market money consoles them somewhat.

    • jcit878@lemmy.world · 2 points · 1 year ago

      Yeah, this makes more sense. Companies aren’t just going to buy a licence for GPT-6 and replace 80% of their staff with an off-the-shelf solution; rather, I expect AIs will be trained specifically for certain industries and tasks, and will drive efficiencies.

      • vacuumflower@lemmy.sdf.org · 1 point · 1 year ago

        Reminds me of “Ananke” by Lem. How stupid we were to believe that this particular cause of catastrophe is architecturally impossible in computing.