I think AI is neat.

  • @skilltheamps@feddit.de
    5 months ago

    Yes. But the more advanced LLMs get, the less it matters, in my opinion. I mean, if you have two boxes, one of which is actually intelligent and the other is “just” a very advanced parrot, it doesn’t matter, given that they produce the same output. I’m sure that LLMs can already surpass some humans, at least in certain disciplines. In a couple of years the difference between a parrot-box and something actually intelligent will only show at the very fringes of massively complicated tasks. And that is way beyond the capability threshold that lets people do nasty stuff with it, to shed a dystopian light on it.

    • Dr. Jenkem
      5 months ago

      I mean, if you have two boxes, one of which is actually intelligent and the other is “just” a very advanced parrot, it doesn’t matter, given that they produce the same output.

      You’re making a huge assumption: that an advanced parrot produces the same output as something with general intelligence. And I reject that assumption. Something with general intelligence can produce something novel; an advanced parrot can only repeat things it’s already heard.
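      To make “advanced parrot” concrete: the crudest possible one is a Markov chain over text, which can only ever emit word-to-word transitions it has already seen. Here’s a minimal toy sketch in Python (purely illustrative, my own example; real LLMs are vastly more sophisticated):

      ```python
      import random
      from collections import defaultdict

      # Toy "advanced parrot": a bigram model. Every transition it can
      # produce was seen verbatim somewhere in its training text.
      corpus = "the sky is blue . the grass is green . the sky is clear .".split()

      transitions = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          transitions[prev].append(nxt)

      def parrot(word: str, length: int = 10) -> str:
          out = [word]
          for _ in range(length):
              if word not in transitions:
                  break
              word = random.choice(transitions[word])  # repeats only what it has heard
              out.append(word)
          return " ".join(out)

      print(parrot("the"))  # e.g. "the grass is green . the sky is blue . the"
      ```

      Note that even this toy recombines seen fragments into sentences that never appeared in its training text (e.g. “the sky is green”), which is exactly where the disagreement lies: whether recombination at sufficient scale counts as producing something novel.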

        • @rambaroo@lemmy.world
          5 months ago

          LLMs can’t produce anything without being prompted by a human. There’s nothing intelligent about them. Imo it’s an abuse of the word intelligence since they have exactly zero autonomy.

    • @Kecessa@sh.itjust.works
      5 months ago

      The difference is that you can throw enough bad info at it that it will start parroting that instead of factual information, because it doesn’t have the ability to criticize the information it receives, whereas a human can be told that the sky is purple with orange dots a thousand times a day and will still point at the sky and tell you “No.”
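      You can see that mechanically with the same kind of toy bigram parrot sketched earlier in the thread: train it only on bad info and the output simply follows the data, because nothing in the model evaluates a claim (again a purely illustrative sketch of my own, not how real LLM training works):

      ```python
      import random
      from collections import defaultdict

      # The toy bigram parrot again, this time fed nothing but bad info.
      # There is no step where it criticizes the input; output follows data.
      bad_corpus = ("the sky is purple with orange dots . " * 100).split()

      transitions = defaultdict(list)
      for prev, nxt in zip(bad_corpus, bad_corpus[1:]):
          transitions[prev].append(nxt)

      word, out = "the", ["the"]
      for _ in range(8):
          word = random.choice(transitions[word])
          out.append(word)

      print(" ".join(out))  # "the sky is purple with orange dots . the"
      ```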

      • R0cket_M00se
        5 months ago

        To make the analogy actually comparable, the human in question would need to be learning about it for the first time (which is what’s analogous to the training data), and in that case you absolutely could convince a small child of that. Not only would they believe it if told enough times by an authority figure, you could also convince them that the colors we see are different, or anything else along the lines of giving them bad data.

        A fully trained AI will tell you that you’re wrong if you tell it the sky is orange; it’s not going to just believe you and start claiming it to everyone else it interacts with. It’s been trained to know the sky is blue and won’t deviate from that without having its training data modified. That is like brainwashing an adult human, and in that case, yeah, you absolutely could have them convinced the sky is orange. We’ve got plenty of information on gaslighting, high-control groups, and POW psychology to back that up, too.

        • @Kecessa@sh.itjust.works
          5 months ago

          Feed an LLM new data that’s false and it will regurgitate it as being true, even if it had previously been fed information that contradicts it; it doesn’t tell the difference between the two, because there’s no actual analysis of what’s presented. Heck, even without intentionally feeding them false info, LLMs keep inventing fake information.

          Feed an adult new data that’s false and they’re able to analyse it and make deductions based on what they already know.

          We don’t compare it to a child or to someone who was brainwashed, because it makes no sense to do so and it’s completely disingenuous. “Compare it to the worst so it has a chance to win!” Hell no, we need to compare it to the people who are references in their field, because people will now be using LLMs as a reference!