Just in case someone doesn’t know, LLM in this case means “Large Language Model”, which is just the technical term for things like ChatGPT.

  • AtmaJnana@lemmy.world
    10 months ago

    Did you read my comment at all? I was replying to a comment about the level of effort, which is what my analogy addresses.

    Your hyperbole notwithstanding, if the accuracy isn't good enough for you, don't use it. Lots of people find that LLMs are useful even in their current state of imperfect accuracy.

    • DoYouNot@lemmy.world
      10 months ago

      Did you read mine? If you wanted a depiction of a city, it’s more than good enough. In fact it’s amazing what it can do in that respect. My point is: it gets major details wrong in a way that feels right. That’s where the danger lies.

      If your GPS consistently brought you to the wrong place, but you thought it was the right place, do you not think that might be a problem? No matter how many people found it useful, it could be dangerously wrong in some cases.

      My worry stems precisely from how useful people find it for “looking things up”, paired with its tendency to wildly construct ‘information’ that feels true. That’s a real, serious problem people need to understand when using it that way.