• dustyData@lemmy.world
    2 days ago

    See, that’s your problem. You’re arguing, with me, about something that was said to you by someone else. Do you realize why I’m questioning your argumentative skills?

    Here’s a link to a study about AI’s accuracy as a search engine. The main use case proposed for LLMs as a tool is indexing a bunch of text, then summarizing and answering questions about it in natural language.

    AI Search Has A Citation Problem
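
    To make that “index, then summarize/answer” pattern concrete, here’s a rough sketch of what such a tool does. This is just my own illustration, not from the study: the keyword-overlap retrieval is deliberately naive, and ask_llm is a hypothetical placeholder, not any real library’s API.

```python
# Rough sketch of the "index a bunch of text, then answer questions about it"
# use case. ask_llm is a hypothetical placeholder, not a real client call.

def ask_llm(prompt: str) -> str:
    # Swap in whatever model/client you actually use.
    raise NotImplementedError("plug in a real LLM client here")

def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval: rank documents by how many of the
    question's words they contain and return the top k."""
    q_words = set(question.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def answer(question: str, documents: list[str]) -> str:
    # Stuff the most relevant chunks into the prompt, then ask the model
    # to answer in natural language using only that context.
    context = "\n\n".join(retrieve(question, documents))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return ask_llm(prompt)
```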

    Another use is creating or modifying text based on an input or prompt; however, LLMs are prone to hallucinations. Here’s a deep dive into what they are, why they occur, and the challenges of dealing with them.

    Decoding LLM Hallucinations: A Deep Dive into Language Model Errors

    I don’t know why I even bother. You are just going to ignore the sources and dismiss them as well.

    • uranibaba@lemmy.world
      2 days ago

      See, that’s your problem. You’re arguing, with me, about something that was said to you by someone else. Do you realize why I’m questioning your argumentative skills?

      I’m sorry? You came to me.

      Here is how I see it:

      1. Someone compared AI to calculator/calendar
      2. I said you cannot compare that
      3. You asked why I even argue with the first person
      4. I said that I want a better discussion
      5. You said that I should stop dismissing other people’s arguments
      6. I tried to explain why I don’t think it is a valid argument to compare an LLM to “calculator can do reliably. or a timer. or a calendar.”
      7. From what I understand, you did not agree with me on that.
      8. And now we are here.

      Here’s a link to a study

      I don’t have the time to read the articles now, so I will have to do it later, but hallucinations can definitely be a problem. Asking for code is one such situation, where an LLM can just make up functions that do not exist.
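
      To show what I mean, here is a toy example (made up by me, not taken from either article): the call below reads like a perfectly plausible standard-library function, but Python’s datetime module has nothing called next_business_day, which is exactly the kind of thing an LLM will confidently invent.

```python
import datetime

# A call an LLM might confidently suggest: it reads plausibly,
# but the standard library's datetime module has no such function.
print(hasattr(datetime, "next_business_day"))  # False -> it was made up

try:
    datetime.next_business_day(datetime.date.today())
except AttributeError as exc:
    print(f"Hallucinated API: {exc}")
```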