A “natural language query” search engine is what I need sometimes.

Edit: directly reachable with the !ai bang

  • EveryMuffinIsNowEncrypted · 8 months ago

    I think that’s a little unfair: not everyone has the know-how to verify claims, and not everyone who can has the know-how to do original research on every potential topic they want to learn about.

    If we all went by your logic here, none of us would put any stock in books, essays, encyclopedias, anything.

    Yes, comprehending what you read is important, but expecting everyone to do original research on everything they want to learn is just not practical.

    AI can be a valuable tool, in addition to critical thinking skills, if used properly.

    • BrikoX@lemmy.zip · 8 months ago

      You are missing the point. You don’t have to become a subject expert to verify information. Not all sources are equal: some are incorrect on purpose, some are incorrect due to lax standards. As a thinking human being, you can decide to trust one source over another. But an LLM treats all the information it was trained on as 100% correct, so it can generate factually incorrect information while presenting it to you as 100% factually correct.

      Using LLMs as a shortcut to find something is like playing Russian roulette: you might get correct information 5 times out of 6, but that sixth answer is guaranteed to be incorrect.
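      As a side note, the compounding risk in that analogy can be made concrete. Assuming, purely hypothetically, that each answer is independently correct 5 times out of 6, the chance that every answer in a session is correct falls off quickly:

      ```python
      # Hypothetical sketch: if each answer is independently correct
      # with probability 5/6, how likely is it that n answers in a row
      # are all correct?
      p_correct = 5 / 6

      for n in (1, 5, 10, 20):
          all_correct = p_correct ** n
          print(f"{n:2d} answers: {all_correct:.1%} chance all are correct")
      ```

      With these assumed numbers, five answers in a row are all correct only about 40% of the time, and twenty in a row under 3%; both the independence assumption and the exact rate are illustrative, not measured LLM error rates.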

    • Vendetta9076@sh.itjust.works · 8 months ago

      If you think that LLMs are anything like encyclopedias, you fundamentally misunderstand what an LLM is. It’s a storyteller. It’s not designed to be right; it’s designed to be engaging.

      Encyclopedias are designed to be knowledge bases: things you can rely on to give correct answers. LLMs are not. They can be pushed in that direction, but their very foundation is antithetical to it, which makes them very hard to trust.

      • EveryMuffinIsNowEncrypted · 8 months ago

        If you think that LLMs are anything like encyclopedias, you fundamentally misunderstand what an LLM is.

        I never said I think they’re anything like encyclopedias; I said that being so skeptical that you feel you have to personally verify every little thing you hear or read or watch would be akin to not trusting second- or third-party sources, such as encyclopedias, books, essays, documentaries, expert opinions, etc.

        It’s a storyteller. It’s not designed to be right; it’s designed to be engaging.

        That heavily depends on how it’s designed and for what purpose. It is not a hard-and-fast rule.

        their very foundation is antithetical to that and it makes them very hard to believe.

        Current iterations maybe. But future iterations will improve. As they say, it’s a learning process.

        • Vendetta9076@sh.itjust.works · 8 months ago

          That heavily depends on how it’s designed and for what purpose. It is not a hard-and-fast rule.

          Every current LLM is built this way, so it is a hard-and-fast rule.

          Current iterations maybe. But future iterations will improve. As they say, it’s a learning process.

          I’m only talking about current iterations. No one here knows what the next iterations will be, so we can’t comment on them. And right now it’s incredibly foolish to believe what an LLM tells you. They lie, like a lot.

          • EveryMuffinIsNowEncrypted · 8 months ago

            Every current LLM is built this way, so it is a hard-and-fast rule.

            No, that is a trend, not a rule, and I would argue the trend doesn’t even hold 100% of the time. Claude, in my experience of using it, seems designed to be more conversational and factual, not strictly entertaining.

            And right now it’s incredibly foolish to believe what an LLM tells you. They lie, like a lot.

            I never said you should believe everything an LLM says. Of course a critical mind is important, but one can’t just assume every answer an LLM gives is wrong, either, simply because it’s an LLM. Especially at this stage of LLM development: the technology is still maturing, still in its infancy.

            I’m only talking about current iterations. No one here knows what the next iterations will be, so we can’t comment on them.

            Generally, the more a technology matures out of its infancy, the better it becomes at the job it’s designed for. If an AI is designed to be entertaining, then yes, it will get better at that over time; but likewise if it’s designed for factual accuracy. And I’ve already said what I think about the current state of development in that regard.

            Therefore, I think it’s a reasonable assumption that as time goes on, the frequency of hallucinations will go down. We’re still working out the kinks, as it is.

            • Vendetta9076@sh.itjust.works · 8 months ago

              Rule or trend, whichever word you use, it’s semantics at this point. And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way, and we have proof of it.

              If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

              We can sit down and speculate all day about what could be, but that has no bearing on what is, which is the entire point of this discussion.

              • EveryMuffinIsNowEncrypted · 8 months ago

                Rule or trend, whichever word you use, it’s semantics at this point.

                Hardly. There is a very clear distinction between a rule & a trend.

                And your experience is irrelevant to the facts of how all current LLMs are built. They are all built the same way, and we have proof of it.

                They are not all built the same, though. Claude, for instance, is built with a framework of values called “Constitutional AI”. It’s not perfect, as the developers even state, but it is a genuine step in the right direction compared to many of its contemporaries in the AI space.

                If you talk to someone and you know they lie to you 10% of the time, would you ever take anything they say at face value?

                Humans are not tools that can be improved upon; they are sentient beings with conscious choice. LLMs are the former, not the latter.

                They are not 1:1 comparisons as you claim.