TL;DR: LLMs are just mimicking natural language and conversation. Fact checking and healthy skepticism are not part of their model. For example, they can be easily tricked into advocating conspiracy theories, like a faked moon landing. Google Bard has even asserted arithmetic falsehoods, such as claiming that 5*6 != 30.
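To put the arithmetic example in perspective: claims like that are trivially verifiable outside the model. A minimal Python sketch (the `model_claim` string is a made-up stand-in for a model's output, not any real API response) that checks a claimed product before trusting it:

```python
import re

# Hypothetical model output; in practice this would come from a chat API response.
model_claim = "5 * 6 = 32"

# Pull out the operands and the claimed result, then redo the arithmetic ourselves.
match = re.fullmatch(r"\s*(\d+)\s*\*\s*(\d+)\s*=\s*(\d+)\s*", model_claim)
if match:
    a, b, claimed = (int(g) for g in match.groups())
    if claimed != a * b:
        print(f"Model is wrong: {a} * {b} = {a * b}, not {claimed}")
    else:
        print("The claimed product checks out.")
else:
    print("Could not parse an arithmetic claim from the output.")
```

The point is only that the verification has to happen outside the model; the model itself has no such checking step.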

  • taj@lemmy.ml · 1 year ago

    This is what I keep telling my friends who use them to ‘write research papers/articles’. It’s just a bunch of BS that I don’t trust.

    Thanks, but I’m going to continue to research and look up my own info.

    • Lemdee@lemmy.world · 1 year ago

      I tried to use ChatGPT to help speed up my writing. For example, I asked it to “give me a brief history of Queen Dido’s relationship with Caesar” while working on a character inspired by Queen Dido, and it gave me completely false information about some other historical figure.

      Until it gets more trustworthy in that regard, I’m not sure how effective it is for writing assistance, let alone writing an entire paper by itself.

      • liontigerwings@sh.itjust.works · 1 year ago

        This might sound funny, but try Bing Chat instead. ChatGPT uses GPT-3.5, which is prone to “hallucination”, as they call it. Bing uses GPT-4 and also shows sources. I found it to be generally better at everything, especially since it has access to the internet, so you can look up stuff past 2021.
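        As an aside, the model switch is the whole difference on the API side. A minimal sketch of comparing the two models’ answers via the OpenAI Python client (this assumes the `openai` v1+ package and an `OPENAI_API_KEY` environment variable; Bing Chat itself isn’t exposed this way, so it only contrasts GPT-3.5 with GPT-4 on the Queen Dido question from above):

        ```python
        from openai import OpenAI  # pip install openai (v1 or later)

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        prompt = "Give me a brief history of Queen Dido's relationship with Caesar."

        # Ask both models the same question and compare the answers by hand.
        for model in ("gpt-3.5-turbo", "gpt-4"):
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"--- {model} ---")
            print(response.choices[0].message.content)
        ```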

  • Peanut@sopuli.xyz · 1 year ago

    not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?

    i think we’re fairly confident now that they can do structured reasoning to some degree. it is not flawless in that it might not give you real or accurate information every time, but we are also figuring out the contexts behind that. as for spreading misinformation, anything intentional prompted to be incorrect is irrelevant to gauging intelligence. unintentional results don’t necessarily mean it’s unintelligent either.

    there’s a really good document on this aspect as well.

    https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post

    there are a lot of ethical and technical aspects of LLMs that are severely underdeveloped, but that shouldn’t be a surprise to anyone. i don’t think any of that would suggest that it’s reasonable to disregard the absurd pace of development this past decade, and last few years especially. good thing we have a sudden surge of attention towards developing these things.