It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It doesn’t generate them well on its own, but if I give it the bulk of the info, it makes them pretty af.

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • Haus
    11 months ago

    I’ve had a nagging issue with ChatGPT that hasn’t been easy for me to explain. I think I’ve got it now.

    We’re used to computers being great at remembering “state.” For example, if I say “let x=3”, barring a bug, x is damned well gonna stay 3 until I decide otherwise.

    GPT has trouble remembering state. Here’s an analogy:

    Let Fred be a dinosaur.
    Ok, Fred is a dinosaur.
    He’s wearing an AC/DC tshirt.
    OK, he’s wearing an AC/DC tshirt.
    And sunglasses.
    OK, he’s wearing an AC/DC tshirt and sunglasses.
    Describe Fred.
    Fred is a kitten wearing an AC/DC tshirt and sunglasses.

    When I work with GPT, I spend a lot of time reminding it that Fred was a dinosaur.
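    A rough way to picture why this happens: the model has no variable store like `x = 3`. Its only “state” is the transcript itself, replayed every turn and trimmed to a fixed context window, so early facts can simply fall out of view. A minimal sketch of that idea (the window size and last-N truncation are illustrative assumptions, not how any particular model actually manages context):

    ```python
    # Conversational "state" is just text replayed each turn.
    # When the transcript outgrows the context window, early facts
    # like "Fred is a dinosaur" silently fall off the front.

    CONTEXT_WINDOW = 5  # max turns the model "sees" (illustrative)

    transcript = []

    def say(line):
        transcript.append(line)
        # The model only attends to the most recent turns:
        return transcript[-CONTEXT_WINDOW:]

    say("Let Fred be a dinosaur.")
    say("He's wearing an AC/DC tshirt.")
    say("And sunglasses.")
    say("He lives in a cave.")
    say("He plays guitar.")
    visible = say("Describe Fred.")

    # The defining fact is no longer in the visible context:
    print("Let Fred be a dinosaur." in visible)  # False
    ```

    Real models also compress and paraphrase older context rather than cutting it cleanly, which is why the forgetting feels patchy instead of abrupt.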

    • @rob64@startrek.website
11 months ago

      Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLM and traditional computational logic.