As a brand new user of ChatGPT, I have never been so incredibly impressed and so rage-inducingly frustrated at exactly the same time by any new tech I’ve ever tried.

I was using it to help create some simple JavaScript functions and debug some code. It could come up with working functions almost immediately, taking really interesting approaches I wouldn’t have thought of. “Boom,” I thought, “this is great! Let’s keep going!” Then, immediately afterwards, it would produce absolute shit that couldn’t and wouldn’t work at all. On multiple occasions it couldn’t remember the very code it had just given me, and when asked to make a few minor changes it constantly spat out brand new, very different functions, usually omitting half the functionality they’d had before. But when I typed the code directly into a message myself, it did much better every time.

It seems that with every question like that I had to start from scratch, or else it would work from clearly wrong (not even close, usually) newly generated code. For example, if I asked it to print exactly the same function it had printed a moment ago, it would excitedly proclaim, “Of course! Here’s the exact same function!” and then print a completely different one.

I spent so much time carefully wording my question to get it to help me debug something correctly that I ended up finding the bug myself, just because I was examining my code so carefully in order to ask a question that would get a relevant answer. So…I guess that’s a win? Lol. Then, just for fun, I told ChatGPT that I had found and corrected the bug, and it took responsibility for the fix.

And yet, when it does get it right, it’s really quite impressive.

  • garyyo@lemmy.world · 1 year ago
    You should read a bit more on how LLMs work, as it really helps to know what the limitations of the tech are. But yeah, it’s good when it’s good, but a lot of the time it is inconsistent. It is also sometimes just confidently wrong, something people have taken to calling “hallucinations”. Overall it is a great tool if you can easily check its output and are just using it to speed up your own code writing, but it’s pretty bad at actually generating fully complete code.

    • soft_frog@kbin.social · 1 year ago
      One thing I’ve found is you have to be careful of the context getting polluted with wrong output. If one wrong thing gets in there, the probability of it reusing that wrong info is much higher than its baseline rate of being wrong.

      In practice that means if it starts spitting out bad code, try a new conversation to refresh things. I find that faster than debugging, because it will often return to a buggy state later. (There’s a rough sketch of the same idea below, for anyone driving it through the API.)
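      If you’re scripting this against the API rather than the web UI, here’s a minimal sketch of that idea. It assumes the official "openai" npm package; the model name and helper names are just placeholders, not anything from the thread. The point is that the model only ever sees whatever you resend in `messages`, so pruning a bad exchange out of the history is effectively the same as starting a fresh conversation:

      ```javascript
      // Minimal sketch: keep the chat history yourself, and drop bad exchanges
      // so wrong code doesn't stay in the context and bias later answers.
      import OpenAI from "openai";

      const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

      const messages = [
        { role: "system", content: "You are a careful JavaScript assistant." },
      ];

      async function ask(question) {
        messages.push({ role: "user", content: question });
        const res = await client.chat.completions.create({
          model: "gpt-4o-mini", // placeholder model name
          messages,             // the model sees only what is in this array
        });
        const answer = res.choices[0].message.content;
        messages.push({ role: "assistant", content: answer });
        return answer;
      }

      // If an answer turns out to be wrong, remove that question/answer pair
      // instead of arguing with it — the API-level version of "new conversation".
      function discardLastExchange() {
        messages.splice(-2, 2);
      }
      ```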

    • Zamboniman@lemmy.caOP · 1 year ago
      Yes. Even when I know what the limits are, and why, the thing lulls me into responding as if it were a conscious agent. That’s the downside of the way it produces speech.