By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • Emily (she/her) · 40 points · 3 months ago

    After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

    Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are far longer than 100 lines, and your job is to structure the program and introduce patterns that keep it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code and jump straight to architecture and patterns).

    • jacksilver@lemmy.world · 7 points · edited · 3 months ago

      I think this is the best response in this thread.

      Software engineering is a lot more than just writing some lines of code and requires more thought and planning than can be realistically put into a prompt.

      • Em Adespoton@lemmy.ca · 4 points · 3 months ago

        The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.

    • netvor@lemmy.world · 1 point · 3 months ago

      Also, in my experience an LLM will often propose solutions that work but are way too complex.

      Story time: just yesterday, in VueJS I was trying to iterate over a list of items and render the .text of each item as HTML, but I needed to process it first. Note that in VueJS this is done by adding e.g. <span v-html="item.text"></span>, where the content of the attribute is the JavaScript expression that produces the text.
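
      For context, the starting point looked roughly like this (a simplified sketch; items and the field names are stand-ins for my actual data, not the real code):

        <!-- each item's .text holds markup that still needs processing before display -->
        <ul>
          <li v-for="item in items" :key="item.id">
            <!-- v-html renders the result of the expression as raw HTML -->
            <span v-html="item.text"></span>
          </li>
        </ul>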

      First I asked ChatGPT to write the function for processing the text. That worked pretty well and even used part of the JavaScript API which I was not aware of.

      Next, I had a “dumb moment”: I did not realize that, since I’m already iterating through the items, I can just write <span v-html="processHtml(item.text)"></span>, and that’s all I really needed. Somehow I thought (or should I say “hallucinated”, ba dum tsss) for a moment that v-html was special or something (it is used differently than the most common kind of binding syntax). So I went ahead and asked ChatGPT how to render the processed texts while iterating.

      It came up with a rather contrived solution, which involved creating another computed property containing a list of the processed texts. I started to integrate it into the existing loop: I would have to add an index and use that index to pull the text out of the computed property, which already felt a little bit weird.
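
      Reconstructing from memory, the suggestion amounted to something like this (names are illustrative; processHtml is the helper from earlier):

        // an extra computed property just to precompute the processed texts
        computed: {
          processedTexts() {
            return this.items.map(item => processHtml(item.text));
          }
        }

        <!-- and the loop now needs an index just to look up the matching entry -->
        <li v-for="(item, index) in items" :key="item.id">
          <span v-html="processedTexts[index]"></span>
        </li>

      Compare that to the single line above: same result, two extra moving parts.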

      That’s when it struck me: no, no, no, I can just f*ing use the function.

      TL;DR: The point is, while ChatGPT was helpful, I still needed to babysit it. And if I hadn’t snapped out of my lazy moment, or if I simply hadn’t known better, I would have ended up with code that is more complex and more surprising, which means harder to reason about for both humans and LLMs. (For humans because it forces you to speculate about the coder’s intent, and for LLMs because it’s less likely to resemble the surrounding code in their training data.)