this is AI but it felt a lot more like a guy with broken gear

  • deborah@awful.systems · 6 months ago

    This is a good piece, both on the gap between how gen AI is sold and what it does, and on the reality of what professional programming is.

    • dohpaz42@lemmy.world · 6 months ago

      … by the time you’ve spent four hours tearing your hair out, … the code … to fix your problem is one, single line.

      This, I feel, sums up the reality of professional programming in a nutshell. 🤣

      • Sailor Sega Saturn@awful.systems · 6 months ago

        OK sorry, this is rambly, but I gotta get these programmer feelings off my chest… If anything, 4 hours is an understatement.


        Back in university I once spent an entire week tracking down a latent bug in my program after the professor changed the project requirements a week before the due date. It was an accidental use of = instead of a copy in Java. We’re talking every waking moment, both in and out of class (I was not the best at debugging back then…).
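        In case it’s not obvious what that looks like: in Java, = on an object copies the reference, not the object, so two variables end up aliasing the same mutable state. A minimal sketch of that class of bug (hypothetical names, not my actual project code):

        ```java
        import java.util.ArrayList;
        import java.util.List;

        public class AliasBug {
            public static void main(String[] args) {
                List<Integer> scores = new ArrayList<>(List.of(1, 2, 3));

                // Bug: '=' copies the reference, not the list. Both
                // variables now alias the same object.
                List<Integer> snapshot = scores;

                // Intended: an independent copy.
                List<Integer> copy = new ArrayList<>(scores);

                scores.add(4);
                System.out.println(snapshot); // [1, 2, 3, 4] -- mutated along with scores
                System.out.println(copy);     // [1, 2, 3]    -- the intended snapshot
            }
        }
        ```

        One line of code, a week of my life.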

        Now in the working world there are bugs -- but they’re not just my bugs anymore. Rather, there are decades of bugs piled on top of bugs. Code has dozens of authors, most of whom quit long ago. The ones who remain often have no memory of the code.

        Just last week I did a code review of a co-worker’s bugfix for a bug introduced in 2008. The fix was non-trivial due to:

        1. The code being a tangled mass of overlapping state and (more importantly)
        2. No one actually remembering anything about the code: where it is called, why it is there in the first place, or what the implications of changing it are. Except that it’s now, in 2024, causing problems (an O(n^2) slowdown case harming production).
        3. The original design doc was in the personal folder of the original author (no longer at the company), which was garbage collected years ago.

        So reviewing the code involved comparing every iteration of it: from the initial commit, to where the bug was introduced, to the state it was in today before my coworker’s fix, and finally the fix itself. It turns out he got it wrong, and I can’t exactly blame him, because there is no “right” in this sort of environment. Fortunately the wrongness was caught by me and whatever meager unit tests had been written for it.

        This all took maybe half a day for me and a day for my coworker: 1.5 days of work between the two of us. All to fix a condition which had been accidentally negated from what it should have been.
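        For a feel of how tiny that kind of fix is relative to the archaeology around it, here’s a hypothetical sketch (invented names and logic, not the actual code from this story) of how a single negated condition can produce both wrong answers and an O(n^2) slowdown:

        ```java
        import java.util.Collections;
        import java.util.List;

        public class FlippedGuard {
            // Hypothetical sketch only -- one flipped guard is enough
            // to cause both wrong answers and a quadratic slowdown.
            static int find(List<Integer> data, boolean isSorted, int key) {
                // Correct guard. If shipped negated as 'if (!isSorted)',
                // sorted data falls through to the O(n) scan (O(n^2)
                // across n lookups in production), while unsorted data
                // hits binarySearch, whose result is undefined on
                // unsorted input.
                if (isSorted) {
                    return Collections.binarySearch(data, key); // O(log n)
                }
                return data.indexOf(key); // O(n) fallback
            }

            public static void main(String[] args) {
                System.out.println(find(List.of(1, 3, 5, 7), true, 5));  // 2
                System.out.println(find(List.of(5, 1, 7, 3), false, 7)); // 2
            }
        }
        ```

        The eventual diff is one character of intent; finding it is the 1.5 days.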


        And this is indeed what LLMs-for-code enthusiasts miss.

        Even if the LLM saves some time with writing boilerplate code, it’ll inevitably mess up in subtle ways, or programmers will think the LLM can do more than it actually can. Either way they’ll end up introducing subtle bugs; so you have a situation where someone saving 20 seconds here or there leads to hours of debugging effort, or worse, at an unpredictable point in the future.

        At least with human-written code you can go back and ask the author what they were thinking, or read the design doc, or read comments and discussion. Even the most amateurish human-authored code has the spark of life to it. It was, in essence, a manifestation of someone’s wish.

        On the other hand with code that’s just statistical noise there’s no way to tell what it was trying to do in the first place. There is no will / soul / ego in the code, so there is no understanding, so there is no way to debug it short of reverting the whole change and starting over.

  • V0ldek@awful.systems · 6 months ago

    While I mostly agree with the thrust of the thesis - 80% of the job is reading bad code and unfucking it, and ChatGPT sucks in all the ways - I disagree with the conclusions.

    First, gen AI shifting us towards analysing more bad code to unfuck is not a good thing. It’s quite specifically bad. We really don’t need more bad-code generators. What we need are good docs; slapping genAI on as a band-aid for badly documented libraries will do more harm than good. The absolute last thing I want is genAI feeding me more bullshit to deal with.

    Second, this all comes across as an industrialist view of education. I’m sure Big Tech would very much like people to just be good at fixing and maintaining its legacy software, or at shipping new bland products as quickly as possible, but that’s not why we should be giving people a CS education. You already need investigation skills to debug your own code. That 90% of industry work is not the creative building of new amazing software doesn’t at all mean education should lean that way. 90% of industry jobs don’t require novel applications of algebra or analytical geometry either, and people have been complaining that “school teaches you useless things like algebra or trigonometry” for ages.

    This infiltration of industry into academia is always a deleterious influence, and genAI is a great illustration of that. We now have Big Tech weirdos giving keynotes at CS conferences about how everyone should work in AI because it’s The Future™. Because education is perpetually underfunded, it depends heavily on industry money. But the tech industry is an infinite-growth machine; it doesn’t care about any philosophical considerations with regard to education; it doesn’t care about science in any way other than as a product to be packaged and shipped ASAP to grow revenue, no matter whether it’s actually good, useful, sustainable, or anything like that. They invested billions into growing a specialised sector of CS, with novel hardware and all (see TPUs), to be able to multiply matrices really fast, and the chief uses of that are Facebook’s ad recommendation system and now ChatGPT.

    This central conclusion just sucks from my perspective:

    It’s how human programmers, increasingly, add value.

    “Figure out why the code we already have isn’t doing the thing, or is doing the weird thing, and how to bring the code more into line with the things we want it to do.”

    While yes, this is why even a “run-of-the-mill” job as a programmer is not likely to be outsourced to an ML model, that’s definitely not what we should aspire for the added value to be. People add value because they are creative builders! You don’t need a higher education to patch up garbage codebases all week, the same way you don’t need any algebra or trigonometry to work a random paper-pushing job. What you do need it for is to become the person who writes the existing code in the first place. There’s a reason these are Computer Science programmes and not “Programming @ Big Tech” programmes.

    • David Gerard@awful.systems (OP) · 6 months ago

      It didn’t read to me like she was a fan of this shit at all; rather, she was despairing of it and looking for ways to teach actual competence despite it.

      • V0ldek@awful.systems · 6 months ago

        I’m probably projecting the baggage of dozens of conversations with people who unironically argue that a CS university should prepare you for working in industry as a programmer, but that’s because I can’t really discern the author’s perspective on this from the text.

        In either case,

        to teach actual competence despite it

        I think my point is that a “competent programmer”, as viewed by the industry, is a vastly different thing from a “competent computer scientist” in a philosophical sense. Computer science really struggles with this because many things require being both a good engineer and a good scientist. For an analogy: an electrical engineer and a physicist specialising in electrical circuits are two vastly different professions, and you don’t need to know what an electron is to do the former. Whereas in computer science, like, you can’t build a compiler without knowing your shit around both software engineering and theoretical concepts.

        Let me also add that I don’t think I’ve ever written a post where I would more like people to come and disagree with me. I might very well be talking some bullshit based on my vibes here, since all of this is basically vibes from mingling with both industry and academia people…

        • zogwarg@awful.systems · 6 months ago

          If you keep in mind the original angst of the students - “I have to learn how to use LLMs or I’ll get left behind” - they themselves have a vocational understanding of their degree. And it is sensible to address those concerns practically (though, as stated in another comment, I don’t believe in accepting the default use of generative tools).

          On a more philosophical note, I think STEM fields (and really any general, well-rounded education) would benefit from delving (!) deeper into library science/archival science/philosophy and their application to history, and coincidentally, that would make a lot of people better at troubleshooting and untangling legacy code.

          • deborah@awful.systems · 6 months ago

            would benefit from delving (!) deeper into library science/archival science/philosophy and their application to history

            Ooh, would you say more about this? I have opinions, but that’s because I’m a programmer now and formerly a librarian & archivist (on the digital side it’s more common to go back and forth between the two; it’s the same degree).