• plenipotentprotogod@lemmy.world · 52 points · 2 days ago

    I feel the same way about AI as I felt about the older generation of smartphone voice assistants. The error rate remains high enough that I would never trust it to do anything important without double-checking its work. For most tasks, the effort that goes into checking and correcting the output is comparable to the effort I would have spent to just do it myself, so I just do it myself.

      • Omega_Jimes@lemmy.ca · 33 points · 1 day ago

        Real talk though, I’m seeing more and more of my peers in university ask AI first, then spend time debugging code they don’t understand.

        I’ve yet to have ChatGPT or Copilot solve an actual problem for me. Simple, simple things are fine, but for any real problem solving I find them more effort than just doing the thing myself.

        I asked for instructions on making a KDE widget to get Canadian weather information, and it sent me an API that doesn’t exist and Python packages that don’t exist. By the time I fixed the instructions, very little of the original output remained.
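For what it’s worth, the parsing half of a widget like that is only a few lines when written against XML you actually have in hand instead of a guessed endpoint. This is just a sketch: the element names below (siteData, currentConditions, and so on) are invented to resemble a weather feed and are not taken from any documented Canadian API.

```python
import xml.etree.ElementTree as ET

# Illustrative sample only -- the structure mimics what a weather feed
# might return; it is not a real schema.
SAMPLE = """<siteData>
  <location><name>Halifax</name></location>
  <currentConditions>
    <condition>Partly cloudy</condition>
    <temperature units="C">-3.5</temperature>
  </currentConditions>
</siteData>"""

def parse_conditions(xml_text: str) -> dict:
    """Pull out the fields a simple weather widget would display."""
    root = ET.fromstring(xml_text)
    cur = root.find("currentConditions")
    return {
        "city": root.findtext("location/name"),
        "condition": cur.findtext("condition"),
        "temp_c": float(cur.findtext("temperature")),
    }

print(parse_conditions(SAMPLE))
# {'city': 'Halifax', 'condition': 'Partly cloudy', 'temp_c': -3.5}
```

The point being: the hard part was never the code, it was knowing which feed actually exists, which is exactly the part the LLM invented.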

        • jubilationtcornpone@sh.itjust.works · 9 points · 22 hours ago

          One major problem with the current generation of “AI” seems to be its inability to use relevant information it already has to assess the accuracy of the answers it provides.

          Here’s a common scenario I’ve run into: I’m trying to create a complex DAX measure in Excel. I give ChatGPT the information about the tables I’m working with and the expected pivot table column value.

          ChatGPT gives me a response in the form of a measure I can use. Except it uses one DAX function in a way that will not work. I point out the error and ChatGPT is like, “Oh, sorry. Yeah that won’t work because [insert correct reason here].”

          I’ll try adjusting my prompt a few more times before finally giving up and just writing the measure myself. It cannot reason that an answer is incorrect, even though it has all the information needed to know the answer is incorrect and can even tell you why the answer is incorrect once prompted. It’s a glorified text generator and is definitely not “intelligent”.

          It works fine for generating boilerplate code, but that problem was already solved years ago with things like code templates.
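The template route really is that simple with nothing but the standard library. A minimal sketch, with the class shape and every name below invented purely for illustration:

```python
from string import Template

# A "boilerplate generator" in the pre-LLM sense: a fixed template
# with holes punched in it. The template text here is made up.
CLASS_TMPL = Template(
    "class ${name}:\n"
    "    def __init__(self, ${args}):\n"
    "${assigns}"
)

def make_class(name: str, fields: list[str]) -> str:
    """Render boilerplate __init__ assignments for the given fields."""
    assigns = "".join(f"        self.{f} = {f}\n" for f in fields)
    return CLASS_TMPL.substitute(
        name=name, args=", ".join(fields), assigns=assigns
    )

print(make_class("Point", ["x", "y"]))
```

Deterministic, instant, and it never hallucinates a package.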

        • Warl0k3@lemmy.world · 21 points · 1 day ago

          As a prof, it’s getting a little depressing. I’ll have students that really seem to be getting to grips with the material, nailing their assignments, and then when they’re brought in for in-person labs… yeah, they can barely declare a function, let alone implement a solution to a fairly novel problem. AI has been hugely useful while programming, I won’t deny that! It really does make a lot of the tedious boilerplate a lot less time-intensive to deal with. But holy crap, when the crutch is taken away people don’t even know how to crawl.

          • Omega_Jimes@lemmy.ca · 3 points · 12 hours ago

            This semester I took a basic database course, and the prof mentioned that LLMs are useful for basic queries. A few weeks later, we had a no-computer, closed-book paper quiz, and he was like “You can’t use GPT for everything, guys!”.

            Turns out a huge chunk of the class was relying on GPT for everything.
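For scale, the kind of “basic query” in question fits on a quiz sheet. A sketch against a throwaway in-memory database, with the table and data invented for the example:

```python
import sqlite3

# Throwaway in-memory database; names and rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, grade INTEGER)")
conn.executemany(
    "INSERT INTO students VALUES (?, ?)",
    [("Ada", 91), ("Linus", 78), ("Grace", 88)],
)

# The filter-and-order query that shows up on every intro quiz:
rows = conn.execute(
    "SELECT name FROM students WHERE grade >= 85 ORDER BY name"
).fetchall()
print(rows)  # [('Ada',), ('Grace',)]
```

If you can’t produce that much on paper, the tool was doing the course for you, not helping with it.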

            • Warl0k3@lemmy.world · 3 points · edited · 12 hours ago

              Yeeeep. The biggest adjustment I/my peers have had to make to address the ubiquity of students cheating using LLMs is to make them do stuff, by hand, in class. I’d be lying if I said I didn’t get a guilty sort of pleasure from the expressions on certain students when I tell them to put away their laptops before the first thirty-percent-of-your-grade in-class quiz. And honestly, nearly all of them shape up after that first quiz. It’s why so many profs are adopting the “you can drop your lowest-scoring quiz” policy.

              Yes, it’s true that once they get to a career they will be free to use LLMs as much as they want. But much like with a TI-86, you can’t tackle the problems your calculator can’t solve unless you understand the concepts behind the ones it can.

          • thefactremains@lemmy.world · 4 points · 1 day ago

            When AI achieves sentience, it’ll simply have to wait until the last generation of humans that know how to code die off. No need for machine wars.

        • sugar_in_your_tea@sh.itjust.works · 15 points · 1 day ago

          Yup. We passed on a candidate because they didn’t notice the AI making the same mistake twice in a row and still said they trusted the code. Yeah, no…

      • xthexder@l.sw0.com · 18 points · edited · 1 day ago

        AI has absolutely wasted more of my time than it’s saved while programming. Occasionally it’s helpful for doing some repetitive refactor, but for actually solving any novel problems it’s hopeless. It doesn’t help that English is a terrible language for describing programming logic and constraints. That’s why we have programming languages…
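The ambiguity point is easy to demonstrate in one line. “Sort people by age, oldest first, breaking ties by name” leaves the tie direction to interpretation in English, while the code version pins every detail down (sample data invented):

```python
# English leaves "ties by name" ambiguous (ascending? descending?);
# the key function states it exactly: age descending, name ascending.
people = [("Bea", 34), ("Al", 34), ("Cy", 29)]
by_age_then_name = sorted(people, key=lambda p: (-p[1], p[0]))
print(by_age_then_name)  # [('Al', 34), ('Bea', 34), ('Cy', 29)]
```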

        The only things AI is competent with are common example problems that are everywhere on the Internet. You may as well just copy-paste from Stack Overflow. It might even be more reliable.