• SpaceNoodle@lemmy.world · 82 points · 7 months ago

    … aren’t representative of most people’s experiences.

    Every AI “answer” I’ve gotten from Google is factually incorrect, often ludicrously so.

    • shiiiiiiiiiiiiiiiiit@sh.itjust.works · 27 points · 7 months ago

      Yep, same here. Whereas ChatGPT and Perplexity would tell me they didn’t know the answer to my question, Bard/Gemini would confidently hallucinate some bullshit.

      • catloaf@lemm.ee · 18 points · 7 months ago

        Really? Like what? I’ve always had ChatGPT give confident answers. I haven’t tried to stump it with anything really technical though.

          • DominusOfMegadeus@sh.itjust.works · 9 points · 7 months ago

            I’ve asked moderately technical questions and was confidently given wrong information. That said, it’s right far more often than Copilot. I haven’t used Google for quite some time.

            • floofloof@lemmy.ca · 4 points · 7 months ago

              Huh, I’ve found GitHub Copilot better. You still can’t trust it when it talks about APIs, though. Or anything else, really - you have to keep your wits about you. I use it for suggestions on where to start with things, or for testing my assumptions, or for generating boilerplate code, but not for copying and pasting anything critical.

        • best_username_ever@sh.itjust.works · 12 points · 7 months ago

          I try ChatGPT and others once every month to see if they improve my programming experience. Yesterday I got fake functions that do not exist, again. I’ll try again next month.

          • TimeSquirrel@kbin.social · 6 points · 7 months ago

            Try the GitHub Copilot plugin if your IDE supports it. It can do things regular ChatGPT can’t, like seeing your entire codebase and coming up with suggestions that actually make sense and use your own libraries.

            Do not, however, use it to create complete programs from scratch. It doesn’t work out that way. It’s just an autocorrect on steroids.

            Using just the straight web-based version of ChatGPT sucks because it has no background context for what you’re trying to do.

            • best_username_ever@sh.itjust.works · 13 points · 7 months ago

              Here is the problem that won’t change for me or my coworkers: we will never use GitHub, and our source code is very private (medical devices or worse).

              Also, I asked a question that didn’t need any context or codebase. It was about a public API from an open-source project. It hallucinated a lot and failed.

              Last but not least, I never needed an autocomplete on steroids. I would enjoy some kind of agent that can give precise answers on specific topics, but I understand that LLMs may never provide this.

              I just cringe a lot when programmers tell me to use a tool that obviously can’t and will never be able to give me those answers.

              • penguin_ex_machina@lemmy.world · 6 points · 7 months ago

                I’ve actually had pretty good success with ChatGPT when I go in expecting it to hallucinate a significant chunk of what it spits back at me. I like to think of it as a way to help process my own ideas. If I ask questions with at least a base understanding of the topic, I can then take whatever garbage it gives me and go off and find real solutions. The key is not to trust it wholesale to give you the right answer, but to let it give you some nuggets that set you on the right path.

                I think I’ve basically turned ChatGPT into my rubber duck.

                • JustAPenguin@lemmy.world · 3 points · 7 months ago

                  RDLM: Rubber-Ducky Language Model^™

                  Prompt: you are a duck. I scream at you with slurs like, “Why the fuck is this piece of shit code not working”, and “Why the fuck is my breakpoint still not triggering?!”. You are to sit there calmly, and simply recall that your existence is to be nothing more than a tool for me to direct my frustrations and stress. You know this is not personal. You know that this is an important job. You know that you only have to respond with one word: “Quack”.

                • Strawberry · 3 points · 7 months ago

                  That seems like the only good use for ChatGPT in programming, though it is an expensive duck.

          • Karyoplasma@discuss.tchncs.de · 1 point · 7 months ago

            That happens all the time. ChatGPT did offer a decent solution for my GUI recently, though, and suggested a layout manager I hadn’t used before and didn’t even know about.

          • Ohi@lemmy.world · 1 point · 7 months ago

            You’re doing it wrong, IMO. ChatGPT 4.0 is freakin’ amazing at helping with coding tasks; you just need to learn what to ignore and how to adjust the prompt when you’re not getting the results you want. Akin to the skill of googling for programming solutions (or any solution), it gets easier with practice.

            • JustAPenguin@lemmy.world · 2 points · 7 months ago

              I hate to say it, but I have to agree. GPT4 is a significant improvement over GPT3. I needed to use a Python library for something that was meant to be a small, simple CLI app. It turned into something bigger and accumulated technical debt. Eventually, I was having problems that were niche and hard to trace, even with logging and all the other approaches.

              I eventually said fuck it, and threw a shit tonne of my code into it, explaining what I was doing, how I was doing it, why I wasn’t doing it another way, and what I expected vs the actual result. Sometimes it suggests something that is on the right path or entirely spot on. Other times it thinks it knows better than you; you tell yourself it doesn’t, because you tried all its suggestions, and then you realise something that would technically allow GPT to say, “I told you so”, but out of spite you just close the tab until the next issue.

              For practical tasks, GPT has come pretty far. For technical ones, it is hit or miss, but it can give you some sound advice in place of a solution, sometimes.

              I had another issue involving Matplotlib, converting to and from coordinate systems, and plots that had artifacts due to something not quite right. The atan2 function catches many people out, but I’m experienced enough to know better… Well, normally. In this particular case, it was a complex situation and I could not reason out why the result was distorted. Spending hours with GPT4 led me in circles. Sometimes it would tell me to do things I had just said I did, or that I said don’t work. Then I said to it, “what if we represent this system of parametric equations as a single complex-valued function, instead of dealing with Cartesian to polar conversions?” It whipped up a whole lot of math related to my problem. The damn thing handed me a solution and a half. In theory, it was a great solution. In practice, my code is illiterate, so it doesn’t care.
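
              For anyone curious, here’s a minimal sketch of the atan2 wrap-around I mean (a made-up spiral stands in for my actual system; none of this is my real code):

              ```python
              import numpy as np
              import matplotlib.pyplot as plt

              t = np.linspace(0, 4 * np.pi, 1000)

              # Stand-in parametric curve x(t), y(t): a spiral, not my actual system.
              x = t * np.cos(t)
              y = t * np.sin(t)

              # Naive conversion: atan2 wraps into (-pi, pi], so theta jumps by 2*pi
              # whenever the curve crosses the negative x-axis - those jumps are the
              # kind of artifact that shows up in plots.
              theta_naive = np.arctan2(y, x)

              # Representing the curve as a single complex-valued function z(t) makes
              # the fix compact: take the angle, then unwrap the 2*pi jumps.
              z = x + 1j * y
              theta = np.unwrap(np.angle(z))

              fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
              ax1.plot(t, theta_naive)  # sawtooth artifacts from wrapping
              ax1.set_title("naive atan2")
              ax2.plot(t, theta)        # smooth, monotonic angle
              ax2.set_title("complex-valued + unwrap")
              plt.show()
              ```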

              All in all, while it failed to help me solve my issue, it was able to reason and provide feedback on a wide range of challenges. Sometimes it needed prompting to change the trajectory it intended to follow, and that is the part you need to learn as a skill, at least until these LLMs are more capable of thinking for themselves. Give it time.

        • shiiiiiiiiiiiiiiiiit@sh.itjust.works · 1 point · 7 months ago

          I asked about a plot point that I didn’t understand in a TV series old enough to be in an LLM’s training data. ChatGPT and Perplexity both said they couldn’t find any discussions or explanations online for my particular question.

          Bard/Gemini gave several explanations, all of them featuring characters, locations, and situations from the show, but all confident bullshit and definitely impossible in the story’s world.

    • CosmoNova@lemmy.world · 7 points · 7 months ago

      First I was surprised they rolled it out already, then by how bad it was. I knew of Google’s AI blunders from their faked reveals, but I didn’t think they’d actually roll them out in this state. They really just want to turn the internet into the next TV, where you don’t really get to choose what you see or when, and they’re willing to crash and burn themselves by doing so if they must. Insanity.