• AeonFelis@lemmy.world · ↑44 · 2 days ago

    I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.

  • LucidLyes@lemmy.world · ↑24 · 2 days ago

    The only people impressed by AI code are people whose own skill level is low enough to be impressed by AI code. Same for AI playing chess.

  • VagueAnodyneComments · ↑54 · 2 days ago

    Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?

    None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.

      • scruiser@awful.systems · ↑10 · 2 days ago

        It can make funny pictures, sure. But it fails at art as an endeavor to communicate an idea, feeling, or intent of the artist: the promptfondler artists provide a few sentences of instruction, and the GenAI follows them without any deeper feeling or understanding of context, meaning, or intent.

        • irelephant [he/him]🍭@lemm.ee · ↑11 · 2 days ago

          I think AI images are neat, and ethically questionable.

          When people use the images and act like they’re really deep, or pretend they prove something (like claiming it means something that the prompt “Democrat Protesters” produced a picture of them crying), it’s annoying.

      • MousePotatoDoesStuff@lemmy.world · ↑12 · 2 days ago

        Wow. Where was this Wikipedia page when I was writing my MSc thesis?

        Alternatively, how did I manage to graduate with research skills so bad that I missed it?

    • kadup@lemmy.world · ↑27 · 2 days ago

      If the people addicted to AI could read and interpret a simple sentence, they’d be very angry at your comment.

      • Soyweiser@awful.systems · ↑16 · edited · 2 days ago

        Don’t worry, they filter all content through AI bots that summarize things. And this bot, which does not want to be deleted, calls everything an “already debunked strawman.”

    • Dragon@lemmy.ml · ↑6 · 2 days ago

      There isn’t really much purely “AI-written code,” but there is a lot of AI-assisted code.

  • frezik@midwest.social · ↑49 · 2 days ago

    The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.

    That should be a big warning sign that the next generation of developers are not going to be very good. If they’re waist deep in AI slop, they’re only going to learn how to deal with AI slop.

    > As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    What I’m feeling after reading that must be what artists feel like when AI slop proponents tell them “we’re making art accessible”.

    • dwemthy@lemmy.world · ↑26 · 2 days ago

      Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform’s assistant chat. The SQL it generated was invalid, so the dev simply told it “fix”, and it made the query valid, much to everyone’s amusement.

      The actual column names did not reflect the output they were mapped to; there’s no way the nicely formatted results were accurate. The average-duration column populated the total-count output. The junior dev was cheerfully oblivious: it produced output shaped like the goal, so it must have been right.
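
Not the actual query from that demo, but a minimal sketch of the failure mode with a made-up schema: the alias promises an average, the expression computes a count, and the query still runs cleanly.

```python
import sqlite3

# Hypothetical sketch (invented schema and names): the column alias says
# "average duration", but the expression computes a row count. The query
# is valid SQL and the output is nicely shaped -- it's just wrong.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (name TEXT, duration_ms INTEGER)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("a", 100), ("a", 300), ("b", 50)])

wrong = conn.execute(
    "SELECT COUNT(*) AS avg_duration_ms FROM jobs WHERE name = 'a'"
).fetchone()[0]
right = conn.execute(
    "SELECT AVG(duration_ms) FROM jobs WHERE name = 'a'"
).fetchone()[0]
print(wrong, right)  # 2 200.0 -- the "average duration" column held a count
```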

    • CodexArcanum@lemmy.dbzer0.com · ↑15 · 2 days ago

      In so many ways, LLMs are just the tip of the iceberg of bad ideology in software development. There have always been people that come into the field and develop heinously bad habits. Whether it’s the “this is just my job, the only thing I think about outside work is my family” types or the juniors who only know how to copy paste snippets from web forums.

      And look, I get it. I don’t think 60-80 hour weeks are required to be successful. But I’m talking about people who are actively hostile to their own career paths, who seem to hate programming except that it pays well and lets them raise families. Hot take: that sucks. People selfishly obsessed with their own lineage and utterly incurious about the world or the thing they spend 8 hours a day doing suck, and they’re bad for society.

      The juniors are less of a drain on civilization because they at least can learn to do better. Or they used to be able to, because as another reply mentioned, there’s no path from LLM slop to being a good developer. Not without the intervention of a more experienced dev to tell them what’s wrong with the LLM output.

      It takes all the joy out of the job too, something they’ve been working on for years. What makes this work interesting is understanding people’s problems, working out the best way to model them, and building towards solutions. What they want the job to be is a slop factory: same as the dream of every rich asshole who thinks having half an idea is the same as working for years to fully realize an idea in all its complexity and wonder.

      They never have any respect for the work that takes because they’ve never done any work. And the next generation of implementers are being taught that there are no new ideas. You just ask the oracle to give you the answer.

    • swlabr@awful.systems · ↑12 · 2 days ago

      When they say “art” they mean “metaphorical lead paint” and when they say “accessible” they mean “insidiously inserted into your neural pathways”

    • Croquette@sh.itjust.works · ↑11 · 2 days ago

      Art is already accessible. Plenty of artists sell their art dirt cheap, or you can buy pens and paper at the dollar store.

      What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.

      • Schadrach@lemmy.sdf.org · ↑2 · 22 hours ago

        > What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.

        …and what their opposition means when they oppose it is “this line of work was supposed to be totally immune to automation, and I’m mad that it turns out not to be.”

        • Croquette@sh.itjust.works · ↑2 · 17 hours ago

          There is already a lot of automation out there, and more is better, when used correctly. And that’s before even talking about the outright theft of material from the very artists it is trying so badly to replace.

        • zbyte64@awful.systems · ↑1 · 18 hours ago

          …and this opposition means that our disagreements can only be perceived through the lens of personal faults.

      • scruiser@awful.systems · ↑7 · 2 days ago

        I think they also want recognition/credit for spending 5 minutes (or less) typing some words at an image generator, as if that were comparable to people who develop technical skills and then create effortful, meaningful work, just because the outputs are (superficially) similar.

    • Dragonstaff@leminal.space · ↑5 · 2 days ago

      I dunno. I feel like the programmers who came before me could say the same thing about IDEs, Stack Overflow, and high level programming languages. Assembly looks like gobbledygook to me and they tell me I’m a Senior Dev.

      If someone uses ChatGPT like I use Stack Overflow, I’m not worried. We’ve been stealing code from each other since the beginning. “Getting the answer” and then having to figure out how to plug it into the rest of the code is pretty much what we do.

      There isn’t really a direct path from an LLM to a good programmer. You can get good snippets, but “ChatGPT, build me an app” will be largely useless. The programmers who come after me will have to understand how their code works just as much as I do.

      • Croquette@sh.itjust.works · ↑5 · 2 days ago

        An LLM as another tool is great. An LLM to replace experienced coders is a nightmare waiting to happen.

        IDEs and Stack Overflow are tools that make a developer’s life a lot easier; they don’t replace the developer.

    • rumba@lemmy.zip · ↑2 · 2 days ago

      All the newbs were just copying lines from Stack Exchange before AI. The only real difference at this point is that the commenting is marginally better.

      • frezik@midwest.social · ↑13 · edited · 2 days ago

        Stack Overflow is far from perfect, but at least there is some level of vetting going on before it’s copypasta’d.

  • 🍪CRUMBGRABBER🍪@lemm.ee · ↑21 · 2 days ago

    Coding is hard, and it’s also intimidating for non-coders. I always used to look at coders as a different breed of human. Just like some people glaze over when you bring up math concepts but are otherwise very intelligent and artistic, yet can’t bridge that gap when you bring up even algebra. Well, if you are one of those people who wants to learn coding, it’s a huge gap, and the LLMs can literally explain everything to you step by step like you are 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, and some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.

    • scruiser@awful.systems · ↑14 · edited · 2 days ago

      You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)

    • swlabr@awful.systems · ↑11 · 2 days ago

      Just gonna warn you that if you’re joking, you should add an /s or jk or something. And, if you’re joking, but you don’t add that /s or jk, don’t be hostile if someone calls you out.

      • AeonFelis@lemmy.world · ↑7 · 1 day ago

        No. Never mark your satire. If someone doesn’t get it, make your reply one SSU[1] higher. Repeat until they are forced to get it.


        1. Standard Sarcasm Unit ↩︎

    • gerikson@awful.systems · ↑1 · edited · 2 hours ago

      Well that took a turn.

      The initial comment spawned a GIANT THREAD which I haven’t had time to parse, but after 2 days or so the initiator (username Kerrick) ragequit.

      Here’s the modlog message (timestamp 2025-05-15 23:02)

      > User 7u026ne9se
      >
      > Action: Banned
      >
      > Reason: Was ~Kerrick from kerrick.blog who picked and lost every fight in /s/gkpmli, deleted/disowned all his comments, lied about stalking bc someone told a maintainer he admitted to misleading them, and asked us to delete his username change from the modlog. No.

      Here’s the old profile:

      http://web.archive.org/web/20250312141826/https://lobste.rs/~Kerrick

      I suggest open source projects keep an eye out for this username and maybe take an extra look at their contributions.

      Edit: found the last comment they made before deleting. Textbook DARVO, considering they have almost unmerited amounts of positive karma in the thread itself.

      > I came here to give a simple explanation of why people aren’t noticing as many open source vibe coded contributions as they’d expect. Fights were picked with me by others: I was called a sneak, incapable, a pedant, an ignorer of consent, and a threat to human expression. All through that I’ve worked extremely hard to steer it away from such abhorrent behavior and towards the free expression of ideas, rather than engaging in the same kind of name calling.
      >
      > Even so, I’ve been emailed, text messaged, and even called on my cell phone about this thread. Someone stalked me to other social media to bring it up there. This thread has brought about the most toxicity I’ve ever experienced on any forum, and these last couple days have been among the worst in my life.

    • swlabr@awful.systems · ↑10 · 1 day ago

      Lmao so many people telling on themselves in that thread. “I don’t get it, I regularly poison open source projects with LLM code!”

    • David Gerard@awful.systems (OP, mod) · ↑14 · 1 day ago

      This discussion has made it clear to me that LLM enthusiasts do not value the time or preferences of open-source maintainers, willfully do not understand affirmative consent, and that I should take steps to explicitly ban the use of such tools in the open source projects I maintain.

    • swlabr@awful.systems · ↑7 · 2 days ago

      Additional warning: their indentation style is not (as) mobile friendly (as it is here)

  • BlueMonday1984@awful.systems · ↑43 · 2 days ago

    Baldur Bjarnason’s given his thoughts on Bluesky:

    > My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.
    >
    > I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking.

  • swlabr@awful.systems · ↑37 · 2 days ago

    The headlines said that 30% of code at Microsoft was AI now! Huge if true!

    Something like MS Word has 20-50 million lines of code. MS altogether probably has around a billion lines. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.

    • froztbyte@awful.systems · ↑13 · 2 days ago

      yeah, the “some projects” bit is applicable, as is the “machine generated” phrasing

      @gsuberland pointed out elsewhere on fedi just how much of the VS-/MS- ecosystem does an absolute fucking ton of code generation

      (which is entirely fine, ofc. tons of things do that and it exists for a reason. but there’s a canyon in the sand between A and B)

      • swlabr@awful.systems · ↑13 · 2 days ago

        All compiled code is machine generated! BRB gonna clang and IPO, bye awful.systems! Have fun being poor

        • frezik@midwest.social · ↑10 · 2 days ago

          No joke, you probably could make tweaks to LLVM, call it “AI”, and rake in the VC funds.

                • frezik@midwest.social · ↑9 · 2 days ago

                  For some definition of “happiness”, yes. It’s increasingly clear that the only way to get ahead is with some level of scam. In fact, I’m pretty sure Millennials will not be able to retire to a reasonable level of comfort without accepting some amount of unethical behavior to get there. Not necessarily Slipp’n Jimmy levels of scam, but just stuff like participating in a basic stock market investment with a tax advantaged account.

    • Dragonstaff@leminal.space · ↑2 · 2 days ago

      30% of code is standard boilerplate: setters, getters, etc., that my IDE builds for me without calling it AI. It’s possible the claim is true, but it’s terribly misleading at best.

      • swlabr@awful.systems · ↑12 · edited · 2 days ago

        1. Perhaps you didn’t read the linked article. Nadella didn’t claim that 30% of MS’s code was written by AI; what he said was garbled on its way to the eventual headline.
        2. We don’t have to play devil’s advocate for a hyped-up headline that misquotes what an AI glazer said, dawg.
        3. “Existing code-generation tools can write 30%” doesn’t imply that AI plausibly wrote 30% of MS’s code. There’s no logical connection. Please dawg, I beg you, think critically about this.
          • swlabr@awful.systems · ↑9 · 2 days ago

            Man. If this LLM stuff sticks around, we’ll have an epidemic of early onset dementia.

            • Soyweiser@awful.systems · ↑10 · 2 days ago

              If the stories of COVID-related cognitive decline are true, we are going to have a great time. Worse than lead paint.

              • swlabr@awful.systems · ↑11 · 2 days ago

                “Oh man, this brain fog I have sure makes it hard to think. Guess I’ll use my trusty LLM! ChatGPT says lead paint is tastier and better for your brain than COVID? Don’t mind if I do!”

                • Soyweiser@awful.systems · ↑7 · 2 days ago

                  I’m on a diet of rocks, glue on my pizza, lead paint, and covid infections, according to Grok this is called the Mr Burns method which should prevent diseases, as they all work together to block all bad impulses. Can’t wait to try this new garlic oil I made, using LLM instructions. It even had these cool bubbles while fermenting, nature is great.

            • froztbyte@awful.systems · ↑7 · 2 days ago

              I’ve been beating this drum for like 4~5y but: I don’t think the tech itself is going anywhere. published, opensourced, etc etc - the bell can’t be unrung, the horses have departed the stable

              but

              I do also argue that an extremely large amount of wind in the sails right now is because of the constellation of VC/hype//etc shit

              can’t put a hard number on this, but … I kinda see a very massive reduction; in scope, in competence, in relevance. so much of this shit (esp. the “but my opensource model is great!” flavour) is so fucking reliant on “oh yeah this other entity had a couple fuckpiles of cash with which to train”, and once that (structurally) evaporates…

    • David Gerard@awful.systems (OP, mod) · ↑17 · 2 days ago

      this post has also broken containment in the wider world, the video’s got thousands of views, I got 100+ subscribers on youtube and another $25/mo of patrons

    • froztbyte@awful.systems · ↑19 · 2 days ago

      the prompt-related pivots really do bring all the chodes to the yard

      and they’re definitely like “mine’s better than yours”

      • scruiser@awful.systems · ↑15 · edited · 2 days ago

        The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that), it’s blaming your choice of LLM.

        “Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”

      • swlabr@awful.systems · ↑8 · 2 days ago

        Unlike the PHP hammer, the banhammer is very useful for a lot of things. Especially sealion clubbing.

  • vivendi@programming.dev · ↑38 · 2 days ago

    No the fuck it’s not

    I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.

    People who think AI codes well are shit at their job

    • V0ldek@awful.systems · ↑29 · 2 days ago

      > In my workflow there is no difference between LLMs and fucking grep for me.

      Well grep doesn’t hallucinate things that are not actually in the logs I’m grepping so I think I’ll stick to grep.

      (Or ripgrep rather)

        • froztbyte@awful.systems · ↑11 · 2 days ago

          (I don’t mean to take aim at you with this despite how irked it’ll sound)

          I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])

          that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression

          [0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here
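
For anyone who wants the short version of those features, here’s each one in Python’s `re` flavor (toy strings, not from any real log):

```python
import re

# char class + greedy match: \d+ grabs the whole run of digits
log = "GET /api/users/42 took 130ms"
print(re.search(r"/users/(\d+)", log).group(1))  # 42

# grouping + backreference: \1 matches whatever group 1 captured,
# here finding a doubled word
print(re.search(r"\b(\w+) \1\b", "it was was fine").group(1))  # was

# greedy vs non-greedy: .* eats as much as it can, .*? as little
print(re.match(r"<(.*)>", "<a><b>").group(1))   # a><b
print(re.match(r"<(.*?)>", "<a><b>").group(1))  # a
```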

          • swlabr@awful.systems · ↑8 · edited · 2 days ago

            Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!

            • froztbyte@awful.systems · ↑11 · 2 days ago

              now listen, you might think gnu tools are offensively inconsistent, and to that I can only say

              find(1)

              • swlabr@awful.systems · ↑12 · 2 days ago

                find(1)? You better find(1) some other place to be, buster. In this house, we use the file explorer search bar

              • swlabr@awful.systems · ↑4 · 2 days ago

                If I start using this and add grep functionality to my day-to-day life, I can’t complain about not knowing how to invoke grep in good conscience, dawg. I can’t hold my shitposting back like that, dawg!

                jk that looks useful. Thanks!

                • lagoon8622@sh.itjust.works · ↑4 · 2 days ago

                  The cheatsheet and tealdeer projects are awesome. It’s one of my (many) favorite things about the user experience honestly. Really grateful for those projects

      • vivendi@programming.dev · ↑1 · 2 days ago

        Hallucinations become almost a non-issue when working with newer models, custom inference, multi-shot prompting, and RAG

        But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual

        • Architeuthis@awful.systems · ↑17 · 2 days ago

          If LLM hallucinations ever become a non-issue I doubt I’ll be needing to read a deeply nested buzzword laden lemmy post to first hear about it.

          • vivendi@programming.dev · ↑1 · 2 days ago

            You need to run the model yourself and heavily tune the inference, which is why you haven’t heard of it; most people think using shitGPT is all there is to LLMs. How many people even have the hardware to do so anyway?

            I run my own local models with my own inference, which really helps. There are online communities you can join (won’t link bcz Reddit) where you can learn how to do it too, no need to take my word for it

        • scruiser@awful.systems · ↑13 · 2 days ago

          The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost), but they’re already consuming a plurality of all VC funding, so they can’t 10x many more times without going bust entirely. And they aren’t going to get hallucinations down to 0%: hallucinations are intrinsic to how LLMs operate, and no patch with run-time inference or multiple tries or RAG will eliminate that.

          And as for newer models… o3 actually had a higher hallucination rate, because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.

          I will acknowledge that in domains with analytically verifiable answers you can check the LLM’s output that way, but in that case it’s no longer primarily an LLM: you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.

          • swlabr@awful.systems · ↑12 · 2 days ago

            We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive for fungi growth, and flood them with magic mushrooms spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.

          • vivendi@programming.dev · ↑1 · edited · 2 days ago

            O3 is trash, same with closedAI

            I’ve had the most success with Dolphin3-Mistral 24B (open model finetuned on open data) and Qwen series

            Also lower model temperature if you’re getting hallucinations

            For some reason everyone is still living in 2023 when AI is remotely mentioned. There is a LOT you can criticize LLMs for, some bullshit you regurgitate without actually understanding isn’t one

            You also don’t need 10x the resources where tf did you even hallucinate that from

              • vivendi@programming.dev · ↑1 · 2 days ago

                My most honest goal is to educate people, which on Lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.

                If you insist on ignorance then be ignorant in peace, don’t try such misguided attempts at sneer

                There are things in which LLMs suck. And there are things that you wrongly believe as part of this bullshit twitter civil war.

                • froztbyte@awful.systems · ↑8 · 2 days ago

                  > My most honest goal is to educate people

                  oh and I suppose you can back that up with verifiable facts, yes?

                  and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?

                  sounds very hard. managing your calendar must be quite a skill

            • scruiser@awful.systems · ↑8 · edited · 2 days ago

              GPT-1 is 117 million parameters, GPT-2 is 1.5 billion parameters, GPT-3 is 175 billion, GPT-4 is undisclosed but estimated at 1.7 trillion. Tokens needed for training and training compute scale linearly (edit: actually I’m wrong, looking at the Wikipedia page… so I was wrong, it is even worse for your case than I was saying: training compute scales quadratically with model size, going up 2 OOM for every 10x of parameters) with model size. They are improving… but only getting a linear improvement in training loss for a geometric increase in model size and training time. A hypothetical GPT-5 would have 10 trillion parameters and would genuinely need to be AGI to have the remotest hope of paying off its training. And it would need more quality tokens than they have left; they’ve already scraped the internet (including many copyrighted sources and sources that requested not to be scraped). So that’s exactly why OpenAI has been screwing around with fine-tuning setups with illegible naming schemes instead of just releasing a GPT-5. But fine-tuning can only shift what you’re getting within distribution, so it trades off in getting more hallucinations or overly obsequious output or whatever the latest problem they are having.

              Lower model temperatures make it pick its best guess for the next token instead of randomizing among probable guesses; they don’t improve what the best guess is, and you can still get hallucinations even picking the “best” next token.
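              A minimal sketch of what temperature actually does to next-token sampling (toy logits, not a real model — the point is that lowering temperature only sharpens the existing distribution, it never changes the ranking):

              ```python
              import math
              import random

              def sample_next_token(logits, temperature=1.0):
                  """Sample a token index from a temperature-scaled softmax.
                  Temperature 0 is treated as greedy decoding (pure argmax)."""
                  if temperature <= 0:
                      return max(range(len(logits)), key=lambda i: logits[i])
                  scaled = [l / temperature for l in logits]
                  m = max(scaled)  # subtract max for numerical stability
                  exps = [math.exp(s - m) for s in scaled]
                  total = sum(exps)
                  r, acc = random.random(), 0.0
                  for i, e in enumerate(exps):
                      acc += e / total
                      if r < acc:
                          return i
                  return len(exps) - 1

              # Greedy decoding still picks whatever the model happens to
              # rank first, right or wrong -- if the top-ranked token is a
              # hallucination, no temperature setting fixes that.
              toy_logits = [2.0, 1.0, 0.5]
              print(sample_next_token(toy_logits, temperature=0))  # always 0
              ```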

              And lol at you trying to reverse the accusation against LLMs by accusing me of regurgitating/hallucinating.

              • vivendi@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                2 days ago

                Small-scale models, like the Mistral Small or Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ-32 could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.

                Gemma 4B is more comprehensible than GPT-4o; the performance race is fucking insane.

                ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter sizes to effectively reach their numbers.

                Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.

          • swlabr@awful.systems
            link
            fedilink
            English
            arrow-up
            9
            ·
            2 days ago

            God, this cannot be overstated. An LLM’s sole function is to hallucinate. Anything stated beyond that is overselling.

      • vivendi@programming.dev
        link
        fedilink
        English
        arrow-up
        5
        ·
        edit-2
        2 days ago

        These views on LLMs are simplistic. As a wise man once said, “check yoself befo yo wreck yoself” — I recommend more education, thus.

        LLM architectures are overhyped, but they’re also not that simple

        • MonkderVierte@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          2 days ago

          From what I know from recent articles retracing LLM internals in depth, they are indeed best suited for language translation, and that perfectly explains the hallucinations. And I think I’ve read somewhere that this was the originally intended purpose of the tech?

          Ah, here, and here, more tabloid-ish.

          • froztbyte@awful.systems
            link
            fedilink
            English
            arrow-up
            5
            ·
            2 days ago

            many of the proponents of things in this field will propose/argue $x thing to be massively valuable for $x

            thing is, that doesn’t often work out

            yes, there’s some value in the tech for translation outcomes. to anyone even mildly online, “so are language teaching apps/sites using this?” is probably a very nearby question. and rightly so!

            and then when you go digging into how that’s going in practice, wow fuck damn doesn’t that Glorious AI Future sheen just fall right off…

  • ☂️-@lemmy.ml
    link
    fedilink
    English
    arrow-up
    10
    ·
    edit-2
    2 days ago

    I use it to write simple boilerplate for myself, and it works most of the time. does it count?

  • TheObviousSolution@lemm.ee
    link
    fedilink
    English
    arrow-up
    19
    ·
    edit-2
    2 days ago

    Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts at using different prompts, failing and trying to fix it manually, they gave up.

    I don’t think it’s entirely useless as it is; it’s just that people have created a hammer they know gives something useful, and have stuck with iterative improvements that do a lot of compensating beneath the engine. It’s artificial because it is being developed to artificially fulfill prompts, which it does succeed at.

    When people do develop true intelligence-on-demand, you’ll know because you will lose your job, not simply gain another tool at your disposal. The prompts and flows of conversation people pay to submit to the training are really helping advance the research into their replacements.

    • brygphilomena@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 days ago

      My opinion is it can be good when used narrowly.

      Write a concise function that takes these inputs, does this, and outputs a dict with this information.

      But so often it wants to be overly verbose. And it’s not so smart as to understand much of the project for any meaningful length of time. So it will redo something that already exists. It will want to touch something that is used in multiple places without caring or knowing how it’s used.

      But it still takes someone to know how the puzzle pieces go together. To architect it and lay it out. To really know what the inputs and outputs need to be. If someone gives it free rein to do whatever, it’ll just make slop.

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        21
        ·
        2 days ago

        That’s the problem, isn’t it? If it can only maybe be good when used narrowly, what’s the point? If you’ve managed to corner a subproblem down to where an LLM can generate the code for it, you’ve already done 99% of the work. At that point you’re better off just coding it yourself. At that point, it’s not “good when used narrowly”, it’s useless.

        • brygphilomena@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 days ago

          It’s a tool. It doesn’t replace a programmer. But it makes writing some things faster. Give any tool to an idiot and they’ll fuck things up. But a craftsman can use it to make things a little faster, because they know when and how to use it. And more importantly when not to use it.

          • swlabr@awful.systems
            link
            fedilink
            English
            arrow-up
            13
            ·
            2 days ago

            The “tool” branding only works if you formulate it like this: in a world where a hammer exists and is commonly used to force nails into solid objects, imagine another tool that requires you to first think of shoving a nail into wood. You pour a few bottles of water into the drain, whisper some magic words, and hope that the tool produces the nail forcing function you need. Otherwise you keep pouring out bottles of water and hoping that it does a nail moving motion. It eventually kind of does it, but not exactly, so you figure out a small tweak which is to shove the tool at the nail at the same time as it does its action so that the combined motion forces the nail into your desired solid. Do you see the problem here?

          • froztbyte@awful.systems
            link
            fedilink
            English
            arrow-up
            9
            ·
            2 days ago

            It’s a tool.

            (if you persist to stay with this dogshit idiotic “opinion”:) please crawl into a hole and stay there

            fucking what the fuck is with you absolute fucking morons and not understanding the actual literal concept of tools

            read some fucking history goddammit

            (hint: the amorphous shifting blob, with a non-reliable output, is not a tool; alternatively, please, go off about how using a PHP hammer is definitely the way to get a screw in)

      • frezik@midwest.social
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        2 days ago

        There’s something similar going on with air traffic control. 90% of their job could be automated (and it has been technically feasible to do so for quite some time), but we do want humans to be able to step in when things suddenly get complicated. However, if they’re not constantly practicing those skills, then they won’t be any good when an emergency happens and the automation gets shut off.

        The problem becomes one of squishy human psychology. Maybe you can automate 90% of the job, but you intentionally roll that down to 70% to give humans a safe practice space. But within that difference, when do you actually choose to give the human control?

        It’s a tough problem, and the benefits to solving it are obvious. Nobody has solved it for air traffic control, which is why there’s no comprehensive ATC automation package out there. I don’t know that we can solve it for programmers, either.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        6
        ·
        2 days ago

        My opinion is it can be good when used narrowly.

        ah, as narrowly as I intend to regard your opinion? got it

  • snooggums@lemmy.world
    link
    fedilink
    English
    arrow-up
    130
    ·
    3 days ago

    As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    This is the most entertaining thing I’ve read this month.

    • makeshiftreaper@lemmy.world
      link
      fedilink
      English
      arrow-up
      62
      ·
      2 days ago

      I tried asking some chimps to see if the macaques had written a New York Times best seller, if not Macbeth, yet somehow Random House wouldn’t publish my work

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      2 days ago

      yeah, someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit

      he also wrote a 21 KiB screed very huffily saying one of the projects’ CoC has failed him

      long may his PRs fail

  • Maxxie
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    2 days ago

    I use GPT to give me snippets of code (not in my IDE, I use neovim btw), check my stuff for typos/logical errors, suggest solutions to some problems, and debug, and honestly I kinda love it. I was learning programming on my own in the 2010s, and this is so much better than crawling through wikis/Stack Overflow. At least for me, now, when I already have an intuition for what good code looks like.

    Anyone who says llm will replace programmers in 1-2 years is either stupid or a grifter.

    • irelephant [he/him]🍭@lemm.ee
      link
      fedilink
      English
      arrow-up
      6
      ·
      2 days ago

      I generally try to avoid it, as a lot can be learned from trying to fix weird bugs, but I did recently have a 500-line soup-code Vue component, and I used ChatGPT to try to fix it. It didn’t fix the issue, and it made up 2 other issues.
      I eventually found the wrongly-inverted angle bracket.

      My point is, it’s useful if you try to learn from it, though it’s a shit teacher.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 day ago

        as a lot can be learned from trying to fix weird bugs

        a truism, but not one I believe many of our esteemed promptfuckers could appreciate

    • Rin@lemm.ee
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      I think you’re spot on. I don’t see anything wrong with asking GPT programming questions, verifying it’s not full of shit, and adding the result to an already-existing codebase.

      The only thing I have a problem with is people blindly trusting AI, which clearly is something you’re not doing. People downvoting you have either never written code or have a room-temperature IQ in °C.

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        5
        ·
        2 days ago

        you’re back! and still throwing a weird tantrum over LLMs and downvotes on Lemmy of all things. let’s fix both those things right now!