• 1024_Kibibytes@lemm.ee
      link
      fedilink
      arrow-up
      119
      ·
      2 months ago

      That is the real dead Internet theory: everything from production to malicious actors to end users is just AI scripts wasting electricity and hardware resources for the benefit of no human.

        • redd@discuss.tchncs.de
          link
          fedilink
          arrow-up
          25
          ·
          2 months ago

          Not only the internet. Soon everybody will use AI for everything. Lawyers will use AI in court, on both sides. AI will fight against AI.

          • devfuuu@lemmy.world
            link
            fedilink
            arrow-up
            26
            ·
            edit-2
            2 months ago

            I was at a coffee shop the other day and two lawyers were discussing how they were doing stuff with AI that they didn’t know anything about and then just sending it to their clients.

            That shit scared the hell out of me.

            And everything will just keep getting worse as more and more common folk eat up the hype and brainwashing, using these highly incorrect tools at all levels of our society every day to make decisions about things they have no idea about.

            • NABDad@lemmy.world
              link
              fedilink
              English
              arrow-up
              16
              ·
              2 months ago

              I’m aware of an effort to get LLM AI to summarize medical reports for doctors.

              Very disturbing.

              The people driving it where I work tend to be the people who know the least about how computers work.

          • Telorand@reddthat.com
            link
            fedilink
            arrow-up
            9
            ·
            2 months ago

            It was a time of desolation, chaos, and uncertainty. Brother pitted against brother. Babies having babies.

            Then one day, from the right side of the screen, came a man. A man with a plastic rectangle.

      • atomicbocks@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        17
        ·
        2 months ago

        The Internet will continue to function just fine, just as it has for 50 years. It’s the World Wide Web that is on fire. Pretty much has been since a bunch of people who don’t understand what Web 2.0 means decided they were going to start doing “Web 3.0” stuff.

        • UnderpantsWeevil@lemmy.world
          link
          fedilink
          English
          arrow-up
          19
          ·
          2 months ago

          The Internet will continue to function just fine, just as it has for 50 years.

          Sounds of intercontinental data cables being sliced

      • josefo@leminal.space
        link
        fedilink
        arrow-up
        3
        ·
        2 months ago

        That would only happen if we give our AI assistants the power to buy things on our behalf and manage our budgets. They will decide among themselves who needs what, and the money will flow into billionaires’ pockets without any human intervention. If humans go far enough, not even rich people would be rich, as trust funds and stock portfolios would operate under AI. If the AI achieves singularity with that level of control, we are all basically in spectator mode.

  • merthyr1831@lemmy.ml
    link
    fedilink
    English
    arrow-up
    166
    ·
    2 months ago

    AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.

      • toynbee@lemmy.world
        link
        fedilink
        arrow-up
        11
        ·
        2 months ago

        I was going to post a note about typewriters, allegedly from Tom Hanks, which I saw years and years ago; but I can’t find it.

        Turns out there’s a lot of Tom Hanks typewriter content out there.

        • 3DMVR@lemm.ee
          link
          fedilink
          English
          arrow-up
          6
          ·
          2 months ago

          He donated his to my HS randomly. It was supposed to go to the valedictorian, but the school kept it, lmao. It was so funny because they showed everyone a video where he says not to keep the typewriter and that it’s for a student.

      • ☂️-@lemmy.ml
        link
        fedilink
        arrow-up
        3
        ·
        2 months ago

        I have a mobile touchscreen typewriter, but it isn’t very effective at writing code.

      • xthexder@l.sw0.com
        link
        fedilink
        arrow-up
        42
        ·
        2 months ago

        But then they’d have a dev team who wrote the code and therefore knows how it works.

        In this case, the hackers might understand the code better than the “author” because they’ve been working in it longer.

      • merthyr1831@lemmy.ml
        link
        fedilink
        English
        arrow-up
        5
        ·
        2 months ago

        True, any software can be vulnerable to attack.

        but the difference is that a technical team of software developers can mitigate an attack and patch it. This guy has no tech support other than the AI that sold him the faulty code, which likely assumed he did the proper hardening of his environment (he did not).

        Openly admitting you programmed anything with AI alone is admitting you haven’t taken the basic steps to protect yourself or your customers.

  • ✨🫐🌷🌱🌌🌠🌌🌿🪻🥭✨@sh.itjust.worksBanned from community
    link
    fedilink
    English
    arrow-up
    136
    ·
    2 months ago

    Hilarious and true.

    Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. I mentioned the errors and they brushed it off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.

    What’s worse is when a noncoder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn things themselves.

    • Thoven@lemdro.id
      link
      fedilink
      English
      arrow-up
      59
      ·
      2 months ago

      100% this. I’ve gotten to where when people try and rope me into their new million dollar app idea I tell them that there are fantastic resources online to teach yourself to do everything they need. I offer to help them find those resources and even help when they get stuck. I’ve probably done this dozens of times by now. No bites yet. All those millions wasted…

    • MyNameIsIgglePiggle@sh.itjust.works
      link
      fedilink
      arrow-up
      27
      ·
      2 months ago

      I’ve been a professional full stack dev for 15 years and dabbled for years before that - I can absolutely code and know what I’m doing (and have used cursor and just deleted most of what it made for me when I let it run)

      But my frontends have never looked better.

  • M0oP0o@mander.xyz
    link
    fedilink
    arrow-up
    109
    ·
    2 months ago

    Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors; they are so much lighter and easier to install.

    Wait! Why am I always getting robbed lately? It can’t be my fake locks and doors! It has to be weirdos online following what I do.

  • rtxn@lemmy.world
    link
    fedilink
    arrow-up
    102
    ·
    2 months ago

    “If you don’t have organic intelligence at home, store-bought is fine.” - leo (probably)

  • Electric@lemmy.world
    link
    fedilink
    arrow-up
    57
    ·
    2 months ago

    Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?

    • grue@lemmy.world
      link
      fedilink
      arrow-up
      151
      ·
      2 months ago

      Nobody knows. Literally nobody, including him, because he doesn’t understand the code!

    • Mayor Poopington@lemmy.world
      link
      fedilink
      English
      arrow-up
      26
      ·
      2 months ago

      AI writes shitty code that’s full of security holes, and Leo here has probably taken zero steps to further secure it. He broadcasts his AI-written software and it’s open season for hackers.

      • T156@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        2 months ago

        Not just that; he literally advertised himself as not being technical. That seems to be just asking for an open season.

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      9
      ·
      2 months ago

      Potentially both, but you don’t really have to ask to be hacked. Just put something into the public internet and automated scanning tools will start checking your service for popular vulnerabilities.

    • JustAnotherKay@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      2 months ago

      He told them which AI he used to make the entire codebase. I’d bet it’s way easier to RE the “make a full SaaS suite” prompt than it is to RE the code itself once it’s compiled.

      Someone probably poked around with the AI until they found a way to abuse his SaaS

    • RedditWanderer@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      2 months ago

      Doesn’t really matter. The important bit is he has no idea either. (It’s likely the former and he’s blaming the weirdos trying to get in)

  • rekabis@programming.dev
    link
    fedilink
    arrow-up
    56
    ·
    2 months ago

    The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors making a senior incapable of working on their own stuff because they’re constantly in janitorial mode.

    • daniskarma@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      19
      ·
      edit-2
      2 months ago

      Plenty of good programmers use AI extensively while working. Me included.

      Mostly as an advanced autocomplete, template builder, or documentation parser.

      You obviously need to be good at it so you can see at a glance whether the written code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.

      Obviously you cannot develop without programming knowledge, but with programming knowledge it’s just another tool.

      • Nalivai@lemmy.world
        link
        fedilink
        arrow-up
        12
        ·
        2 months ago

        I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, and difficult-to-find bugs while maintaining false confidence in their code and themselves.
        I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was able to find that shit by doing external validation like talking to the dev or brainstorming ways to test it, things you categorically cannot do with an unreliable random-word generator.

        • daniskarma@lemmy.dbzer0.com
          link
          fedilink
          arrow-up
          3
          ·
          edit-2
          2 months ago

          That’s why you use unit tests and integration tests.
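
          As a minimal sketch of the idea (function name and behaviour invented purely for illustration; the point is that the tests pin down the intended behaviour, whoever or whatever wrote the implementation):

          ```c
          #include <assert.h>

          /* Hypothetical function under test -- stands in for whatever
             the LLM (or the human) produced. */
          static int clamp_to_ten(int x) {
              return x > 10 ? 10 : x;
          }

          int main(void) {
              /* Unit tests: if the generated implementation is wrong, these fail. */
              assert(clamp_to_ten(5) == 5);
              assert(clamp_to_ten(42) == 10);
              assert(clamp_to_ten(-3) == -3);
              return 0;
          }
          ```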

          I can write bad code myself or copy bad code from who-knows-where. It’s not something introduced by LLMs.

          Remember the famous Linus letter? “You code this function without understanding it and thus your code is shit.”

          As I said, just a tool like many others before it.

          I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were LLM and which parts I wrote fully by myself, and, to be honest, I don’t think anyone would be able to tell the difference.

          It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.

          I may come back with a particular piece of code that I specifically remember being an output from DeepSeek, and probably within the whole context it would be indistinguishable.

          Also, not all LLM usage is copying from it. Many times you copy to it and ask the thing to explain code to you, or ask general questions. For instance, to look up specific functions in extensive C# libraries.

          • Nalivai@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            1 month ago

            That’s why you use unit test and integration test.

            Good start, but not even close to being enough. What if the code introduces UB? Unless you specifically look for that, and nobody does, neither unit nor on-target tests will find it. What if it’s drastically inefficient? What if there are weird and unusual corner cases?
            Now you spend more time looking for all of that and designing tests that you wouldn’t have needed if you’d had proper practices from the beginning.

            It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.

            But that’s worse! You do realise how that’s worse, right? You lose all the external ways to validate the code; now you have to treat all the code as malicious.

            For instance, to look up specific functions in extensive C# libraries.

            And spend twice as much time trying to understand why you can’t find a function that your LLM just invented with the absolute certainty of a fancy autocomplete. And if that’s an easy task for you, well, then why do you need this middle layer of randomness? I can’t think of a reason to search anywhere but the documentation, instead of introducing this weird game of “will it lie to me”.

            • daniskarma@lemmy.dbzer0.com
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              1 month ago

              Any human-written code can and will introduce UB.

              Also, I don’t see how it would take you more than 5 seconds to verify that a given function does not exist. It has happened to me, an LLM suggesting a nonexistent function, and searching by function name in the docs is instantaneous.

              If you don’t want to use it, don’t. I have been using it for more than a year and I haven’t run into any of those catastrophic issues. It’s just a tool like many others I use for coding. Not even the most important one; for instance, I think LSP was a greater improvement to my coding efficiency.

              It’s like using Neovim. Some people would post me a list of all the things that can go bad when making a Frankenstein IDE out of an ancient text editor. But if it works for me, it works for me.

              • Nalivai@lemmy.world
                link
                fedilink
                arrow-up
                1
                ·
                1 month ago

                Any human-written code can and will introduce UB.

                And there is an enormous amount of safeguards, tricks, practices, and tools we have come up with to combat it. All of those are categorically unavailable to an autocomplete tool, or to someone who exclusively uses an autocomplete tool to code.

                Also, I don’t see how it would take you more than 5 seconds to verify that a given function does not exist. It has happened to me, an LLM suggesting a nonexistent function, and searching by function name in the docs is instantaneous.

                Which means you can work with documentation. Which means you really, really don’t need the middle layer, like, at all.

                I haven’t run into any of those catastrophic issues.

                Glad you didn’t, but also, I’ve reviewed enough generated code to know that a lot of the time people think they’re OK when in reality they’ve just introduced an esoteric memory leak in a critical section. People who didn’t do it by themselves, but did it because the LLM told them to.

                If you don’t want to use it, don’t.

                It’s not about me. It’s about other people introducing shit into our collective lives, making it worse.

                • daniskarma@lemmy.dbzer0.com
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  edit-2
                  1 month ago

                  You can actually apply those tools and procedures to automatically generated code, exactly the same as to any other piece of code. I don’t see the impediment here…

                  You must be able to understand that searching by name is not the same as searching by definition; nothing more to add here…

                  Why would you care whether the shit code submitted to you is bad because it was generated with AI, because it was copied from SO, or because it’s brand-new shit code written by someone? If it’s bad, it’s bad. And bad code has existed since forever. Once again, I don’t see the impact of AI here. If someone is unable to see that a particular generated piece of code has issues, I don’t see how they are magically going to be able to see the issues in copy-pasted code or in code they wrote themselves. If they don’t notice, they don’t, no matter the source.

                  I will go back to the Turing test. If you don’t even know whether the bad code was generated, copied, or just written by hand, how are you even able to tell that AI is the issue?

        • HumanPerson@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 months ago

          There is an exception to this, I think. I don’t make AI write much, but it is convenient to give it a simple Java class and say “write a toString” and have it spit out something usable.

    • millie@beehaw.org
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      2 months ago

      Depending on what it is you’re trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that’s complete, getting suggestions from an LLM that’s been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it’s full of shit is part of the exercise.

      It’s like mid-way between rote following tutorials, modding, and asking for help in support channels. It isn’t as rigid as the available tutorials, and though it’s prone to hallucination and not as knowledgeable as support channel regulars, it’s also a lot more patient in many cases and doesn’t have its own life that it needs to go live.

      Decent learning tool if you’re ready to check what it’s doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it’ll work, though? That’s not going to go well at all.

    • Devanismyname@lemmy.ca
      link
      fedilink
      English
      arrow-up
      6
      ·
      2 months ago

      It’ll just keep getting better at it over time, though. The current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than now.

  • formulaBonk@lemm.ee
    link
    fedilink
    English
    arrow-up
    45
    ·
    2 months ago

    Reminds me of the days before AI assistants, when people copy-pasted code from forums and you’d get questions like “I found this code and I know what every line does except this ‘for( int i = 0; i < 10; i ++)’ part. Is this someone using an unsupported expression?”

      • Moredekai@lemmy.world
        link
        fedilink
        arrow-up
        39
        ·
        2 months ago

        It’s a standard-format for-loop. It’s creating the integer variable i and setting it to zero. The second part says “do this while i is less than 10”, and the last part says what to do after each pass of the loop: increment i by 1. Under this would be the actual stuff you want to be doing in that loop. Assuming nothing in the rest of the code is manipulating i, it’ll do this 10 times and then move on.
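
        Spelled out in runnable form (the body here is just a stand-in for the real work):

        ```c
        #include <stdio.h>

        int main(void) {
            int sum = 0;
            for (int i = 0; i < 10; i++) {
                /* body runs with i = 0, 1, ..., 9 -- ten passes in total */
                sum += i;
            }
            printf("sum of 0..9 = %d\n", sum); /* prints 45 */
            return 0;
        }
        ```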

        • Fermion@feddit.nl
          link
          fedilink
          arrow-up
          6
          ·
          2 months ago

          I would also add that usually i will be used inside the code block to index locations within whatever data structures need to be accessed. Keeping track of how many times the loop has run has more utility than just making sure something is repeated 10 times.
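
          For example (the data is made up, just to show i doing double duty as a position):

          ```c
          #include <stdio.h>

          int main(void) {
              int scores[10] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
              int total = 0;
              for (int i = 0; i < 10; i++) {
                  total += scores[i]; /* i is both the loop counter and the array index */
              }
              printf("total = %d\n", total); /* prints 39 */
              return 0;
          }
          ```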

      • jqubed@lemmy.world
        link
        fedilink
        arrow-up
        12
        ·
        2 months ago

        @Moredekai@lemmy.world posted a detailed explanation of what it’s doing, but just to chime in that it’s an extremely basic part of programming. Probably a first week of class if not first day of class thing that would be taught. I haven’t done anything that could be considered programming since 2002 and took my first class as an elective in high school in 2000 but still recognize it.

      • JustAnotherKay@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        edit-2
        2 months ago

        for( int i = 0; i < 10; i ++)

        This reads as “create an integer variable i and set it to 0. Run the following code, and once each pass completes, add 1 to i. Repeat until i reaches 10.”

        int i = 0 initializes i, tells the compiler it’s an integer (whole number), and assigns 0 to it, all at once.

        i++ can be written a few ways, but they all say “add 1 to i”.

        i < 10 tells it to stop at 10.

        for tells it to loop, and starts a block, which is what will actually be looping.

        Edits: a couple of clarifications
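
        The same loop can be unrolled into a while, which makes the three pieces and when they run explicit:

        ```c
        #include <stdio.h>

        int main(void) {
            /* for (int i = 0; i < 10; i++) { ... } is equivalent to: */
            int i = 0;             /* initializer: runs once, before anything else */
            while (i < 10) {       /* condition: checked before every pass */
                printf("%d\n", i); /* loop body */
                i++;               /* increment: runs after every pass */
            }
            return 0;
        }
        ```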

    • barsoap@lemm.ee
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      2 months ago

      i <= 9, you heathen. Next thing you’ll do is i < INT_MAX + 1 and then the shit’s steaming.

      I’m cooked, see thread.

        • barsoap@lemm.ee
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          2 months ago

          I mean i < 10 isn’t wrong as such; it’s just good practice to always use <=, because in the INT_MAX case you have to, and everything should be regular, because of the principle of least astonishment: that 10 might become a #define FOO 10, which then might become #define FOO INT_MAX. Each of those changes looks valid in isolation, but if there’s only a single i < FOO in your codebase, you’ve introduced a bug by spooky action at a distance. (Overflow on int is undefined behaviour in C, in case anyone is wondering what the bug is.)

          …never believe anyone who says “C is a simple language”. Their code is shoddy and full of bugs, and they should be forced to write Rust for their own good.

          • kevincox@lemmy.ml
            link
            fedilink
            arrow-up
            5
            ·
            edit-2
            2 months ago

            But your case is wrong anyway, because i <= INT_MAX will always be true, by definition. By your argument, < is actually better, because it is consistent: from < 0 to iterate 0 times up to < INT_MAX to iterate the maximum number of times. INT_MAX + 1 is the problem, not <, which is the standard way to write for loops, and the standard for a reason.

            • barsoap@lemm.ee
              link
              fedilink
              arrow-up
              2
              ·
              edit-2
              2 months ago

              You’re right; that’s what I get for not having written a line of C in, what, 15 years. Bonus challenge: write for i in i32::MIN..=i32::MAX in C, that is, iterate over the whole range, start and end inclusive.

              (I guess the ..= might be where my confusion came from, because Rust’s .. is end-exclusive and thus like <, but also not what you want, because i32::MAX + 1 panics.)

                • barsoap@lemm.ee
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 months ago

                  Would you be bold enough to write if (i++ == INT_MAX) break? The result of the increment is never used, but an increment is being done, at least syntactically, and it overflows, at least theoretically, so maybe (I’m not 100% sure) the compiler could be allowed to break out into song because undefined behaviour allows anything to happen.
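
                  For the record, one well-known way to cover the full range with no overflowing increment at all is to test before incrementing, so the ++ never runs on INT_MAX (assumes the usual 32-bit int):

                  ```c
                  #include <limits.h>
                  #include <stdio.h>

                  int main(void) {
                      long long count = 0; /* wide enough to hold all 2^32 iterations */
                      for (int i = INT_MIN; ; i++) {
                          count++; /* loop body: every int value is visited exactly once */
                          if (i == INT_MAX)
                              break; /* leave before the increment could overflow */
                      }
                      printf("%lld values visited\n", count); /* 4294967296 on 32-bit int */
                      return 0;
                  }
                  ```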

  • Nangijala@feddit.dk
    link
    fedilink
    arrow-up
    36
    ·
    2 months ago

    This feels like the modern version of those people who gave out their credit card numbers back in the 2000s and would freak out when their bank accounts got drained.

  • Takumidesh@lemmy.world
    link
    fedilink
    arrow-up
    34
    ·
    2 months ago

    This is satire / trolling for sure.

    LLMs aren’t really at the point where they can spit out an entire program, including handling deployment, environments, etc. without human intervention.

    If this person is ‘not technical’ they wouldn’t have been able to successfully deploy and interconnect all of the pieces needed.

    The AI may have been able to spit out snippets, and those snippets may be very useful, but where it stands, it’s just not going to be able to, with no human supervision/overrides, write the software, stand up the DB, and deploy all of the services needed. With human guidance, sure, but without someone holding the AI’s hand it just won’t happen (remember, this person is ‘not technical’).

    • ✨🫐🌷🌱🌌🌠🌌🌿🪻🥭✨@sh.itjust.worksBanned from community
      link
      fedilink
      English
      arrow-up
      25
      ·
      2 months ago

      Idk, I’ve seen some crazy complicated stuff woven together by people who can’t code. I’ve got a friend who has no job and is trying to make a living off coding while, for 15+ years, being totally unable to learn coding. Some of the things they make are surprisingly complex. Though also, and the person mentioned here may do similarly, they don’t ONLY use AI. They use GitHub a lot too. They make nearly nothing themselves, but go through GitHub and basically combine large chunks of code others have made with AI-generated code. Somehow they do it well enough to have done things with servers, cryptocurrency, etc., all the while not knowing any coding language.

    • MyNameIsIgglePiggle@sh.itjust.works
      link
      fedilink
      arrow-up
      11
      ·
      2 months ago

      Claude Code can make something that works, but it’s kinda over-engineered and really struggles to make an elegant solution that maximises code reuse; it’s the opposite of DRY.

      I’m doing a personal project at the moment and used it for a few days. I made good progress, but it got to the point where it was just a spaghetti mess of jumbled code, so I deleted it and went back to implementing each component one at a time and then wiring them together manually.

      My current workflow is basically never let them work on more than one file at a time, and build the app one component at a time, starting at the ground level and then working in, so for example:

      Create base classes that things will extend, then create an example data model class, and iterate on that architecture A LOT until it’s really elegant.

      Then I’ve been getting it to write me a generator, not the actual code for the models.

      Then (level 3) we start with the UI layer, so now we make a UI kit the app will use and reuse for different components.

      Then we make a UI component that will be used in a screen. I’m using Flutter as an example, so it would be a stateless component.

      We now write tests for the component.

      Now we do a screen, and I import each of the components.

      It’s still very manual, but it’s getting better. You are still going to need a human coder, I think forever, but there are two big problems that aren’t being addressed, because people are just putting their heads in the sand and saying nah, can’t do it, or, like the clown OP in the post, thinking they can do it.

      1. Because dogs be clownin’, the public perception of programming as a career will be devalued: “I’ll just make it myself!” Or like my rich engineer uncle said to me when I was doing websites professionally: a 13-year-old can just make a website, why would I pay you so much to do it? THAT FUCKING SUCKS. But a similar attitude has existed forever (“I’ll just hire Indians”). This is bullshit, but perception is important, and it’s going to require you to justify yourself for a lot more work.

      2. And this is the flip-side good news: these skills you have developed are going to be SO MUCH FUCKING HARDER TO LEARN. When you can just say “hey, generate me an app that manages customers and follow-ups” and something gets spat out, you aren’t going to go through the grind required to work out basic shit. People will simply not get to the same level they are at now.

      That logic about how to scaffold and architect an app in a sensible way, USING AI TOOLS, is actually the new skillset. You need to know how to build the app, and then how to efficiently and effectively use the new tools to actually construct it. Then you need to be able to do code review for each change.

      </rant>

    • nick@midwest.social
      link
      fedilink
      arrow-up
      5
      ·
      2 months ago

      Mmmmmm no, Claude definitely is. You have to know what to ask it, but I generated an entire dead man’s switch daemon written in Go in like an hour with it, to see if I could.

      • Takumidesh@lemmy.world
        link
        fedilink
        arrow-up
        10
        ·
        2 months ago

        So you did one simple program.

        SaaS involves a suite of tooling and software, not just a program that you build locally.

        You need, at a minimum, database deployments (with scaling and redundancy) and cloud software deployments (with scaling and redundancy).

        SaaS is a full-stack product, not a widget you run on your local machine. You would need to deputize the AI to log into your AWS account (sorry, it would need to create your AWS account) and fully provision your cloud infrastructure.

        • PeriodicallyPedantic@lemmy.ca
          link
          fedilink
          arrow-up
          4
          ·
          2 months ago

          Lol they don’t need scaling and redundancy to work. They just need scaling and redundancy to avoid being sued into oblivion when they lose all their customer data.

          As a full time AI hater, I fully believe that some code-specialized AI can write and maybe even deploy a full stack program, with basic input forms and CRUD, which is all you need to be a “saas”.

          It’s gonna suck, and be unmaintainable, and insecure, and fragile. But I bet it could do it and it’d work for a little while.

          • Maxxie
            link
            fedilink
            English
            arrow-up
            3
            ·
            2 months ago

            That’s not a “working SaaS”, tho.

            It’s like calling hello world a “production-ready CLI application”.

            • PeriodicallyPedantic@lemmy.ca
              link
              fedilink
              arrow-up
              3
              ·
              2 months ago

              What makes it “working”, is that the Software part of Software as a Service, is available as a Service.

              The service doesn’t have to scale to a million users. It’s still a SaaS if it has one customer with like 4 users.

              Is this a pedantic argument? Yes.
              Are you starting a pedantic fight about the specific definition of SaaS? Also yes.

                • PeriodicallyPedantic@lemmy.ca
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  edit-2
                  2 months ago

                  Kinda.
                  Ignoring the pedantic take that nearly every website is a SaaS,
                  and the slightly less pedantic take that every interactive website is a SaaS:

                  If your website is an app that does a thing that a user wants, it’s a SaaS.
                  Your website just does MPEG-to-GIF transcoding? That’s a SaaS. Online text editor? SaaS. Online Tamagotchi? SaaS.

                  If it doesn’t scale to the number of users who want or need to use it, then it’s not a very good SaaS. But SaaS it is.

    • Tja@programming.dev
      link
      fedilink
      arrow-up
      4
      ·
      2 months ago

      Might be satire, but I think some “products based on LLMs” (not LLMs alone) would be able to. There are pretty impressive demos out there, but honestly I haven’t tried them myself.

    • qaz@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      2 months ago

      It’s further along than you think. I spoke to someone about it today, and he told me it produced a basic SaaS app for him. He said that it looked surprisingly okay and the basic functionalities actually worked too. He did note that it kept using deprecated code, consistently made a few basic mistakes despite being told how to avoid them, and failed to produce nontrivial functionalities.

      He did say that it used very common libraries and we hypothesized that it functioned well because a lot of relevant code could be found on GitHub and that it might function significantly worse when encountering less popular frameworks.

      Still, it’s quite impressive, although not surprising: it was only a matter of time before people started feeding an IDE’s feedback back into the model.

      • Takumidesh@lemmy.world
        link
        fedilink
        arrow-up
        8
        ·
        edit-2
        2 months ago

        I’m skeptical. You are saying that your team has no hand in the provisioning and you deputized an AI with AWS keys and just let it run wild?

      • hubobes@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        edit-2
        2 months ago

        How? We’ve been trying to adopt AI for dev work for years now, and every time the next-gen tool or model gets released it fails spectacularly at basic things. And that’s just the technical stuff; I still have no idea how to tell it to implement our use cases, as it simply does not understand the domain.

        It is great at building things others have already built and it could train on, but we don’t really have a use case for that.

    • iAvicenna@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      2 months ago

      My impression is that with some guidance it can put together a basic skeleton of complex stuff too. But you need specialist-level knowledge to fix the mistakes that fail at compile time, or worse, the ones that compile but don’t achieve the intended result at all. To me it has been most useful for getting the correct arguments for argument-heavy libraries like plotly, remembering how to do stuff in bash, or learning something from scratch like three.js. As soon as you try to do something more complex than it can handle, it confidently starts cycling through the same couple of mistakes over and over. The keywords it spews in those mistakes can sometimes be helpful to direct your search online, though.

      So it has the potential to be helpful to a programmer, but it can’t yet replace programmers the way tech bros like to fantasize.

  • Phoenicianpirate@lemm.ee
    link
    fedilink
    English
    arrow-up
    32
    ·
    2 months ago

    I took a web dev boot camp. If I were to use AI I would use it as a tool and not the motherfucking builder! AI gets even basic math equations wrong!

    • KyuubiNoKitsune
      link
      fedilink
      arrow-up
      6
      ·
      2 months ago

      Can’t expect predictive text to be able to do math. You can get it to use a programming language to do it tho. If you ask it in a programmatic way, it’ll generate and run its own code. That’s the only way I got it to count the number of r’s in “strawberry”.
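      (For instance, the letter-counting task that famously trips up token-based models is a one-liner once the model writes code instead of predicting text:)

```python
def count_letter(word: str, letter: str) -> int:
    # Deterministic character counting -- trivial for code,
    # unreliable for next-token prediction over subword tokens.
    return word.count(letter)

print(count_letter("strawberry", "r"))  # → 3
```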