• LeFantome@programming.dev · 7 points · 5 hours ago

    Can Open Source defend against copyright claims for AI contributions?

    If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?

    • proton_lynx@lemmy.world · 3 points · 3 hours ago (edited)

      what are the legal implications?

      It would be so fucking nice if we could use AI to bypass copyright claims.

  • notannpc@lemmy.world · 11 points · 7 hours ago

    AI is at its most useful in the early stages of a project. Imagine coming to the fucking ssh project with AI slop thinking it has anything of value to add 😂

    • HaraldvonBlauzahn@feddit.org (OP) · 3 points · 56 minutes ago (edited)

      The early stages of a project are exactly where you should think long and hard about what exactly you want to achieve, what qualities the software should have, what the detailed requirements are, how you will test them, and what the UI should look like. And from that, you derive the architecture.

      AI is fucking useless at all of that.

      In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You wouldn’t ask a bricklayer’s apprentice to draw up the plans for a new house.

      And if your difficulty is a lack of detailed knowledge of a programming language, it might be - depending on the case! - the best approach to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.

      • MonkderVierte@lemmy.ml · 1 point · 18 minutes ago

        the best approach to write a first prototype in a language you know well

        Ok, writing a web browser in POSIX shell using yad now.

  • Prime@lemmy.sdf.org · 9 points · 9 hours ago

    Microsoft is doing this today. I can’t link it because I’m on mobile. It is in dotnet. It is not going well :)

  • oakey66@lemmy.world · 24 points · 13 hours ago

    It’s not good because it has no context for what is correct or not. It’s constantly making up functions that don’t exist or attributing functions to packages that don’t exist. Its responses are often sloppy because the source code it parrots is an amalgamation of good coding and terrible coding. If you use this for your production projects, you likely won’t know the code well enough to fix it when it breaks, it’ll likely have security flaws, and it will likely have errors in it.

  • atzanteol@sh.itjust.works · 52 points · 16 hours ago

    Have you used AI to code? You don’t say “hey, write this file” and then commit it as “AI Bot 123 <aibot@company.com>”.

    You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.

    You’re not exactly keeping track of everything the bots did.

      • atzanteol@sh.itjust.works · 3 points · 15 hours ago

        I’ll admit I skimmed most of that train wreck of an article - I think it’s pretty generous to say it had a point. It’s mostly recountings of people complaining about AI. But if they hid something in there about it being remarkably useful in some cases, just not for writing entire applications or features, then I guess I’m on board?

        • HaraldvonBlauzahn@feddit.org (OP) · 9 points · 14 hours ago

          Well, sometimes I think the web is flooded with advertising and spam praising AI. For these companies it makes perfect sense, because billions of dollars have been spent on them and they are trying to cash in before the tide might turn.

          But do you know what is puzzling (and you do have a point here)? Many posts that defend AI do not engage in logical argumentation: they argue beside the point, appeal to emotions, rely on the short-circuited argument that “new” always equals “better”, or claim that AI is useful for coding as long as the code is not complex (compare that to the objection that mathematics is simple as long as it is not complex - a red herring and a laughable argument). So, many thanks for pointing out the above and giving, in a few words, a bunch of examples which underline that one has to think carefully about this topic!

          • atzanteol@sh.itjust.works · 5 points · 11 hours ago

            The problem is that you really only see two sorts of articles:

            “AI is going to replace developers in 5 years!”

            “AI sucks because it makes mistakes!”

            I actually see a lot more of the latter on social media, to the point where I’m developing a visceral reaction to the phrase “AI slop”.

            Both stances are patently ridiculous though. AI cannot replace developers and it doesn’t need to be perfect to be useful. It turns out that it is a remarkably useful tool if you understand its limitations and use it in a reasonable way.

    • Corngood@lemmy.ml · 6 points · 12 hours ago

      Or to copy something and modify it 30 times.

      This seems like a very bad idea. I think we just need more Lisp and less AI.

      • atzanteol@sh.itjust.works · 3 points · 12 hours ago (edited)

        “Hey AI - Create a struct that matches this JSON document that I get from a REST service”

        Bam, it’s done.

        Or

        "Hey AI - add a schema prefixed on all of the tables and insert statements in the SQL script.

  • Blue_Morpho@lemmy.world · 23 points · 15 hours ago

    If humans are so good at coding, how come there are 8,100,000,000 people and only 1,500 are able to contribute to the Linux kernel?

    I hypothesize that AI has average human coding skills.

    • HaraldvonBlauzahn@feddit.org (OP) · 1 point · 2 hours ago (edited)

      The average coder is a junior, due to the explosive growth of the field (similar to how, in some fast-growing nations, the average age is very young). Thus, average code is far below what good code is.

      On top of that, good code cannot be automatically identified by algorithms. Some very good codebases might look bad at a superficial level. For example, the codebase of LMDB is very different from what common style guidelines suggest, but it is actually a masterpiece which is widely used. And vice versa: it is not difficult to make crappy code look pretty.

  • andybytes@programming.dev · 5 points · 13 hours ago

    My theory is that not a lot of people actually like this AI crap. They just lean into it for fear of being left behind. Now you all think it’s just gonna fail and go bankrupt. But a lot of ideas in America are subsidized. And they don’t work well, but they still go forward. It’ll be you, the taxpayer, who funds these stupid ideas that don’t work, that are hostile to our very well-being.

  • teije9 · 13 points · 16 hours ago

    Who makes a contribution under the name aibot514? No one. People use AI for open source contributions, but more in a ‘fix this bug’ way, not as fully automated contributions under the name ai123.

    • lemmyng@lemmy.ca · 33 points · 15 hours ago

      Counter-argument: if AI code were good, the owners would create official accounts to contribute to open source, because they would be openly demonstrating how well it does. Instead, all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.

  • andybytes@programming.dev · 4 points · 13 hours ago

    AI is just the lack of privacy, an authoritarian dragnet, remote control over other people’s computers, web scraping, the complete destruction of America’s art scene, the stupidification of America, and copyright infringement, with a sprinkling of baby death.

  • HobbitFoot @thelemmy.club · 5 points · 14 hours ago

    As a dumb question from someone who doesn’t code, what if closed source organizations have different needs than open source projects?

    Open source projects seem to hinge a lot more on incremental improvements and on change only for the benefit of users. In contrast, closed source organizations seem to use code more to quickly develop a new product or change that justifies the money. Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?

    • David Gerard@awful.systems · 6 points · 10 hours ago

      Baldur Bjarnason (who hates AI slop) has posited precisely this:

      My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

      • HobbitFoot @thelemmy.club · 2 points · 10 hours ago

        That’s basically my question. If the standards of code are different, AI slop may be acceptable in one scenario but unacceptable in another.

    • bignose@programming.dev · 5 points · 11 hours ago (edited)

      Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?

      Because most software is internal to the organisation (therefore closed by definition) and never gets compared with or used outside that organisation: yes, I think that when that software barely works, it is taken as good enough and there’s no incentive to put in more effort to improve it.

      My past year (and more) of programming business-internal applications has been characterised by upper management imperatives to “use Generative AI, and we expect that to make you nerd faster”, without any effort spent to figure out whether there is any net improvement in the result.

      Certainly there’s no effort spent to determine whether it’s a net drain on our time and on the quality of the result - which everyone on our teams can see is the case. But we are pressured to continue using it anyway.

    • MajorasMaskForever@lemmy.world · 4 points · 12 hours ago (edited)

      I’d argue the two aren’t as different as you make them out to be. Both types of projects want a functional codebase, both have limited developer resources (communities need volunteers, businesses have a budget limit), and both can benefit greatly from the development process being sped up. Many development practices that are industry standard today started in the open source world (style guides and version control strategy, to name two heavy hitters), and there’s been some bleed-through from the other direction as well (tool juggernauts like Atlassian having new open source alternatives made directly in response).

      No project is immune to bad code. There’s even a lot of bad code out there that was believed to be good at the time: it mostly worked, in retrospect we learned how bad it was, but no one wanted to fix it.

      The end goals and purposes are for sure different between community passion projects and corporate, financially driven projects. But the way you get there is more or less the same, and that’s the crux of the article’s argument: historically, open source and closed source have done the same thing, so why is the usage of this one tool so wildly different?

      • HobbitFoot @thelemmy.club · 2 points · 10 hours ago

        Historically, open source and closed source have done the same thing, so why is the usage of this one tool so wildly different?

        Because, as noted by another replier, open source wants working code and closed source just wants code that runs.

    • HaraldvonBlauzahn@feddit.org (OP) · 4 points · 13 hours ago

      When was the last time you decided to buy a car that barely drives?

      And another thing: there are some tech companies that operate very short-term, like typical social media start-ups, of which about 95% go bust within two years. But a lot of computing is very long-term, with code bases that are developed over many years.

      The world only needs so many shopping list apps - and there exist enough of them that writing one is not profitable.

  • thingsiplay@beehaw.org · 7 points · 16 hours ago

    Mostly closed source, because open source rarely accepts them as they are often just slop. Just assuming stuff here, I have no data.

    • unalivejoy@lemm.ee · 9 points · 15 hours ago

      And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.

    • magic_lobster_party@fedia.io · 6 points · 14 hours ago

      The creator of curl just posted a rant about users submitting AI slop vulnerability reports. It has gotten so bad that they will reject any report they deem AI slop.

      So there’s some data.

    • luciole (he/him)@beehaw.org · 4 points · 9 hours ago

      I think it’s established that genAI can spit out straightforward toy examples of a few hundred lines. Bungalows aren’t simply big birdhouses, though.