• techclothes@lemmy.world · 24 days ago

      I don’t know the guy, but I do know the site, which is really nice for seeing how other games implemented UI. Makes complete sense for AI assholes to want all that data.

    • BigPotato@lemmy.world · 24 days ago

      Yeah, I’ve been a pirate for so long I have zero moral grounds to be against using copyrighted stuff for free…

      Except I’m not burning a small nation’s worth of energy to download a NoFX album, and I’m not recreating that album and selling it to people when they ask for a copy of Heavy Petting Zoo (I’m just giving them the real songs). So, moral high ground regained?

  • Vanilla_PuddinFudge@infosec.pub · 24 days ago

    Then there’s always that one guy who’s like “what about memes?”

    MSPaint memes are waaaaay funnier than AI memes, if only due to being a little bit ass.

  • Tartas1995@discuss.tchncs.de · edited · 23 days ago

    So many in these comments are like “what about the ethically sourced ones?”

    Which ones? Name one.

    None of the big ones are. Wtf is “ethically sourced”? E.g. eBay wants to collect data for AI shit. My mom has an account, and she could opt out of them using her data, but when I told her about it, she told me that she didn’t understand. And she moved on. She just didn’t understand what the fuck they are doing and why she should care. But I guess it is “ethically” sourced, as they kinda asked by making it opt-out, I guess.

    That surely is very ethical and you can not criticize it for it… As we all know, a 50yo adult fucking a 14yo would also be totally cool as long as the 14yo doesn’t say no. Right? That is how our moral compass works. /S

    Fucking disgusting. All of you tech bros complain about people not getting AI or tech in general and then talk about ethically sourced data. I spit on you.

    I love IT, I work in it and I live it, but I have morals and you could too.

    Edit: after a bunch of messages telling me that I am wrong, I wonder when they will realize that they are making my point. I am saying that it isn’t ethically sourced without consent, and uninformed consent isn’t consent. And they are telling me, an IT professional with an interest in how machine learning functions ever since AlphaGo, 7 years before the AI hype, that I don’t understand it. If I don’t understand it, what makes you believe the general public understands and can consent to it? If I am wrong about AI, I am wrong about AI, but I am not wrong about the unethical nature of that data; people don’t understand it.

      • cybersin@lemm.ee · 24 days ago

        Yeah, except royalties in music are almost always a joke. Those artists are going to make much less off their AI voice than if they actually appeared in studio and the end product is going to be worse. If AI cost the same or more, there would be no market for it. Relevant story about Hollywood actors who sold AI likenesses.

        Even if it was actually “ethically trained”, the end result is still horrible.

        Also, paying to have an AI Snoop Dogg in your song is the lamest shit I’ve ever heard.

          • cybersin@lemm.ee · 22 days ago

            Someone saying whatever heinous shit they want using your voice seems a bit unethical, as does getting paid pennies for it.

            • If you’ve sold them your voice under the condition they can do whatever they like with it, I don’t see it being unethical. You walked into it informed (presumably) and accepted the “pennies” (presumably). It may be stupid. What comes out may be shit. But it’s not “unethical”.

              If they stole your voice, or if you had content limits that they breached, or if they’re paying you less than you agreed for, then yes, it’s unethical.

      • coolkicks@lemmy.world · 24 days ago

        That AI was trained on absolute mountains of data that wasn’t ethically gained, though.

        Just because an emerald ring is assembled by a local jeweler doesn’t mean the diamond didn’t come from slave labor in South Africa.

        • ArchRecord@lemm.ee · 24 days ago

          Voice Swap was not trained on any data that wasn’t “ethically gained.”

          Read the bottom of their FAQ that lists the exact databases in question.

          The couple of datasets they used on top of all the data they directly pay artists to consensually provide have permissive licenses that only require attribution for use, and gathered their information directly from a group of willing, consenting participants.

          They are quite literally the exception to the rule of companies claiming they’re ethical, then using non-ethically sourced data as a base for their models.

    • suy@programming.dev · 24 days ago

      Which ones? Name one.

      What’s wrong with what Pleias or AllenAI are doing? Those are using only data in the public domain or suitably licensed, and they aren’t burning tons of energy in the process. They release everything as open source. For real. Public everything. Not the shit that Meta is doing, or the weights-only DeepSeek.

      It’s incredible seeing this shit over and over, especially in a place like Lemmy, where people are supposed to be thinking outside the box and to be used to less mainstream stuff, like Linux, or, well, the fucking fediverse.

      Imagine people saying “yeah, fuck operating systems and software” because their only experience has been Microsoft Windows. Yes, those companies/NGOs are not making the rounds in the news much, but they exist, the same way that Linux existed 20 years ago and was our daily driver.

      Do I hate OpenAI? Heck, yeah, of course I do. And the other big companies that are doing horrible things with AI. But I don’t hate everything in AI, because I happen not to be so ignorant that I see only the 99% of it.

      • Tartas1995@discuss.tchncs.de · edited · 24 days ago

        AllenAI has datasets based on GitHub, Reddit, Wikipedia and “web pages”.

        I wouldn’t call any of them ethically sourced.

        “Web pages” is vague as fuck and makes me question whether they requested the consent of the creators.

        “Gutenberg project” is the funniest tho.

        Listing GitHub, Reddit and Wikipedia tells me very clearly that they didn’t. They might have asked the providers, but those are not the creators. Whether or not the providers have a license for the data is irrelevant on moral grounds unless it was opt-in for the creators. Also, it has to be clearly communicated. Giving consent is not “not saying no”, it is a yes. Uninformed consent is not consent.

        When someone posted on Reddit in 2005 and forgot their password, they can’t delete their content from it. They didn’t post it with the knowledge that it would be used for AI training. They didn’t consent to it.

        The Gutenberg project… Dead authors didn’t consent to their work being used to destroy a profession that they clearly loved.

        So I bothered to check out one dataset from the names that you dropped, and it was unethical. I don’t understand why people don’t get it.

        What is wrong? That you think they are ethical when the first dataset that I looked at already isn’t.

        • merari42@lemmy.world · 24 days ago

          We generally had the reasonable rule that property ends at death. Intellectual property extending beyond the grave is corporatist 21st-century bullshit. In the past, all writing quickly got into the public domain, like it should: depending on the country, within 25 years of the publishing date or the author’s death. Project Gutenberg reflects the law and the reasonable practice of allowing writing to go into the public domain.

          • Tartas1995@discuss.tchncs.de · 24 days ago

            Good focus on 1 point, sadly bad point to focus on.

            What is lawful and legal, is not what is moral.

            The Holocaust was legal.

            Try again. Let’s start: should the invention of AI have an influence on how we treat data? Is there a difference between reproducing a work after the author’s death and using possibly millennia of public domain data to destroy the economic viability of a profession? If there is, should public domain law take that into account? Has the general public discussed these points and come to a consensus? Has that consensus been put into law?

            No? Sounds like the law is not up to date with the tech. So not only is legal not moral, legal isn’t up to date either.

            You understand the point of public domain, right? You understand that even if you were right (you aren’t), it wouldn’t resolve the other issues, right?

            • KeenFlame@feddit.nu · 23 days ago

              Yes. We should never have been idiotic with patents and other forms of gatekeeping information. Information is always free, and all forms of controlling it are folly.

              • Tartas1995@discuss.tchncs.de · 23 days ago

                Then don’t gatekeep e.g. your naked body and your loved ones’ secrets! Information should always be free and all forms of controlling it are folly! Do it. While you are at it, your, and your family’s, full names and places of employment please. Thanks!

                Oh wait, you don’t want to do that, right? Some information is private. You have some rights over some information. Ok, then let’s talk about it.

                • KeenFlame@feddit.nu · 23 days ago

                  Not what we are talking about. But you know that. Do you want to explain how to police public information without it being folly?

        • suy@programming.dev · 23 days ago

          I don’t know where you got that image from. AllenAI has many models, and the ones I’m looking at are not using those datasets at all.

          Anyway, your comments are quite telling.

          First, you pasted an image without alternative text, which is harmful for accessibility (a topic where these kinds of models can help, BTW, and one of the obvious no-brainer uses in which they help society).

          Second, you think that you need consent for using works in the public domain. You are presenting the most dystopic view of copyright that I can think of.

          Even with copyright in full force, there is fair use. I don’t need your consent to feed your comment into a text-to-speech model, an automated translator, a spam classifier, or one of the many models that exist and serve a legitimate purpose. The very image that you posted has very likely been fed into a classifier to rule out that it’s CSAM.

          And third, the fact that you think a simple deep learning model can do so much is, ironically, something you share with the AI bros who think the shit that OpenAI is cooking will do so much. It won’t. The legitimate uses of this stuff, so far, are relevant, but far less impactful than what you claimed. The “all you need is scale” people are scammers and deserve all the hate and regulation, but you can’t get past them and see that the good stuff exists and doesn’t get the press it deserves.

          • Tartas1995@discuss.tchncs.de · edited · 23 days ago

            https://allenai.org/dolma: scroll down to “read dolma paper” and click on it. It sends you to this site: https://www.semanticscholar.org/paper/Dolma%3A-an-Open-Corpus-of-Three-Trillion-Tokens-for-Soldaini-Kinney/ad1bb59e3e18a0dd8503c3961d6074f162baf710

            1. Funny how you speak about e.g. text-to-speech AI when I am talking about LLMs and image generation AIs. It is almost as if you didn’t want to critique my point.
            2. It is funny how you use legal terms like copyright when I talk about morality. It is almost as if I am not saying that you shouldn’t be legally allowed to work with public domain material, but that you shouldn’t call it ethical when it is not. It is also funny how you say it is fair use. I invite you to turn the whole of Harry Potter from text to speech and publish it. It is fair use, isn’t it? You know that you wouldn’t be in the right there. But again, this isn’t a legal argument, it is a moral one.
            3. Who said that I think it could replace writers or painters in quality or skill? I said it could ruin the economic viability of the profession. That is a very, very different claim.

            I want to address your statement about my telling behavior. Sorry, you are right. I am sorry for the screen reader crowd. You all probably know that alt text can be misleading and that something someone says on the internet isn’t a reliable source, so I hope you can forgive me, as you did your own research into AllenAI anyway.

      • SloganLessons@lemmy.world · 24 days ago

        It’s incredible seeing this shit over and over, specially in a place like Lemmy, where the people are supposed to be thinking outside the box, and being used to stuff which is less mainstream, like Linux, or, well, the fucking fediverse.

        Lemmy is just an open-source Reddit, with all the pros and cons.

        • wellheh@lemmy.sdf.org · 24 days ago

          It’s such a strange take, too. Like why do we have to include AI in our box if we fucking hate it?

    • Taleya@aussie.zone · 24 days ago

      What the fuck data could eBay collect to train AI? The fact that people buy Star Trek figurines??

      • TheOakTree@lemm.ee · 24 days ago

        You could train it to analyze sales tactics for different categories of items or even for specific items, then offer the AI’s conclusions as an ‘AI assistant’ locked behind a paywall.

        Plenty of use cases for collecting e-commerce data.

      • Tartas1995@discuss.tchncs.de · 24 days ago

        Thanks for making my point. People don’t understand and therefore can’t consent and therefore it isn’t ethically sourced data.

        • Taleya@aussie.zone · 23 days ago

          Mate, supercilious comments like this do not help either. They make you look like a raging boy crying wolf.

          “Ah yes, someone expressed incredulity at the viability of the business practice in this instance. I must tell them they are the problem.”

          I mean, you had a chance to point out the issues in depth, handed to you on a fuckin’ plate, but instead you chose to jam your head up your own butt.

    • JennyLaFae · 24 days ago

      One ethical AI usage I’ve heard of was a few artists who took an untrained bot and trained it on only their own artwork.

      • BoulevardBlvd · 23 days ago

        What’s an “untrained bot”? Did they code it from scratch themselves? I find it almost impossible to believe it wasn’t just a fork of an existing, unethical project but I’d love more detail

        • JennyLaFae · 23 days ago

          I remember they said they bought it and explained how they used it to increase their own productivity and how being trained on other artwork was a detriment because it wouldn’t generate in their style. Probably was a fork of an unethical project lol

    • But I guess it is “ethically” sourced as they kinda asked by making it opt out, I guess.

      No.

      As your mother’s case shows, making it “opt out” is emphatically not the ethical choice. It is the grifter’s choice because it comes invariably paired with difficult-to-find settings and explanations that sound like they come from a law book as dictated by someone simultaneously drunk and tripping balls.

      The only ethical option is “opt in”. This means people give informed consent (or, if they don’t bother to read and just click OK, at least they get consented hard like they deserve). It means you have to persuade people that the choice is good for them and not just for the service provider.

      TL;DR: Opt-in is the way you do things without icky “I don’t understand consent” vibes.

      • Tartas1995@discuss.tchncs.de · 23 days ago

        Did you read the whole comment? You understand that I was being sarcastic, and that I followed it by hinting at the idea that “she didn’t say no” is not considered consent in e.g. sexual encounters, raising the question of why it would be here?

        So we agree. You just misunderstood my comment.

  • brucethemoose@lemmy.world · edited · 24 days ago

    And this is where I split with Lemmy.

    There’s a very fragile, fleeting war between shitty, tech bro hyped (but bankrolled) corporate AI and locally runnable, openly licensed, practical tool models without nearly as much funding. Guess which one doesn’t care about breaking the law because everything is proprietary?

    The “I don’t care how ethical you claim to be, fuck off” attitude is going to get us stuck with the former. It’s the same argument as Lemmy vs Reddit, compared to a “fuck anything like reddit, just stop using it” attitude.


    What if it was just some modder trying a niche model/finetune to restore an old game, for free?

    That’s a rhetorical question, as I’ve been there: a few years ago, I used ESRGAN finetunes to help restore a game and (separately) a TV series. I used some open databases for data. The community loved it. I suggested an update in that same community (who apparently had no idea their beloved “remaster” involved old-school “AI”), and got banned for the mere suggestion.


    So yeah, I understand AI hate, oh do I. Keep shitting on Altman and AI bros. But anyone (like this guy) who wants to bury open-weights AI: you are digging your own graves.

    • forrgott@lemm.ee · 24 days ago

      Oh, so you deserve to use other people’s data for free, but Musk doesn’t? Fuck off with that one, buddy.

      • brucethemoose@lemmy.world · edited · 24 days ago

        Musk does too, if it’s openly licensed.

        Big difference is:

        • X’s data crawlers don’t give a shit because all their work is closed source. And they have lawyers to just smash anyone that complains.

        • X intends to resell and make money off others’ work. My intent is free, transformative work I don’t make a penny off of, which is legally protected.

        That’s another thing that worries me. All this is heading in a direction that will outlaw stuff like fanfics, game mods, fan art, anything “transformative” of an original work and used noncommercially, as pretty much any digital tool can be classified as “AI” in court.

    • haverholm@kbin.earth · 24 days ago

      What if it was just some modder trying a niche model/finetune to restore an old game, for free?

      • brucethemoose@lemmy.world · edited · 24 days ago

        Yeah? Well what if they got very similar results with traditional image processing filters? Still unethical?

        • superniceperson@sh.itjust.works · 24 days ago

          The effect isn’t the important part.

          If I smash a thousand orphan skulls against a house and wet it, it’ll have the same effect as a decent limewash. But people might have a problem with the sourcing of the orphan skulls.

          It doesn’t matter if you’we just a wittle guwy that collects the dust from the big corporate orphan skull crusher and just add a few skulls of your own, or you are the big corporate skull crusher. Both are bad people despite producing the same result as a painter that sources normal limewash made out of limestone.

          • brucethemoose@lemmy.world · edited · 24 days ago

            Even if all the involved data is explicitly public domain?

            What if it’s not public data at all? Like artificial collections of pixels used to train some early upscaling models?

            That’s what I was getting at: some upscaling models are really old, used in standard production tools under the hood, and completely legally licensed. Where do you draw the line between ‘bad’ and ‘good’ AI?

            Also I don’t get the analogy. I’m contributing nothing to big, enshittified models by doing hobbyist work, if anything it poisons them by making public data “inbred” if they want to crawl whatever gets posted.

              • brucethemoose@lemmy.world · edited · 24 days ago

                I’m trying to make the distinction between local models and corporate AI.

                I think what people really hate is enshittification. They hate the shitty capitalism of unethical, inefficient, crappy, hype and buzzword-laden AI that’s shoved down everyone’s throats. They hate how giant companies are stealing from everyone with no repercussions to prop up their toxic systems, and I do too. It doesn’t have to be that way, but it will be if the “fuck AI” attitude like the one on that website is the prevalent one.

    • Norah (pup/it/she) · 24 days ago

      It’s the same with c/Linux and folks marching in and defending Windows to the death. Some people just like to be contrarian ¯\_(ツ)_/¯

    • haverholm@kbin.earth · 24 days ago

      Honestly, I didn’t intend to block a dozen AI Bros today, but this has been like shooting fish in a barrel.

  • gamer@lemm.ee · 24 days ago

    Damn, I had no idea the Game UI DB guy was so based. Huge respect from me.

  • AnimalsDream@slrpnk.net · 24 days ago

    The more I see dishonest, blindly reactionary rhetoric from anti-AI people - especially when that rhetoric is identical to classic RIAA brainrot - the more I warm up to (some) AI.

    • uienia@lemmy.world · 24 days ago

      It is in fact the opposite of reactionary to not uncritically embrace your energy-guzzling, disinformation-spreading, profit-driven “AI”.

      As much as you don’t care about language, it actually means something, and you should take some time to look inwards, as you will notice who the reactionary is in this scenario.

      • AnimalsDream@slrpnk.net · 24 days ago

        “Disinformation spreading” is irrelevant in this discussion; LLMs are a whole separate conversation. This is about image generators. And on that topic, you position the tech as “energy guzzling” when that’s not necessarily always the case, as people often show, and as profit-driven, except what about the cases where it’s being used to contribute to the free information commons?

        And lastly, you’re leaving out the ableism in being blindly anti-AI. People with artistic skills are still at an advantage over people who lack them, are too poor to hire skilled artists, and/or are literally disabled, whether physically or cognitively. The fact is AI is allowing more people than ever to bring their ideas into reality, where otherwise they would never have been able to.

        • mke@programming.dev · 24 days ago

          Listen, if you want to argue for facilitating image creation for people who aren’t skilled artists, I—and many more people—are willing to listen. But this change cannot be built on top of the exploitation of worldwide artists. That’s beyond disrespectful, it’s outright cruel.

          I could talk about the other points you’re making, but if you were to remember one single thing from this conversation, please let it be this: supporting the AI trend as it is right now is hurting people. Talk to artists, to writers, even many programmers.

          We can still build the tech ethically when the bubble pops, when we all get a moment to breathe, and talk about how to do it right, without Sam Altman and his million greedy investors trying to drive the zeitgeist for the benefit of their stocks, at the cost of real people.

          • AnimalsDream@slrpnk.net · 24 days ago

            There are literally image generation tools that are open-source. Even Krita has a plugin available. There are multiple datasets that can be trained on, other than the one that flagrantly infringed everyone’s copyrights, and there is no shortage of instructions online for how people can put together their own datasets for training. All of this can be run offline.

            So no, anyone has everything they need to do this right, right now if they want to. Getting hysterical about it, and dishonestly claiming that it all “steals” from artists helps no one.

    • mke@programming.dev · 24 days ago

      Yes, I like the unethical thing… but it’s the fault of people who are against it. You see, I thought they were annoying, and that justifies anything the other side does, really.

      In my new podcast, I explain how I used this same approach to reimagine my stance on LGBT rights. You see, a person with the trans flag was mean to me on twitter, so I voted for—

      • AnimalsDream@slrpnk.net · 24 days ago

        Wow, using a marginalized group who are actively being persecuted as your mouthpiece, in a way that doesn’t make sense as an analogy. Attacking LGBTQI+ rights is unethical, period. Where your analogy falls apart is in categorically rejecting a broad suite of technologies as “unethical” even as plenty of people show plenty of examples of when that’s not the case. It’s like when people point to studies showing that sugar can sometimes be harmful and then say, “See! Carbs are all bad!”

        So thank you for exemplifying exactly the kind of dishonesty I’m talking about.

        • mke@programming.dev · 24 days ago

          My comment is too short to fit the required nuance, but my point is clear, and it’s not that absurd false dichotomy. You said you’re warming up to some AI because of how some people criticize it. That shouldn’t be how a reasonable person decides whether something is OK or not. I just provided an example of how that doesn’t work.

          If you want to talk about marginalized groups, I’m open to discussing how GenAI promotion and usage is massively harming creative workers worldwide—the work of which is often already considered lesser than that of their STEM peers—many of whom are part of that very marginalized group you’re defending.

          Obviously not all AI, nor all GenAI, are bad. That said, current trends for GenAI are harmful, and if you promote them as they are, without accountability, or needlessly attack people trying to resist them and protect the victims, you’re not making things better.

          I know that broken arguments from people who don’t understand all the details of the tech can get tiring. But at this stage, I’ll take someone who doesn’t entirely understand how AI works but wants to help protect people over someone who only cares about technology marching onwards, the people it’s hurting be damned.

          Hurt, desperate people lash out, sometimes wrongly. I think a more respectable attitude here would be helping steer their efforts, rather than diminishing them and attacking their integrity because you don’t like how they talk.

        • BoulevardBlvd · 23 days ago

          Nah, you do so in the second comment here, like a proper lady. I can’t believe his lack of decorum. Is your sensitive soul ok?

      • AnimalsDream@slrpnk.net · 24 days ago

        Astroturfing? That implies I’m getting paid or compensated in any way, which I’m not. Does your commenting have anything to do with anything?

  • Asswardbackaddict@lemmy.world · edited · 24 days ago

    As an artist, all y’all need to chill. The problem is capitalism, and it’s not like artists make a living anyway. Democratizing art opens up a lot of possibilities, you technophobes.

      • Asswardbackaddict@lemmy.world · 24 days ago

        Easy. Don’t work a job or pay rent. Anarchism already exists. It just exists in the crannies (like right in front of you) where other domineering primates don’t beat you with sticks or boss you around. You don’t fix the system. You ignore it.

        • petrol_sniff_king · 24 days ago

          You don’t fix the system. You ignore it.

          I’ll tell that to the IRS the next time I refuse to pay taxes.

          • Asswardbackaddict@lemmy.world · 23 days ago

            On the surface. But, you’re being rhetorical, in the original sense of the word. You can link words together but fail to do logic. You’re doing duckspeak, quack quack

      • untakenusername@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        23 days ago

Think more. If I draw something that looks nice on paper, and at the same time am fine with asking ChatGPT to solve a math problem, why would my views on AI affect whether I’m an artist or not?

        • the_q@lemm.ee
          link
          fedilink
          English
          arrow-up
          2
          ·
          23 days ago

          Because ChatGPT is trained on stolen data and using it for any reason is participating in that theft while simultaneously causing a significant impact to the environment.

So I guess you’re right; it has no bearing on whether you’re an artist, only on whether you’re a decent person. Thanks for clearing that up.

          • untakenusername@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            ·
            23 days ago

            Now what if you used an ai trained on uncopyrighted, public data, and made sure that the computers training it were using solar power or some environmentally friendly energy source?

                • the_q@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  22 days ago

                  It has the potential to be useful, yes. It isn’t currently and the path we’re on with it is already irredeemable.

    • ArchRecord@lemm.ee
      link
      fedilink
      English
      arrow-up
      19
      ·
      24 days ago

      It’s a person who runs a database of game UI’s being contacted by people who want to train AI models on all of the data en masse.

  • Scubus@sh.itjust.works
    link
    fedilink
    arrow-up
    10
    ·
    edit-2
    24 days ago

    Tools have always been used to replace humans. Is anyone using a calculator a shitty person? What about storing my milk in the fridge instead of getting it from the milk man?

I don’t have an issue with the argument, but unless they’re claiming that every tool which replaced human jobs was unethical, their argument is not self-consistent and thus lacks any merit.

    Edit: notice how no one has tried to argue against this

People have begun discussing it, although I suppose it was an unfair expectation to have this discussion here. Regardless, after I originally edited this, you guys did have tons of discussions with me. I do appreciate it, and it seems that most of us support the same things. It mostly seems like an issue of framing, and of looking at things in the now versus the mid-term future.

    • petrol_sniff_king
      link
      fedilink
      arrow-up
      44
      ·
      edit-2
      24 days ago

      Yes, I also think the kitchen knife and the atom bomb are flatly equivalent. Consistency, people!

      Edit: 🤓 erm, notice how no one has tried to argue against this

        • petrol_sniff_king
          link
          fedilink
          arrow-up
          15
          ·
          edit-2
          24 days ago

          I can’t believe I never thought about calculators. You and I really are the brothers Dunce, aren’t we?

          • ober@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            17
            ·
            24 days ago

I like how he made an edit saying no one is arguing against his point, the only response he got argues against his point, and then he replies to that with no argument.

            • petrol_sniff_king
              link
              fedilink
              arrow-up
              12
              ·
              24 days ago

              They virtually always do this. People are, very often, not actually motivated by logic and reason; logic and reason are a costume they don to appear more authoritative.

            • Scubus@sh.itjust.works
              link
              fedilink
              arrow-up
              2
              ·
              24 days ago

He made a completely irrelevant observation. There was no argument. He didn’t try to refute anything I said; he tried to belittle the argument. No response was necessary. If anyone else has responded, I haven’t had a chance to look.

      • Scubus@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        24 days ago

        “If you facilitate AI art, you are a shitty person”

There are ethical ways to build models using consensually gathered data. He says those artists are shitty.

        • AFK BRB Chocolate@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          24 days ago

Now you’re moving the goalposts. You said tools always replace humans and made the analogy to calculators and refrigerators. The fact is that the vast majority of generative AI models in use today didn’t source their content ethically.

          • Scubus@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            23 days ago

Lmao, I’m moving the goalposts? That was literally my argument from the get-go. Dude says you’re shitty for using/facilitating AI art. Dude’s dead wrong. End of story.

            • rolling@lemmy.world
              link
              fedilink
              arrow-up
              8
              ·
              24 days ago

              I think the fact that AI sucks ass at even the most basic math proves that the difference between discovery and creation is, indeed, not arbitrary.

              Unless you are the kind of person to use AI to do math, then yeah I can see how it can look that way.

              • BrainInABox@lemmy.ml
                link
                fedilink
                English
                arrow-up
                2
                ·
                24 days ago

                I think the fact that AI sucks ass at even the most basic math proves that the difference between discovery and creation is, indeed, not arbitrary.

                I don’t follow your reasoning at all.

    • redwattlebird@lemmings.world
      link
      fedilink
      arrow-up
      15
      ·
      edit-2
      24 days ago

The issue isn’t automation itself. The issue is the theft, the fact that art cannot be automated, and the use of AI to further enshittification.

First, the models are built on the theft of OUR data and then sold back to us for profit.

      Secondly, most AI art is utter crap and doesn’t contribute anything to human society. It’s shallow slop.

      Thirdly, having it literally everywhere while also being completely energy inefficient is absolutely dumb. Why are we building nuclear reactors and coal plants to replace what humans can do for cheap??

Edit: further, the sole purpose of AI is to hoard wealth for a small number of people. Calculators, hammers, etc. do not serve this function, and they also do not require lots of energy to use.

      • Scubus@sh.itjust.works
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        24 days ago

I’ve responded to a lot of that elsewhere, but in short: I agree theft is bad. Capitalism is also bad. Neither of those is inherent to AI or LLMs, though, although theft is definitely the easy way. Art can be automated; nature does it all the time. I will concede we can’t do it to a high degree now.

Quality is of course low; it’s new. The progress in the last year has been astounding, and it will continue to improve. Soon this will no longer be a valid argument.

I agree, modern AI is horribly inefficient. It’s a prototype, but it’s also a hardware issue. Soon there will be much more efficient designs, and I suspect a rather significant alteration to the architecture of the network that may allow for massively improved efficiency. Disclaimer: I am not professionally in the field, but this topic in particular is right up multiple fields of study I have been following for a while.

Edit: somehow missed your edit when writing. To some extent, every tool of productivity exists to exploit the worker. A calculator serves this function as much as anything else. By allowing you to perform calculations more quickly, your productivity massively increases in certain fields, sometimes by thousands of times. Do you see thousands of times the profits of your job prior to the advent of calculators, excluding inflation? Unlikely. Or the equivalent pay of the same number of “calculators” required for your work? Equally unlikely. It’s inherent to capitalism.

        • prototype_g2@lemmy.ml
          link
          fedilink
          arrow-up
          5
          ·
          23 days ago

          Art can be automated

          Under what definition of art can that be possible? Is art to you nothing more than an image? Why automate art and not other tasks? What is the point of automating art? Why would you not want to make art yourself and instead delegate it to a machine?

          • Scubus@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            23 days ago

Art is whatever people put into it. If it’s a passion for you, there’s nothing stopping you from making it. For me, art is mostly music designed to evoke an emotional response, plus game assets for hobby programming, not for commercial use. I’m not profiting off either of those, but I don’t have the funds to pay someone to make custom assets, nor do I have the talent.

There has been art before that was not done by humans. There is a selfie taken by a monkey, paintings by elephants, semi-self-generated art via fungus, etc. In those examples, the story is as much a piece of the art as the image itself. That subset of art cannot be replaced without simply lying.

            • petrol_sniff_king
              link
              fedilink
              arrow-up
              3
              ·
              23 days ago

              I keep trying to tear away, but you are so goddamn funny.

              Art can be automated, nature does it all the time.

              Okay.

              There is a selfie of a monkey,

              Right.

              And it was nature that did this.

              Nature took a picture of itself as a monkey.

              Nature, in its monkey form, started a wild rube goldberg machine, including a monkey and the monkey’s finger, to automate the picture taking process: a picture of nature.

              • Scubus@sh.itjust.works
                link
                fedilink
                arrow-up
                1
                ·
                edit-2
                23 days ago

A tree falls in a forest. That’s art.

Nature created humans. Humans create art.

Take a paintbrush, hook it up to a treadmill. That’s art.

There are so many easy ways to automate art that it’s almost like you’re flailing for a gotcha that’s never going to happen. I was trying to be civil and not simply treat you like a dumbass, but seeing as you’re intentionally trying to be one, I no longer see the harm.

And now you’re going to start spouting esoteric crap like humans not being a part of nature. Tf are we then, some other-dimensional being? You think you’ve got a soul and that makes you different from nature, but all that really means is that you’re bad at analysing things from perspectives other than your own.

Edit: lmfao, never mind, reading clearly isn’t your strong suit. From the get-go you’ve been spouting irrelevant bullshit like a child, then when I ignore your tantrum you double down. You’re clearly not here for a discussion, so hopefully you won’t be too terribly surprised when (surprise surprise) in 20 years everything I said turns out to be true. To those who actually follow the tech, and have for decades, none of what is currently happening is surprising, and we’ve been trying to warn people for years. But you always end up with people like you: too dumb to listen and too convinced of your own ego, despite literally not making a single good point this entire conversation, hell, not even a single coherent thought. Have a great day 😄

            • prototype_g2@lemmy.ml
              link
              fedilink
              arrow-up
              1
              ·
              23 days ago

              nor do I have the talent

              And why do you think you do not have “talent”? What is that “talent” you speak of? Is it something people are born with? What is the problem with what you make, if all you care about is what people put into art?

              Art is whatever people put into it

              “It” what? The pronoun “it” is referring to what? Art? Without this clarification I cannot accurately make sense of anything else in your response.

Keep in mind that, while defining a term, you cannot use that term in its own definition.

        • redwattlebird@lemmings.world
          link
          fedilink
          arrow-up
          3
          ·
          edit-2
          23 days ago

          Art can be automated, nature does it all the time.

          Ok, first off: what is your definition of automation? This is what I mean when I say automation.

Nature does not automate art. Are you saying that because, say, almost all bower birds build bowers, that’s automation? Then you have a poor understanding of what automation and art are, and therefore of what AI/LLMs are meant to achieve.

With art, you need to think about the state of mind required to create the piece in the first place. Before it was created, it didn’t exist in any capacity. The reason the art piece exists in the first place is why AI cannot automate it: human emotions are very complex.

If an AI/LLM could experience human emotions, we would have essentially created another type of human. That is deeply profound and, with the technology and materials we have now (that is, the processing chips and hardware), it is simply not possible. We’re at the point of making tiny, incremental gains.

          Which leads me to…

          It’s a prototype, but its also a hardware issue. Soon there will be much more efficient designs and i suspect a rather significant alteration to the architecture of the network that may allow for massively improved efficiency.

It is not a software/coding issue that limits an LLM’s capability to emulate the human psyche. Again, it is not tweaks in code structure that will send us rocketing up the graph of progress. It is the limitation of the actual materials we use and their maximum efficacy, hence why we need nuclear reactors and so on to power thousands of processors. We will never get to the point of replicating human ability and energy efficiency with the materials that exist on Earth. And are we going to spend more energy and resources looking to the stars for a material that may or may not exist, to create a machine that has the capability to think as a human?

How long did humans take to evolve to the capacity we have? That took hundreds of thousands of years of trial and error. But I digress…

          Its inherit to capitalism.

          Absolutely agree. The whole purpose of this ‘AI boom’ is to make more money for the <1%, steal from us and hoard it for themselves. On this basis, I completely reject the use of LLMs. Fuck AI.

          • Scubus@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            23 days ago

we will never get to the point of replicating human ability and energy efficiency with the materials that exist on Earth

That is flatly incorrect. There is a type of AI that is literally just a replication of the human mind, hardware and all. That is well within our current technology, although the connections would not be the same, as it would merely be a clone. But a cloned human is an artificial intelligence.

I know that form of AI is not what you are referring to, but why not? What is it about AI that makes it impossible to replicate in a metallic substrate? And even assuming a metallic substrate is flatly impossible, that still doesn’t stop progress. There are YouTubers currently working on making an artificial rat brain in a jar play Doom. This is not a piece of a living rat; these are rat neurons grown from stem cells that were converted from skin cells. So we could just as easily start down the path of physical AIs.

As for evolution, that was millions of years of random chance; the difference between that and guided evolution is too great to even compare. And the materials came from Earth in the first place. The entire idea of AI is based around replicating what nature did in the first place; that’s how all our technologies are made. People said it was impossible for people to fly merely a decade before the Wright brothers. The only difference now is that there is no materials scientist on Earth who claims we’d have to go to outer space to find the hardware necessary for AI.

Edit: forgot about the first part of your comment; this should largely cover that.

            • redwattlebird@lemmings.world
              link
              fedilink
              arrow-up
              2
              ·
              23 days ago

              But a cloned human is an artificial intelligence.

              No, it’s not. Artificial intelligence is something that is artificially created, like a machine, that can think like a human. A human clone is human, literally.

I think we’re both standing at extremely different points of view here on what AI, that is, artificial intelligence, actually is. But I concede that my statement about it being impossible to create was hyperbolic. We can’t say for certain that it’s impossible.

              I know that form of ai is not what you are referring to, but why not?

              … Because… It’s wrong.

I wouldn’t call investing power and resources into replicating human capability progress. It’s literally going backwards and rebuilding from scratch. Is this line of research honestly worth pursuing at the cost of our climate and environment? It’s the same with the Wright brothers; their technology paved the way for increased consumption of resources and a faster spread of disease.

              Yeah we get brand new shiny things but at what cost? Is it worth it in the long run? Is it worth automating human capability when we’ve messed up every single step of our planetary ecosystem?

I would much rather live in a world where all the effort and resources currently put into ‘AI’ were redirected into sustainable systems. That, to me, is progress worth pursuing.

    • SexDwarf@lemmy.world
      link
      fedilink
      arrow-up
      12
      ·
      24 days ago

      Would you replace a loved-one (a child, spouse, parent etc.) with an artificial “tool”? Would it matter to you if they’re not real even when you couldn’t tell the difference? And if your answer is yes, you had no trouble replacing a loved-one with an artificial copy, then our views/morals are fundamentally so different that I can’t see us ever agreeing.

      It’s like trying to convince me that having sex with animals is awesome and great and they like it too, and I’m just no thanks, that’s gross and wrong, please never talk to me again. I know I don’t necessarily have the strongest logic in the AI (and especially “AI art”) discussion but that’s how I feel.

      • Scubus@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        edit-2
        24 days ago

That’s a lot of different questions in a lot of different contexts. If my parent decided, near the end of their life, to upload their consciousness into a mech suit covered in latex (basically) that was physically indistinguishable from a human (or even not, who am I to judge), and the process of uploading a consciousness was well understood and practiced, then yes, I would respect their decision. If you wouldn’t, you either have difficulty placing yourself in hypothetical situations designed to test the limits of societal norms, or you abjectly do not care about the autonomy of your parent.

Child, I have no issue adopting. If they happen to be an artificial human, I don’t see why that should preclude them from being allowed to have parents.

Spouse, I’m not going to create one to my liking. But if we lived in a world with AI creating other AI that are all sentient, some of which presumably choose to take physical form in an aforementioned mech, why shouldn’t I date them? Your immediate response is sex, but let’s ignore that. Is an asexual relationship with a sentient robot OK? What about a friendship with said robot? Are you even allowed to treat a sentient robot as a human? What’s the distinction? I’m not attempting a slippery slope; I genuinely would like to hear where your distinctions between what is and isn’t acceptable lie. Because I think this miscommunication stems either from a misunderstanding about the possible sentience of AI in the future, or from a lack of perspective on what it might be like from their side.

Edit: just for the record, I don’t downvote comments like yours, but someone did, so I had to upvote you.

        • petrol_sniff_king
          link
          fedilink
          arrow-up
          7
          ·
          24 days ago

          Are you even allowed to treat a sentient robot as a human?

          Oh, boy, this one’s really hard. I’ll give it my best shot, though. Phoo. Okay, here goes.

          Yes.

          Ohhhh fuck. Oh god. Oh please. Scubus, how did I do? Did I win?

          Now please argue to me that chatgpt is sentient.

          • Scubus@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            24 days ago

Ah, sorry. I misunderstood your argument. No, I would never replace a loved one with a “tool”. But replacing loved ones with tools was never something I was arguing for. ChatGPT is a very crude prototype of the type of AI I am referring to. And he didn’t say ChatGPT, he said “degenerative AI”, but also stated “AI art”.

The entire argument is centered around those who use or make AI art being “shitty people”, no exceptions. But that falls apart the moment you analyze it even remotely. There are ethical ways to do the entire process.

            • petrol_sniff_king
              link
              fedilink
              arrow-up
              2
              ·
              24 days ago

              But replacing loved ones with tools was never something I was arguing for.

              You are arguing in favor of replacing people, flesh and blood, with machines.

              Manual labor? Sure. I love post-scarcity.

              Art? Culture? My mom? Obscene. Profane, even. Morally reprehensible. We’re holding you back from recess until you learn to appreciate your classmates.

              • Scubus@sh.itjust.works
                link
                fedilink
                arrow-up
                1
                ·
                23 days ago

Nah, that is a false equivalence. Replacing “people” with machines is very different from replacing “bonds” with machines. You are not literally killing the people and replacing them with a robot. It’s a job.

You are conflating replacing the job with literally replacing the person. And personal bonds are not jobs, nor are hobbies. You are not going to have someone, or a robot, play golf for you. Nor would you replace your mom.

Art and culture are two different things; they are not replacing bonds. But I think the disconnect there comes solely from the current state of AI. Once it improves to the point of being indistinguishable, as all technologies do, I think those will be seen as much less problematic outside of the lens of capitalism.

                • petrol_sniff_king
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  23 days ago

                  You are not literally killing the people and replacing them with a robot.

                  … You think my position is that I think stable diffusion will kill people.

                  Like Body Snatchers?

                  To anyone still reading: This is ultimately why I didn’t go for the point-by-point essay post so many else did. How am I supposed to respond to this? Genuinely.

        • SexDwarf@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          24 days ago

          Thanks for the reply (and the upvote, although I’ve hidden all lemmy scores from my account so I really don’t care about voting for that matter).

My thought experiment gets a lot more complicated if the “AI tool” is sentient, i.e. it can be proven beyond a hint of a doubt that the robot is essentially no different from a human. If we ever get that far, it’s a whole other can of questions.

          What I tried to (perhaps unsuccessfully) argue is that, yes we have and are replacing humans with tools all the time, but there’s also a line (I think) most wouldn’t cross, like replacing a loved-one with a tool. In my original argument that tool would just be an imitation, not a sentient machine. Maybe even a perfect imitation, but nothing more than that - a machine that has learned how to behave, speak etc. I don’t think many of us would be happy with a replacement like that.

          For me it’s same with AI art. I can’t appreciate art made by AI because it’s just imitation made by a tool. It has no meaning, no “soul”.

    • rolling@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      24 days ago

This may come as a shock to you, but nobody was working as a refrigerator. Refrigerators didn’t replace the milkman; the stores did. Which was fine at first, since those stores were supposed to buy the milk from the milkman and just make it more readily accessible. Then human greed took over: the stores and big-name brands started to fuck over the milkman, and conspired with other big-name stores to artificially increase the price of bread while blaming covid and inflation. Now some people, although few, are trying to buy from the milkman again, if they can afford and access it.

The tools that did replace humans did not steal human work and effort in order to train themselves. Those tools did not try to replace human creativity with some soulless, steaming pile of slop.

You see, I believe open-source, ethically trained AI models can exist, and they can accomplish some amazing things, especially when it comes to making things accessible to people with disabilities, for example. But Edd Coates is specifically talking about art, design, and generative AI. So maybe don’t come to a community called “Fuck AI”, change the original argument, and then expect people to argue against you in good faith.

      • ArtificialHoldings@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        24 days ago

        The “milkman” is a delivery person who works for milk producers. The company that produces milk still exists, the role of the milkman was just made unnecessary due to advances in commercial refrigeration - milk did not have to be delivered fresh, it could be stored and then bought on-demand.

        https://en.wikipedia.org/wiki/Milk_delivery

        “Human greed” didn’t take over to fuck over the milkman, they just didn’t need a delivery person any more because milk could be stored on site safely between shipments.

        • rolling@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          24 days ago

I would argue it wasn’t just the refrigeration, but also the suburbanization of living and the cost-effectiveness of delivering the milk from the farm to the store, which (in theory) made milk cheaper. You would still need milkmen if stores and supermarkets didn’t exist. In an alternative world where we didn’t invent commercial/household refrigerators, you could still buy milk from stores daily without the need for a milkman, because ultra-pasteurization exists.

I guess that’s the problem with analogies, and I don’t think either of us will get anywhere by further arguing about this one specific example.

          • ArtificialHoldings@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            24 days ago

            Sure. What I mean to say is that the milkman didn’t disappear as a result of corporate greed conspiring to artifically increase the price of bread or whatever. Like you said, suburbanization and the supermarkets just made it so milk delivery was no longer necessary. The alternative is to continue paying milk deliverers… because that’s what they’ve always done, regardless of the fact that people can just pick up milk with the rest of their groceries.

      • Scubus@sh.itjust.works
        link
        fedilink
        arrow-up
        1
        ·
        24 days ago

Tons of people do! I browse /all and don’t want to block /fuck_ai because a ton of you do have great discussions with me. I’m not brigading, I have never once sought out this community, and I’ve always tried to be respectful and haven’t gotten banned. So I’d say all is well.

As far as the crappy stuff, that really seems like just another extension of consumerism. Modern art has irked people for a while because some of it is absurdly simplistic. But if people are willing to buy into it, that’s on them. LLMs have a very limited use case, and ethically sourcing your data is critically necessary for both ethical and legal reasons. But the world needs to be prepared for the onset of the next generation of AI. It’s not going to be sentient quite yet, but general intelligence isn’t too far away. Soon one AI will be able to outperform humans on most daily tasks as well as some more specialized tasks. LLMs seemingly took the world by surprise, but if you’ve been following the tech, the progression has been somewhat obvious. And it is continuing to progress.

Honestly, the biggest concern I have with modern AI, outside of how it’s being implemented, is that it is environmentally very bad, but I’m hoping that the growth of the AI bubble will lead to more specialized, energy-efficient designs. I don’t remember what the paper was, but they were using AI to generatively design more efficient chips, and it was showing promising results. On a couple of the designs they weren’t entirely sure how the circuits functioned (they have several strong theories, but they’re not certain; not trying to misrepresent this), but when they fucked with them, the circuits stopped behaving as predicted/expected (relative to being fucked with; of course a broken circuit isn’t going to function correctly).

        • rolling@lemmy.world
          link
          fedilink
          arrow-up
          4
          ·
          24 days ago

          Sorry, I made the comment about being on Fuck AI because of your edit to the original message. I wasn’t trying to accuse you of anything.

Back to the AI stuff. I am sorry if I am a little sceptical about your claims about the “next generation of AI” and how “soon” it will outperform humans, when even after all these years, money, and energy poured into them, these models still manage to fuck up a simple division question. Good luck making any model that needs to be trained on data perfect at this point, because the AI slop that has already been generated and released onto the internet has taken care of that. Maybe we will have AGI at some point, but I will believe that when I actually see it.

          Finally, I don’t know about modern art being absurdly simplistic. How can you look at modern animation or music and call it absurdly simplistic? How can you look at the thousands of game UI designs on Edd Coates’ website and call them absurdly simplistic? All AI will ever create, when it comes to art, is some soulless amalgamation of what it has seen before. It will kill all the creativity, originality and personality in art, but businessmen in suits will gladly let it take over because it is cheaper than hiring human artists and designers.

          • Scubus@sh.itjust.works
            link
            fedilink
            arrow-up
            1
            ·
            23 days ago

            Yeah, I definitely get that. I suspect there will soon be techniques for sanitizing training data, although that just makes unethical capture easier. And assuming the final goal is sentience, I’m not entirely sure it is unethical to train on other people’s data as long as you control for overfitting. The reasoning being that humans do the exact same thing: we train on every piece of media we’ve ever seen and use that to inspire “new” forms of media. Humans don’t tend to have original thoughts, we just rehash what we’ve heard. So every time you see a piece of media, you quite literally steal it mentally. It’s clearly a different argument with modern AI, I’m not claiming it does the same thing. But its main issue when it comes to that seems to be overfitting: too much of its inspiration can be seen directly. Sometimes it comes off as simply copying an image that was in its training data. That’s not inspiration, that’s plagiarism.

            And yeah, I tend to assume we’re going to kill off capitalism, because if we don’t, this discussion isn’t going to matter anyway.

  • drkt@scribe.disroot.org
    link
    fedilink
    arrow-up
    10
    ·
    24 days ago

    Oh boy here we go downvotes again

    regardless of the model you’re using, the tech itself was developed and fine-tuned on stolen artwork with the sole purpose of replacing the artists who made it

    That’s not how that works. You can train a model on licensed or open data, and they didn’t make it to spite you. Even if a large group of grifters are doing exactly that, those aren’t the ones developing the tech.

    If you’re going to hate something at least base it on reality and try to avoid being so black-and-white about it.

    • pretzelz@lemmy.world
      link
      fedilink
      arrow-up
      17
      ·
      24 days ago

      I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.

      I’m not sure I agree with that argument. It’s like saying that if you invented a cure for cancer that relied on morally bankrupt means you shouldn’t use that cure. I’d say that there should be a legal process involved against the person who did the illegal acts but once you have discovered something it stands on its own two feet. Perhaps there should be some kind of reparations however given to the people who were abused in that process.

      • drkt@scribe.disroot.org
        link
        fedilink
        arrow-up
        3
        ·
        24 days ago

        I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.

        It’s not true; you can just train a model from the ground up on properly licensed or open data, you don’t have to inherit anything. What you’re talking about is called finetuning which is where you “re-train” a model to do something specific because it’s much cheaper than training from the ground up.

        • pretzelz@lemmy.world
          link
          fedilink
          arrow-up
          7
          ·
          24 days ago

          I don’t think that’s what they are saying. It’s not that you can’t now; it’s that initially people did need to use a lot of data. Then they found tricks to improve training on less, but those tricks came about after people saw what was possible. Since they initially needed such data, their argument goes, and we wouldn’t have been able to improve upon the techniques if we didn’t know that huge neural nets trained on lots of data were effective, subsequent models are tainted by the original sin of requiring all this data.

          As I said above, I don’t think that subsequent models are necessarily tainted, but I find it hard to argue with the fact that the original models did use data they shouldn’t have and that without it we wouldn’t be where we are today. Which seems unfair to the uncompensated humans who produced the data set.

          • drkt@scribe.disroot.org
            link
            fedilink
            arrow-up
            2
            ·
            24 days ago

            I actually think it’s very interesting how nobody in this community seems to know or understand how these models work, or even vaguely follow the open source development of them. The first models didn’t have this problem, it was when OpenAI realized there was money to be made that they started scraping the internet and training illegally and consequently a billion other startups did the same because that’s how silicon valley operates.

            This is not an issue of AI being bad, it’s an issue of capitalist incentive structures.

            • BoulevardBlvd
              link
              fedilink
              arrow-up
              1
              ·
              23 days ago

              Cool! What’s the effective difference for my life that your insistence on nuance has brought? What’s the difference between a world where no one should have ai because the entirety of the tech is tainted with abuse and a world where no one should have ai because the entirety of the publicly available tech is tainted with abuse? What should I, a consumer, do? Don’t say 1000 hrs of research on every fucking jpg, you know that’s not the true answer just from a logistical standpoint

    • sixty@sh.itjust.works
      link
      fedilink
      arrow-up
      17
      ·
      24 days ago

      You CAN train a model on licensed or open data. But we all know they didn’t keep it to just that.

          • drkt@scribe.disroot.org
            link
            fedilink
            arrow-up
            2
            ·
            24 days ago

            No, they’re using a corporate model that was trained unethically. I don’t see what your point is, though. That’s not inherent to how LLMs or other AIs work, that’s just corporations being leeches. In other words, business as usual in capitalist society.

            • mke@programming.dev
              link
              fedilink
              arrow-up
              3
              ·
              24 days ago

              You’re right about it not being inherent to the tech, and I sincerely apologize if I insist too much despite that. This will be my last reply to you. I hope I gave you something constructive to think about rather than just noise.

              The issue, and my point, is that you’re defending a technicality that doesn’t matter in real world usage. Nearly no one uses non-corporate, ethical AI. Most organizations working with it aren’t starting from scratch because doing so is disadvantageous or outright unfeasible resource-wise. Instead, they use pre-existing corporate models.

              Edd may not be technically right, but he is practically right. The people he’s referring to are extremely unlikely to be using or creating completely ethical datasets/AI.

              • drkt@scribe.disroot.org
                link
                fedilink
                arrow-up
                1
                ·
                23 days ago

                The issue, and my point, is that you’re defending a technicality that doesn’t matter in real world usage.

                You’re right and I need to stop doing it. That’s a good reminder to go and enjoy the fresh spring air 😄

    • Tartas1995@discuss.tchncs.de
      link
      fedilink
      arrow-up
      11
      ·
      24 days ago

      Name one that is “ethically” sourced.

      And “open data” is a funny thing to say. Why is it open? Could it be open because the people who made it didn’t expect it to be abused for AI? When a pornstar posted a nude picture online in 2010, do you think they thought of the idea that someone would use it to create deepfakes of random women? Please be honest. And yes, a picture might not actually be “open data”, but it highlights the flaw in your reasoning. People don’t think about what could be done with their stuff in the future as much as they should, but they certainly can’t predict the future.

      Now ask yourself that same question with any profession. Please be honest and tell us, is that “open data” not just another way to abuse the good intentions of others?

        • mke@programming.dev
          link
          fedilink
          arrow-up
          12
          ·
          24 days ago

          Wow, nevermind, this is way worse than your other comment. Victim blaming and equating the law to morality, name a more popular duo with AI bros.

          • drkt@scribe.disroot.org
            link
            fedilink
            arrow-up
            2
            ·
            24 days ago

            I can’t make you understand more than you’re willing to understand. Works in the public domain are forfeited for eternity, you don’t get to come back in 10 years and go ‘well actually I take it back’. That’s not how licensing works. That’s not victim blaming, that’s telling you not to license your nudes in such a manner that people can use them freely.

            • mke@programming.dev
              link
              fedilink
              arrow-up
              3
              ·
              edit-2
              24 days ago

              The vast majority of people don’t think in legal terms, and it’s always possible for something to be both legal and immoral. See: slavery, the actions of the third reich, killing or bankrupting people by denying them health insurance… and so on.

              There are teenagers, even children, who posted works which have been absorbed into AI training without their awareness or consent. Are literal children to blame for not understanding laws that companies would later abuse when they just wanted to share and participate in a community?

              And AI companies aren’t using merely licensed material, they’re using everything they can get their hands on. If they’re pirating you bet your ass they’ll use your nudes if they find them, public domain or not. Revenge porn posted by an ex? Straight into the data.

              So your argument is:

              • It’s legal

              But:

              • What’s legal isn’t necessarily right
              • You’re blaming children before companies
              • AI makers actually use illegal methods, too

              It’s closer to victim blaming than you think.

              The law isn’t a reliable compass for what is or isn’t right. When the law is wrong, it should be changed. IP law is infamously broken in how it advantages and gets (ab)used by companies. For a few popular examples: see how youtube mediates between companies and creators, nintendo suing everyone they can (costs victims more than it does nintendo), everything disney did to IP legislation.

              • drkt@scribe.disroot.org
                link
                fedilink
                arrow-up
                1
                ·
                23 days ago

                Okay, but I wasn’t arguing morality or about children posting nudes of themselves. I’m just telling you that works submitted into the public domain can’t be retracted, and that there are models trained exclusively on open data, which a lot of AI haters don’t know, don’t understand, or won’t acknowledge. That’s all I’m saying. AI is not bad; corporations make it bad.

                The law isn’t a reliable compass for what is or isn’t right.

                Fuck yea it ain’t, I’m the biggest copyright and IP law hater on this platform, and I’ll get ahead of the next 10 replies by saying no, it’s not because I want to enable mindless corporate content scraping; it’s because human creativity shouldn’t be boxed in. It should be shared freely, lest our culture be lost.

  • blinx615@lemmy.ml
    link
    fedilink
    arrow-up
    9
    ·
    24 days ago

    Rejecting the inevitable is dumb. You don’t have to like it but don’t let that hold you back on ethical grounds. Acknowledge, inform, prepare.

    • Croquette@sh.itjust.works
      link
      fedilink
      arrow-up
      50
      ·
      24 days ago

      You probably create AI slop and present it proudly to people.

      AI should replace dumb monotonous shit, not creative arts.

      • blinx615@lemmy.ml
        link
        fedilink
        arrow-up
        5
        ·
        edit-2
        24 days ago

        I couldn’t care less about AI art. I use AI in my work every day in dev. The coworkers who are not embracing it are falling behind.

        Edit: I keep my AI use and discoveries private, nobody needs to know how long (or little) it took me.

          • ArtificialHoldings@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            24 days ago

            The objections to AI image gens, training sets containing stolen data, etc. all apply to LLMs that provide coding help. AI web crawlers search through git repositories compiling massive training sets of code, to train LLMs.

          • blinx615@lemmy.ml
            link
            fedilink
            arrow-up
            2
            ·
            24 days ago

            Just because I don’t have a personal interest in AI art doesn’t mean I can’t have opinions.

              • blinx615@lemmy.ml
                link
                fedilink
                arrow-up
                2
                ·
                23 days ago

                It’s all the same… Not sure why you’d have differing opinions between AI for code and AI for art, but please lmk, I’m curious.

                • queermunist she/her@lemmy.ml
                  link
                  fedilink
                  arrow-up
                  2
                  ·
                  23 days ago

                  Code and art are just different things.

                  Art is meant to be an expression of the self and a form of communication. It’s therapeutic, it’s liberating, it’s healthy and good. We make art to make and keep us human. Chatbot art doesn’t help us, and in fact it makes us worse - less human. You’re depriving yourself of enrichment when you use a chatbot for art.

                  Code is mechanical and functional, not really artistic. I suppose you can make artistic code, but coders aren’t doing that (maybe they should, maybe code should become art, but for now it isn’t and I think that’s a different conversation). They’re just using tools to perform a task. It was always soulless, so nothing is lost.

        • Tartas1995@discuss.tchncs.de
          link
          fedilink
          arrow-up
          11
          ·
          24 days ago

          “i am fine with stolen labor because it wasn’t mine. My coworkers are falling behind because they have ethics and don’t suck corporate cock but instead understand the value in humanity and life itself.”

        • msage@programming.dev
          link
          fedilink
          arrow-up
          8
          ·
          24 days ago

          Then most likely you will start falling behind… perhaps in two years, as it won’t be noticeable quickly, but there will be an effect in the long term.

          • blinx615@lemmy.ml
            link
            fedilink
            arrow-up
            3
            ·
            24 days ago

            This is a myth pushed by the anti-AI crowd. I’m just as invested in my work as ever, but I’m now far more efficient. In the professional world we have code reviews and unit tests to avoid mistakes, whether from junior devs or hallucinating AI.

            “Vibe coding” (which most people here seem to think is the only way) professionally is moronic for anything other than a quick proof of concept. It just doesn’t work.
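            As a concrete illustration of that workflow (the function and the off-by-one variant are hypothetical, just a sketch): whether a snippet came from a junior dev or an LLM, it has to pass the same assertions before it merges.

```python
# Hypothetical example of a unit test gating AI-assisted code.
# The function below is correct; the comments note a plausible
# LLM "hallucination" that the assertions would reject.

def chunk(items: list, size: int) -> list:
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# These assertions fail for a common off-by-one variant,
# range(0, len(items) - 1, size), which silently drops the tail chunk.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []
assert chunk([1], 5) == [[1]]
```

            The review and the tests don’t care who (or what) wrote the code, which is the point.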

    • Tartas1995@discuss.tchncs.de
      link
      fedilink
      arrow-up
      16
      ·
      24 days ago

      AI isn’t magic. It isn’t inevitable.

      Make it illegal and the funding will dry up and it will mostly die. At least, it wouldn’t threaten the livelihood of millions of people after stealing their labor.

      Am I promoting a ban? No. AI has its use cases. But is the current LLM and image-generation BS good? No. Should it be banned? Probably.

        • uienia@lemmy.world
          link
          fedilink
          arrow-up
          10
          ·
          24 days ago

          That is such a disingenuous argument. “Making murder illegal? People will just kill each other anyway, so why bother?”

          • blinx615@lemmy.ml
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            24 days ago

            The concept that a snippet of code could be criminal is asinine. Hardly enforceable, never mind the First Amendment issues.

          • ArtificialHoldings@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            24 days ago

            This isn’t even close to what I was arguing. Like any major technology, all economically competitive countries are investing in its development. There are simply too many important applications to count. It’s a form of arms race. So the only way a country may see fit to ban its use in certain applications is if there are international agreements.

    • RandomVideos@programming.dev
      link
      fedilink
      arrow-up
      12
      ·
      24 days ago

      You could say fascism is inevitable. Just look at the elections in Europe or the situation in the USA. Does that mean we can’t complain about it? Does that mean we can’t tell people fascism is bad?

      • blinx615@lemmy.ml
        link
        fedilink
        arrow-up
        1
        ·
        24 days ago

        No, but you should definitely accept the reality, inform yourself, and prepare for what’s to come.

    • dustyData@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      24 days ago

      They said the same thing about cloning technology. Human clones all around by 2015, it’s inevitable. Nuclear power is the tech of the future, worldwide adoption is inevitable. You’d be surprised by how many things declared “inevitable” never came to pass.

      • blinx615@lemmy.ml
        link
        fedilink
        arrow-up
        1
        ·
        24 days ago

        It’s already here, dude. I’m using AI in my job (supplied by my employer) daily, and it makes me more efficient. You’re just grasping at straws to fit your preconceived ideas.

        • dustyData@lemmy.world
          link
          fedilink
          arrow-up
          4
          ·
          23 days ago

          It’s already here dude.

          Every 3D TV fan said the same. So did VR enthusiasts, for two decades. Almost nothing, and most certainly no tech, is inevitable.

          • blinx615@lemmy.ml
            link
            fedilink
            arrow-up
            1
            ·
            23 days ago

            The fact that you think these are even comparable shows how little you know about AI. This is the problem, your bias prevents you from keeping up to date in a field that’s moving fast af.

            • dustyData@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              23 days ago

              Sir, this is a Wendy’s. You personally attacking me doesn’t change the fact that AI is still not inevitable. The bubble is already deflating; the public has started to grow indifferent, even annoyed by it. Some places are already banning AI for a myriad of reasons, one of them being how insecure it is to feed sensitive data to a black box. I have used AI heavily and have read all the papers. LLMs are cool tech; machine learning is cool tech. They are not the brain-rotted marketing that capitalists have been spewing like madmen. My workplace experimented with LLMs, and management decided to ban them, because they are insecure, they are awfully expensive and resource-intensive, and they were making people less efficient at their work. If it works for you, cool, keep doing your thing. But it doesn’t mean it works for everyone; no tech is inevitable.

              • blinx615@lemmy.ml
                link
                fedilink
                arrow-up
                1
                ·
                edit-2
                23 days ago

                I’m also annoyed by how “in the face” it has been, but that’s just how marketing teams have used it as the hype train took off. I sure do hope it wanes, because I’m just as sick of the “ASI” psychos. It’s just a tool. A novel one, but a tool nonetheless.

                What do you mean “black box”? If you mean [INSERT CLOUD LLM PROVIDER HERE] then yes. So don’t feed sensitive data into it then. It shouldn’t be in your codebase anyway.

                Or run your own LLMs

                Or run a proxy to sanitize the data locally on its way to a cloud provider

                There are options, but it’s really cutting edge so I don’t blame most orgs for not having the appetite. The industry and surrounding markets need to mature still, but it’s starting.
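                For what it’s worth, the “sanitizing proxy” option doesn’t have to be exotic. A minimal sketch (the pattern list and names are made up for illustration, not any particular product) is just a redaction pass before a prompt leaves the machine:

```python
import re

# Hypothetical local redaction step for text headed to a cloud LLM.
# The patterns are illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|ghp|AKIA)[A-Za-z0-9_-]{10,}\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def sanitize(text: str) -> str:
    """Replace matches with [REDACTED:<kind>] placeholders."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

prompt = "Contact ops@example.com, key sk_live_abcdef12345, host 10.0.0.7"
print(sanitize(prompt))
# → Contact [REDACTED:email], key [REDACTED:api_key], host [REDACTED:ipv4]
```

                A real deployment would need far more robust detection (and ideally a reversible mapping so responses can be re-hydrated), but the shape really is this simple.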

                Models are getting smaller and more intelligent, capable of running on consumer CPUs in some cases. They aren’t the genius chatbots the marketing dept wants to sell you. It won’t mop your floors or take your kid to soccer practice, but applications can be built on top of them to produce impressive results. And we’re still so, so early in this new tech. It exploded out of nowhere, but the climb has been slow since then, and AI companies are starting to shift to using the tool within new products instead of just dumping the tool into a chat.

                I’m not saying jump in with both feet, but don’t bury your head in the sand. So many people are very reactionary against AI without bothering to be curious. I’m not saying it’ll be existential, but it’s not going away, I’m going to make sure me and my family are prepared for it, which means keeping myself informed and keeping my skillset relevant.

                • dustyData@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  22 days ago

                  We had a custom-made model, running in a data center behind proxies and encrypted connections. It was atrocious: no one ever knew what it was going to do, it spewed hallucinations like crazy, it was awfully expensive, it didn’t produce anything of use, it refused to answer things it was trained to do, and it randomly leaked sensitive data to the wrong users. It was not going to assist, much less replace, any of us, not even in the next decade. Instead of falling for the sunk cost fallacy like most big corpos, we just had it shut down, told the vendor to erase the whole thing, wrote the costs off as R&D, and decided to keep doing our thing. Due to the nature of our sector, we are the biggest player, and no competitor, no matter how advanced the AI they use, will ever get close to even touching us. But then again, due to our sector, it doesn’t matter. Turns out AI is a hindrance and not an asset to us; such is life.

    • thatKamGuy@sh.itjust.works
      link
      fedilink
      arrow-up
      3
      ·
      23 days ago

      Bit of a tangent from the post, but you raise a valid point. Copying is not theft; I suppose piracy is a better term? Which, on an individual level, I am fully in support of, as are many authors, artists and content creators.

      However, I think the differentiation occurs when pirated works are then directly profited from — at least, that’s where I personally draw the distinction. Others may have their own red lines.

      e.g. stealing a copy of a textbook because you can’t otherwise afford it is fine by me; but selling bootleg copies, or answer keys directly based on it, wouldn’t be OK.

      • MJKee9@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        23 days ago

        If your entire work is derived from other people’s works, then I’m not okay with that. There’s a difference between influence and plain reproduction. I also think the way we look at piracy from the consumer side and stealing from the creative side should be different. Downloading a port of a Sega Dreamcast game is not the same as taking someone else’s work and slapping your name on it.