Previous posts: https://programming.dev/post/3974121 and https://programming.dev/post/3974080

Original survey link: https://forms.gle/7Bu3Tyi5fufmY8Vc8

Thanks for all the answers! Here are the results for the survey, in case you were wondering how you did.

Edit: People working in CS or a related field have a 9.59 average score, while people who aren’t have a 9.61 average.

People who have used AI image generators before got a 9.70 average, while people who haven’t have a 9.39 average score.

Edit 2: The data has changed slightly! Over 1,000 people have submitted results since this image was posted; check the dataset to see live results. Be aware that many people saw the image and the comments before submitting, so they’ve been spoiled on some results, which may be leading to a higher average recently: https://docs.google.com/spreadsheets/d/1MkuZG2MiGj-77PGkuCAM3Btb1_Lb4TFEx8tTZKiOoYI

  • innocentpixels@lemmy.world · 28 upvotes · 1 year ago

    I’m sure artists can use it as another tool, but the problem comes when companies think they can get away with just using AI. Also, the AI has been trained on artwork without the artists’ permission.

    • bitsplease@lemmy.ml · 5 upvotes · 1 year ago

      Yeah and I’m sure there are some artists out there making really novel work using AI as a tool, but a lot of amateur artists made the bulk of their money doing things that AI can just do for basically nothing now.

      If I want a commission of my DnD character, I can get something really fucking excellent in an afternoon of playing around with Stable Diffusion, and that’s without any real expertise in AI tools or “prompt engineering”. Same with portraits of family, pets, friends, etc. - and of course the smutty stuff that has always been the real money maker for low-level amateur artists.

      Those types of artists are already really suffering as a result of the tools available now, and it’s only going to get worse as these tools get easier and cheaper to use.

      • blind3rdeye@lemm.ee · 2 upvotes · edited · 1 year ago

        Agreed. And I want to go a bit further to talk about why else this might be bad.

        Some people believe that losing jobs to AI is fine, because it means society is more efficient and it gives people time to do other things. But I think there are a few major flaws in that argument. For a lot of people, their sense of purpose, their sense of self, and their source of happiness come from their art and their creativity. We can say “they can just do something else”, but then we’re basically just making their lives worse. Instead of being paid and valued for making art, they can get paid for serving coffee or something… and perhaps not have as strong a sense of purpose or happiness. Even if we somehow eliminate inequality and give everyone a huge amount of free time instead of work, it’s still not clear that we’ve made things better. We might just get people mindlessly scrolling on social media instead of creating something.

        That’s just one angle. Another is that by removing the kinds of jobs that AI can do well, we remove the rungs on the ladder that people have been using to climb to higher-level skills. An artist (or writer, or programmer, or whatever else) might start out by doing basic tasks that an AI can do easily, and then build their skills to later tackle more complex and difficult things. But if the AI takes away all the opportunities based on those basic tasks, then people won’t have those opportunities to build their skills.

        So… if we put too much emphasis on speed & cost & convenience, we may accidentally find ourselves in a world where people are generally less happy, and less skilled, and struggle to find a sense of value or purpose. But on the plus side, it will be really easy to make a picture of a centaur girl or whatever.

        • Schadrach@lemmy.sdf.org · 1 upvote · 1 year ago

          This is basically just a “we shouldn’t have cars because it would be bad for carriage and buggy whip makers, and they’d be less happy if they had to find other work” argument.

          The short version is that the people upset about this stuff have also benefitted immensely from lots of other jobs being automated away, and thought they were immune until the first image-generating models hit the scene. Now they fear the same thing that happened to a lot of manufacturing jobs, except it’s a problem now because white-collar work and creatives are supposed to be immune.

          • blind3rdeye@lemm.ee · 1 upvote · 1 year ago

            I don’t think it is as simple as that, but I certainly do see your point of view. And I’d probably agree if I didn’t feel like society is accelerating towards a very problematic future. (The problems I’m thinking of are not directly related to what we’re talking about here, but I just see this as part of what it might look like to start changing direction.)

            I’d just advise that we think about what the end goal is meant to look like. What are we hoping for here? What does it mean to have a good life? In many stories and visions of the future, people seem to envision utopia as people spending their time on artistic and creative pursuits; as in, that’s the thing we were meant to free our time for. So automating that part away might be a mistake. We’re likely to just end up freeing time for something destructive instead.

    • seralth@lemmy.world · 1 upvote · 1 year ago

      The training data containing unlicensed artwork is an extremely short-term problem.

      Even within a few years, that problem will literally be moot.

      Huge data sets are being made right now explicitly to get around this problem, and AI is being trained on other AI to the point that the original sources are no longer impactful enough to matter.

      At some point the training data becomes so generic and intermixed that it’s indistinguishable from humans trained on other humans. At that point you no longer have any legal issues, because if you still deem it disallowed you would functionally have to ban art schools and art teachers too, since the AI learns the same way we do.

      The true problem is just that the training data is too narrow and very clearly copies large chunks from existing artists instead of copying techniques and styles like a human does. Which is also solvable. :/