Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.

So what have you got?

  • Flamangoman@leminal.space · 21 points · 15 hours ago

    Not exactly AI being used, rather AI being developed, but Meta torrenting 80 TB of books and not seeding is egregious

    • haverholm@kbin.earth · 3 points · 8 hours ago

      The fact that so much training data is scraped without consent makes a lot of the popular LLMs unethical already in their development, yeah. And that in turn makes using the models unethical.

  • quickhatch@lemm.ee · 19 points · 15 hours ago

    I’m a university prof in a medical science field. We hired a new, tenure-line prof to teach introductory musculoskeletal anatomy to prepare our students for the more rigorous, full systems anatomy that’s taught by a different professor. We learned (too late, after a year) that they used AI to generate the slides they used in lecture and never questioned/evaluated the content. Had an entire cohort of students fail the subsequent anatomy course after that.

    But in my mind, what’s worse is that the administration did nothing to correct the prof, and continues to push a pro-AI narrative in order for us to spend less time investing resources in teaching.

    • Greg Clarke@lemmy.ca · 3 points · 10 hours ago

      > Given the environmental costs, the social costs, and the fraud it entails, using it at all is pretty much unethical.

      There are loads of examples of AI being used in socially positive ways. AI doesn’t just mean ChatGPT.

    • ArcRay@lemmy.dbzer0.com (OP) · 5 points · 16 hours ago

      Excellent point. I think there are some legitimate uses of AI, especially in image processing for science-related topics.

      But for the most part, almost every common use is unethical. Whether it's the energy demands (and their contribution to climate change), the theft of intellectual property, the spread of misinformation, or so much more. Overall, it's a huge net negative on society.

      I remember hearing about the lawyer one. IIRC ChatGPT was citing laws that didn't even exist. How do you not check what it wrote? You wouldn't blindly accept the predictive word suggestions from your phone's keyboard and autocorrect. So why would you blindly trust a fancier autocorrect?

      • Greg Clarke@lemmy.ca · 2 points · 10 hours ago

        > But for the most part, almost every common use is unethical.

        The most common uses of AI are not in the headlines. Your email spam filter is AI.
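        The spam-filter point is worth unpacking: under the hood these are statistical text classifiers. A toy sketch of the idea in Python (the words and weights here are made up for illustration; real filters learn them from labeled mail, e.g. with naive Bayes):

        ```python
        # Toy illustration of the statistical idea behind a spam filter:
        # score a message by how many "spammy" tokens it contains. Real
        # filters learn these weights from labeled mail; these are made up.
        SPAM_WEIGHTS = {"winner": 2.0, "free": 1.0, "prize": 2.0, "urgent": 1.5}

        def spam_score(message: str) -> float:
            """Sum the weights of known spammy tokens in the message."""
            return sum(SPAM_WEIGHTS.get(tok, 0.0) for tok in message.lower().split())

        def is_spam(message: str, threshold: float = 2.5) -> bool:
            """Flag the message once its score crosses the threshold."""
            return spam_score(message) >= threshold

        print(is_spam("URGENT you are a winner claim your free prize"))  # True
        print(is_spam("meeting moved to 3pm tomorrow"))                  # False
        ```

        A fixed keyword list like this is trivially evaded; the point of the "AI" part is that the weights are fitted to data and updated as spam evolves.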

          • Greg Clarke@lemmy.ca · 1 point · 2 hours ago

            You should be accurate with your language if you're going to claim a whole industry is unethical. And it's also important to make a distinction between the technology and the implementation of the technology. LLMs can be trained and used in ethical ways.

            • hendrik@palaver.p3x.de · 1 point · edited · 2 hours ago

              I’m not really sure I want to agree here. We’re currently in the middle of a hype wave concerning LLMs, so that’s what most people mean when talking about “AI”. Of course that’s wrong. I tend to use the term “machine learning” if I don’t want to confuse people with a tainted term.

              And I must say, most (not all) machine learning is done in a problematic way. Tesla cars have been banned from company parking lots, your Alexa saves your private conversations in the cloud, and the algorithms that power the web weigh on society and spy on me. The successful companies are built on copyright theft or their users' personal data. And none of that is really transparent to anyone. Oftentimes it’s opt-out, if we get a choice at all. But of course there are legitimate interests. I believe a dishwasher or a spam filter would be trained ethically. Probably also the image detection for medical applications.

              • Greg Clarke@lemmy.ca · 1 point · 2 minutes ago

                I 100% agree that big tech is using AI in very unethical ways. And this isn’t even new, the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stated that Facebook played a “determining role” in the Rohingya genocide. And then recently Zuck actually rolled back the programs that were meant to prevent this in the future.

  • carl_dungeon@lemmy.world · 9 points · 15 hours ago

    Mass consumption of copyrighted works for training, while still considering individuals who do the same thing to be criminals.

    • phanto@lemmy.ca · 5 points · 16 hours ago

      I’m a month away from my IT diploma. Even the teachers are feeding us AI slop at this point.

      They gave up trying to get the students to stop at the end of first year. Protip: don't hire a new IT grad, they don't know anything ChatGPT doesn't know.

      • Admiral Patrick@dubvee.org · 8 points · edited · 16 hours ago

        I interviewed a candidate recently, and they basically lost all consideration when I asked them a basic sysadmin question and they replied, “That’s kind of one of those basic commands I just ask ChatGPT.”

        The basic sysadmin question was: “Name one way on a Linux server to check the free disk space”.

        Sadly, I had to continue the interview, but I didn’t even bother writing down any of the candidate’s responses after that. The equivalent would have been asking them “what’s 2+2?” and they break out a calculator. Instant fail.
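
        For completeness, the expected answer is a one-liner like `df -h` at the shell; the same check is also a single standard-library call, sketched here in Python with "/" as an example mount point:

        ```python
        # The classic CLI answer to the interview question is `df -h`
        # (free/used space per mounted filesystem, human-readable).
        # Python's standard library exposes the same information:
        import shutil

        usage = shutil.disk_usage("/")  # named tuple: (total, used, free), in bytes
        print(f"free: {usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB")
        ```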

    • ArcRay@lemmy.dbzer0.com (OP) · 4 points · 16 hours ago

      It felt like the right way to approach the topic. AI has become so pervasive, I’m not even sure I could search for it without simultaneously using AI.