• GardenVarietyAnxiety@lemmy.world · 88 points · 2 months ago (edited)

    This is being done by PEOPLE. PEOPLE are using AI to do this.

    I’m not defending AI, but we need to focus on the operator, not the tool.

    The operator as much as the tool.

        • TexasDrunk@lemmy.world · 3 points · 2 months ago

          Step one for gun control should be a fully functioning mental healthcare system. That’s not the final step by any means, but if people are getting the mental help they need there will be fewer shootings.

          • merc@sh.itjust.works · 6 points · 2 months ago

            Step one for gun control should be gun control.

            Sure, a functioning mental healthcare system is important and should be pursued in parallel. But, clearly, there’s a major issue with the availability of powerful guns. That needs to be addressed before, or at least at the same time as, mental health.

    • kibiz0r@midwest.social · 30 points · 2 months ago (edited)

      Technology is not neutral.

      Especially for a tool that’s specifically marketed as something people can delegate decision-making to, we need to seriously question the person-tool separation.

      That alleged separation is what lets gig economy apps abuse their workers in ways no flesh-and-blood boss would get away with, what enables RealPage’s decentralized price-fixing cartel, and what underpins any number of instances of “math-washing” used to justify discrimination.

      The entire big tech ethos is basically to do horrible shit in such tiny increments that there is no single instance to meaningfully prosecute. (Edit: As always, Mike Judge is relevant: https://youtu.be/yZjCQ3T5yXo)

      We need to take this seriously. Language is perhaps the single most important invention of our species, and we’re at risk of the social equivalent of Kessler Syndrome. And for what? So we can write “thank you” notes quicker?

    • zib@lemmy.world · 28 points · 2 months ago

      You bring up a good point. In addition to regulating the tool, we should also punish the people who maliciously abuse it.

      • GardenVarietyAnxiety@lemmy.world · 4 points · 2 months ago (edited)

        Regulate it because it’s being abused, and hold the abusers accountable, yeah.

        I always see the names of the models being boogey-manned, but we hardly ever see the names of the people behind the big, seemingly untouchable ones.

        “Look at this scary model” vs “Look at this person being a dick”

        We’re being told what to be afraid of and not who is responsible for it, because fear sells and we can’t do anything with it.

        Just my perception, of course.

    • Saleh@feddit.org · 5 points · 2 months ago

      I mean, the tool is also being made by people. And there are people who pointed out that a tool that is great at spitting out plausible-sounding things with no factual basis could be badly abused for spreading misinformation. There have been ethics boards among the people who make these tools who took these concerns on board and raised them in their companies, subsequently getting ousted for putting ethical concerns before short-term profits.

      The question is, how much is it just a tool, and how much of it is intrinsically linked with the unethical, greedy people pushing it onto the world?

      E.g. a Cybertruck is also just a car, and one could say the truck itself is not to blame. But it is the very embodiment of the problems of the people involved.

      • merc@sh.itjust.works · 1 point · 2 months ago

        subsequently getting ousted for putting ethical concerns before short term profits.

        The irony is that there are no profits. The companies selling generative AI are losing such vast sums of money that it’s difficult to wrap your head around.

        What they’re focused on isn’t short-term profits; it’s being the biggest, most dominant firm whenever AI does eventually become profitable, which might take decades.

    • otter@lemmy.dbzer0.com · 1 point · 2 months ago

      People seem to’ve already forgotten about Transmetropolitan. 🤷🏽‍♂️

      I mean, sure, fuck Ellis, but still. Idiocracy came after, and even that’s fading from modern awareness, it seems. 😶‍🌫️

  • SkyNTP@lemmy.ml · 50 points · 2 months ago

    Leaving the information age and entering the disinformation age.

    • Franklin@lemmy.world · 6 points · 2 months ago (edited)

      A deadly weapon given how much the ruling class is trying to turn a class war into an identity war.

  • Tobberone@lemm.ee · 17 points · 2 months ago

    AI content, AI bots in the forums, AI telemarketing, AI answering machines, AI everything. AI will make IRL interaction and things like audited national encyclopedias important again. Gone is the promise of the internet. And this is the real reason why anonymity will not be possible online: if we can’t identify the poster as a human, it will mean nothing…

  • peopleproblems@lemmy.world · 10 points · 2 months ago

    The good news is: think of all the possibilities for funding new research to prove AI wrong!

    Or think of the millions the rest of the world will have to spend on software engineers fixing their fucked up AI generated code!

    It’s like outsourcing to an even worse firm!

  • Pacattack57@lemmy.world · 6 points · 2 months ago

    Does anyone else hate when words are cut with hyphens, especially longer words? Just wrap the whole word to the next line. Makes it easier to read.

  • darthelmet@lemmy.world · 5 points · 2 months ago

    Question: Does Google Scholar only list published papers from reputable journals or does it just grab anything people throw out there? We have already seen that some journals will publish complete nonsense without looking at it. AI or not, there’s a core problem with how academic work gets peer reviewed and published at the moment.

  • Sotuanduso@lemm.ee · 3 points · 2 months ago

    What app is this that justifies the text with hyphens? Is it in a fixed-width display, or does it detect syllables automatically?