• TommySoda@lemmy.world · 54 points · edited · 4 months ago

    “Can AI do [Blank]” is getting pretty old. They will literally fill in that blank with anything they can come up with, and it’s getting kinda silly.

    Here’s a list of potential new AI articles I predict coming out within the next year:

    • “Can AI teach us more about the dinosaurs?”
    • “How AI will solve the climate crisis.”
    • “New AI technology lets you speak with deceased loved ones with staggering accuracy.”
    • “How AI can help you save money.”
    • “New AI model lets us translate dead languages.”
    • “Soon all your friends will be AI.”
    • “AI can help you lose weight.”
    • “How we can use AI to find aliens.”

    I’m sure at least one of these articles already exists. Literally all they are trying to do is make money with half-baked ideas or steal your personal data.

    • Cyrus Draegur@lemm.ee · 21 points · 4 months ago

      a fun little rule of thumb I like to apply: whenever an article’s headline is a question, you may safely presume the answer is no.

    • Communist@lemmy.frozeninferno.xyz · 11 points · 4 months ago

      • “Can AI teach us more about the dinosaurs?”

      done already

      • “How AI will solve the climate crisis.”

      pretty sure I’ve seen this

      • “New AI technology lets you speak with deceased loved ones with staggering accuracy.”

      done

      • “How AI can help you save money.”

      done

      • “New AI model lets us translate dead languages.”

      done

      • “Soon all your friends will be AI.”

      pretty sure japan already has this problem

      • “AI can help you lose weight.”

      i’d be shocked if not already done

      • “How we can use AI to find aliens.”

      pretty sure I’ve seen this

      i think these are predictions of last week

      • zurohki@aussie.zone · 13 points · 4 months ago

        The great thing about using AI to search for aliens is that it’ll find them, whether there’s any out there or not.

      • uranibaba@lemmy.world · 9 points · 4 months ago

        “Soon all your friends will be AI.”

        pretty sure japan already has this problem

        I remember reading not too long ago about people having AIs as girlfriends.

      • RememberTheApollo_@lemmy.world · 7 points · 4 months ago

        We already know how to solve the climate crisis. We just don’t want to because it would cost too much, inconvenience us, and really upset the shareholders.

        The only reason to ask AI would be like asking the butler to take out the trash, we just can’t be bothered to do even that much work and want to hit the “easy” button.

      • TommySoda@lemmy.world · 4 points · 4 months ago

        Sounds about right. And I’m sure at least half of them are just click bait while the others are wishful thinking. Or just sad…

  • skillissuer@discuss.tchncs.de · 26 points · 4 months ago

    that diffusion of responsibility is a thing that already happened with crypto too

    no officer, it’s not a ponzi because it’s a Distributed Future of Finance™, go pound sand, do you hate progress?

    • anachronist@midwest.social · 14 points · edited · 4 months ago

      I remember seeing some crypto bro smugly explain that his obviously illegal business model was fine because “It’s a DAO, I’m just a community member.”

  • kbal@fedia.io · 21 points · 4 months ago

    Based on your record of shitposting, our AI model predicts that your final wish is that your entire estate be left to … Marc Andreessen? Is that correct? If so, blink as if in surprise.

  • Dizzy Devil Ducky@lemm.ee · 15 points · edited · 4 months ago

    Can’t wait for the profit-above-care tier of hospitals to have their own AI that lets patients in a vegetative state “freely” tell those same hospitals that they need to remain on whatever system is keeping them alive for as long as possible, making sure their family incurs the maximum amount of debt and bills possible. I’d think most middle-aged or older family members would absolutely believe the AI is actually connected to their loved one’s brain and telling them what they want, since they seem to be a lot more gullible about anything AI-generated being real, if fakebook is to be believed.

  • Angry_Autist (he/him)@lemmy.world · 7 points · edited · 4 months ago

    All jokes aside I had an AIDungeon saved story that was built around the emails and texts my mom sent before she passed, I used it as a kind of therapy.

    I knew I was talking to an AI but it was interesting to see what the engine did with it. Was pretty accurate too. Granted this was before the Dumb Dragon fiasco so you probably couldn’t make that quality of output anymore.

    edit: to the absolute human shitstain that downvoted this, have the balls to reply why.

    • froztbyte@awful.systems · 18 points · 4 months ago

      edit: to the absolute human shitstain that downvoted this

      ugh. don’t do that :<

      have the balls to reply why.

      this isn’t highschool, no-one owes you homework

      (a bit of advice: instead treat that as a reason to reflect as to why. you might learn something)

        • flere-imsaho@awful.systems · 19 points · 4 months ago

          my condolences.

          and, no matter how much this has helped you to cope with your personal trauma, going through that trauma does not entitle you to use emotional blackmail to silence people who do not subscribe to the llm hype.

          • self@awful.systems · 15 points · 4 months ago

            couldn’t have said it better myself, but for posterity, they of course sent a report:

            Reason: Sanctimonious bullshit, unsolicited advice and multi instance harassment

            I initially had some sympathy for their position, but the downvote whining and report abuse really has done them no favors. it’s time for them to find a different instance to pollute.

            • froztbyte@awful.systems · 8 points · edited · 4 months ago

              I specifically did not touch on that aspect because, like, yay accidental benefit? but extreme emphasis on the “accidental”, and it’s not debate club and and and… but then they had to go throw a tantrum

    • sc_griffith@awful.systems · 5 points · 4 months ago

      I’m completely sympathetic to you here despite the edit. sorry that happened and I’m glad you found some way to help in dealing with it

  • CarbonIceDragon@pawb.social · 6 points · 4 months ago

    I mean, while this idea is obviously a stupid one, I have seen some suggestion that AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, and thus help them communicate with doctors, rather than trying to figure out what they might have said from prior history.

    • pyrex@awful.systems · 15 points · edited · 4 months ago

      I do not recommend using the word “AI” as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don’t distinguish between things that could actually be built and “throwing an LLM at the problem” – you’re treating their lack-of-differentiation as valid and feeding them hype.

      • CarbonIceDragon@pawb.social · 3 points · 4 months ago

        I used a term I’ve seen used before; I’m not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not what term would be generally recognized as referring to it. The hype guys are going to hype themselves up regardless in any case, seeing as that type tends to exist in an echo chamber as far as I can see.

    • pavnilschanda@lemmy.world · 2 points · 4 months ago

      As an autistic person who struggles with communication and organizing thoughts, LLMs have been helping me process emotions and articulate things. Not perfectly in the way that you’d describe (hence I mostly don’t use LLM outputs themselves as replies), but my situation is much better than pre-November 2022.

      • Robert Kingett backup@tweesecake.social · 7 points · 4 months ago

        It is a shame LLMs weren’t designed to be a common good for Disabled people, though. We’re just a happy use case accident for these companies and AI manufacturers. It’s tricky because this could be done just as well, I figure, with specifically designed LLMs instead of generic ones. @pavnilschanda @CarbonIceDragon

        • pavnilschanda@lemmy.world · 5 points · edited · 4 months ago

          There are some efforts at LLM use for disabled people, such as GoblinTools. And you’re very right about disabled people benefitting from LLMs being a happy use case accident. With that being the reality, it’s frustrating how so many people who blindly defend AI use disabled people as a shield against ethical concerns. Tech companies themselves like to use us to make themselves look good; see the “disability dongle” concept as a prime example.

      • froztbyte@awful.systems · 5 points · edited · 4 months ago

        this remark demonstrates a stunning lack of any understanding of anything at all of any of the topics involved in this, amazing