• kandoh@reddthat.com
    link
    fedilink
    arrow-up
    7
    ·
    2 days ago

    Is this any bleaker than forming a parasocial relationship with someone you see on your screen?

  • markovs_gun@lemmy.world
    link
    fedilink
    arrow-up
    47
    ·
    3 days ago

    I can’t wait until ChatGPT starts inserting ads into its responses. “Wow, that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It’s a really good thing to treat yourself occasionally, such as with an ice-cold Coca-Cola or maybe a large order of McDonald’s French fries!”

  • qarbone@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    2 days ago

    The only people who think this will help are people who don’t know what therapy is. At best, this is pacification and certainly not any insightful incision into your actual problems. And the reason friends are unable to allow casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.

  • IninewCrow@lemmy.ca
    link
    fedilink
    English
    arrow-up
    110
    ·
    3 days ago

    A human therapist might not, or is at least less likely to, share any personal details about your conversations with anyone.

    An AI therapist will collate, collect, catalog, and store every single personal detail about you, share it with the company that owns the AI, and sell all your data to the highest bidder.

    • DaddleDew@lemmy.world
      link
      fedilink
      arrow-up
      61
      ·
      edit-2
      3 days ago

      Neither would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest, let alone do it to thousands, if not millions, of them all at the same time.

      They are also pushing the idea of an AI “social circle” for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever those who control the AI desire.

      To that we add the fact that we now know they’ve been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter’s algorithm to promote their political views.

      Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.

      Forget Neuralink. Musk already has a direct connection into the brains of many people.

      • fullsquare@awful.systems
        link
        fedilink
        arrow-up
        16
        ·
        3 days ago

        PSA that Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be.

    • WR5@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      2 days ago

      I’m not advocating for it, but couldn’t it just be run locally and therefore be unable to share anything?

    • desktop_user
      link
      fedilink
      arrow-up
      2
      ·
      3 days ago

      The AI therapist probably can’t force you into a psych ward, though; a human psychologist is obligated to (under the right conditions).

      • Krauerking@lemy.lol
        link
        fedilink
        arrow-up
        2
        ·
        2 days ago

        Who says that’s not coming in the next paid service based on this great idea for chatbots to provide therapy to the abused masses?

        • desktop_user
          link
          fedilink
          arrow-up
          0
          ·
          2 days ago

          Nobody, but local models will continue to be an option (unless the government fucks up the laws).

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      Yeah, we have spiritual delusions at home already!

      Seriously, no new spiritual delusions could ever be more harmful than what we have right now.

      • DeceasedPassenger@lemmy.world
        link
        fedilink
        arrow-up
        11
        ·
        edit-2
        3 days ago

        Totally fair point, but I really don’t know if that’s true. Most mainstream delusions have the side effect of creating community and bringing people together, other negative aspects notwithstanding. The delusions referenced in the article are more akin to acute psychosis, as the individual becomes isolated, with nobody to share the delusions with but the chatbot.

        With traditional mainstream delusions, there also exists a relatively clear path out, with corresponding communities. ExJW, ExChristian, etc. People are able to help others escape that particular in-group when they’re familiar with how it works. But how do you deprogram someone when they’ve been programmed with gibberish? It’s like reverse engineering a black box. This is scaring me as I write it.

        • theneverfox@pawb.social
          link
          fedilink
          English
          arrow-up
          4
          ·
          3 days ago

          This isn’t a new thing; people have gone off alone on this kind of nonsensical journey for a while now

          The Time Cube guy comes to mind

          There’s also TempleOS, written in HolyC; its creator was close to some of the stuff in the article

          And these are just two people functional and loud enough to be heard. This is a thing that happens; maybe LLMs exacerbate a pre-existing condition, but people have been going off the deep end like this long before LLMs came into the picture

          • DeceasedPassenger@lemmy.world
            link
            fedilink
            arrow-up
            5
            ·
            edit-2
            3 days ago

            Your point is not only valid but also significant, and I feel stands in addition, not contradiction, to my point. These people now have something to continuously bounce ideas off; a conversational partner that never says no. A perpetual yes-man. The models are heavily biased towards the positive simply by nature of what they are, predicting what comes next. You (may or may not) know how in improv acting there’s a saying called “yes, and” which serves to keep things always moving forward. These models effectively exist in this state, in perpetuity.

            Previously, people who had ideas such as these would experience near-universal rejection from those around them (unless they had charisma, in which case they’d start a cult), which resulted in a (relatively, imo) small number of extreme cases. I fear the presence of such a perpetual yes-man will only accelerate all kinds of damage that can emerge from nonsensical thinking.

            • theneverfox@pawb.social
              link
              fedilink
              English
              arrow-up
              2
              ·
              2 days ago

              I agree, it’s certainly not going to help people losing touch. But that’s not what worries me - that’s a small slice of the population, and models are beginning to get better at rejection/assertion

              What I’m more worried about is the people who are using it almost codependently to make decisions. It’s always there, it’ll always give you advice. Usually it’s somewhat decent advice, even. And it’s a normal thing to talk through decisions with anyone

              The problem is people are offloading their thinking to AI. It’s always there, it’s always patient with you… You can literally have it make every life decision for you.

              It’s not emotional connection or malicious AI I worry about… You can now walk around with a magic eight ball that can guide you through life reasonably well, and people are starting to trust it above their own judgement

        • Dr. Moose@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 days ago

          You mean the guys who put kids in suicide bombs don’t have acute psychosis?

          What about almost all of the raving Christian hermits that sit in their basements and harass people online?

          It’s full-on Lovecraftian-level psychosis. In the US they sell out stadiums and pretend to heal people by touch lmao

  • C1pher@lemmy.world
    link
    fedilink
    arrow-up
    3
    ·
    2 days ago

    You must know what you’re doing, and most people don’t. It is a tool; it’s up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn’t based in any science or empirical data.

  • Cyberflunk@lemmy.world
    link
    fedilink
    arrow-up
    14
    ·
    3 days ago

    I’ve tried this AI therapist thing, and it’s awful. It’s OK for helping you work out what you’re thinking, but abysmal at analyzing you. I got some structured timelines back from it that I used in therapy, but AI is a dangerous alternative to human therapy.

    My $.02 anyway.

  • Krauerking@lemy.lol
    link
    fedilink
    arrow-up
    3
    ·
    2 days ago

    If therapy is meant to pacify the masses and make us just accept life as it is, then sure, I guess this could work.
    But hey, we also love to sell people first on the idea that they are broken, make sure they feel bad about it, and then tell them they can buy their 5 minutes of happiness with food tokens.
    So I’m sure capitalists are creaming their pants at this idea. BetterHelp, with their “licensed” Bob the crystal healer from Idaho, eat your heart out.

    P.S. You just know this is gonna be able to prescribe medications for that extra revenue kick.

  • Kyrgizion@lemmy.world
    link
    fedilink
    arrow-up
    37
    ·
    3 days ago

    I suppose this can be mitigated by installing a local LLM that doesn’t phone home (a rough sketch of that setup is below). But there’s still a risk of getting downright bad advice, since so many LLMs just tell their users they’re always right or twist the facts to fit that view.

    I’ve been guilty of this as well; I’ve used ChatGPT as a “therapist” before. It actually gives decently helpful advice compared to what’s available after a Google search. But I’m fully aware of the risks “down the road”, so to speak.
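
    For what “doesn’t phone home” can look like in practice, here is a minimal sketch. It assumes Ollama is installed and serving a model on localhost; the model name, port, and prompts are illustrative assumptions, not a recommendation of using an LLM this way:

    ```python
    # Minimal sketch: chat with a locally hosted model so no transcript leaves the machine.
    # Assumes Ollama is running locally with a model already pulled (e.g. `ollama pull llama3`);
    # the endpoint, model name, and prompts below are illustrative assumptions.
    from openai import OpenAI

    # Ollama exposes an OpenAI-compatible API on localhost.
    # The API key is required by the client library but ignored by the local server.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

    # The whole conversation stays in this process's memory; nothing is sent to a third party.
    history = [{"role": "system",
                "content": "You are a supportive listener, not a licensed therapist."}]

    def chat(user_message: str) -> str:
        """Send one message to the local model and record both sides of the exchange."""
        history.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(model="llama3", messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    if __name__ == "__main__":
        print(chat("I've had a rough week and just want to vent."))
    ```

    Running fully offline removes the data-collection concern, but it does nothing about the sycophancy problem described above; a local model will flatter you just as readily as a hosted one.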

    • TrueStoryBob@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      2 days ago

      so many LLMs just tell their users they’re always right

      This is the problem: they apparently cannot be objective as just a matter of course.

  • Dr. Moose@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    3 days ago

    People’s lack of awareness of how important accessibility is really shows in this thread.

    For many people, especially in poorer countries, a privacy leak is a much lesser issue than not having anyone to talk to.