• GrymEdm@lemmy.worldOP · 10 months ago

    This is Kyle Hill’s video on the predicted impact of AI-generated content on the internet, especially as it becomes more difficult to tell machine from human over text and video. He relays that experts say within a year huge portions of online content will be AI-generated. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?

    • TheFriar@lemm.ee · 10 months ago

      I didn’t get past the part where he started talking about the dark forest theory as if it “solved” the Fermi paradox. The Fermi paradox is an observation; the dark forest theory is a theory. Worse, actually, it’s considered a hypothesis. I was willing to sit down for the 15-minute video. Why blow your credibility in the first sentences?

      • FaceDeer@fedia.io · 10 months ago

        Unfortunately the Dark Forest thing is super popular right now, so it gets the clicks.

        Which is rather annoying, IMO, because as Fermi Paradox solutions go it’s riddled with holes and implausibilities. But it’s scary, and so people latch on to it easily.

    • AtmaJnana@lemmy.world · 10 months ago

      I generate AI content (some of which is art) for fun, so I am not against it in theory. I just don’t, so far, find much enjoyment in consuming AI content made by others. The vast majority of it is mediocre, which seems like a natural consequence of lowering the barriers to entry.

      The Sora demo, for example, is very compelling technologically, but it didn’t impress me at all as something that would replace creative work, so much as provide a tool to get it done differently.

      As AI content becomes more prevalent, I will continue to disengage from it and prefer authentic human experiences, to the extent that AI content continues to feel mostly soulless and vacuous.

    • TimeSquirrel@kbin.social · 10 months ago

      Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?

      I wouldn’t mind it as much if these chatbots weren’t being used for nefarious purposes, like mass data collection, tracking, influencing, and privacy violations. Other than that, if it walks like a human, talks like a human, and we are convinced it’s a human, is there anything wrong with that? It might as well be human. This is going to become a bigger and bigger question as we get closer to AGI. An AGI isn’t going to suddenly “wake up” and become self-aware one day. All these systems are slowly inching toward it. There’s not going to be a clean line between “just a program mimicking a human” and “a fully self-aware entity”. It’s up to us to draw that line, and there are no hard rules for doing so, because it falls under the “problem of other minds” in philosophy.