• @harsh3466@lemmy.ml
    40
    4 months ago

    I’ve been opting out of the internet at large. It started a few years back with disconnecting from social media, but as AI pollutes everything and enshittification ruins what’s left, I’m just kinda done with it all.

    I’ve been enjoying Lemmy, and I’m hoping Lemmy doesn’t turn into an AI circle jerk.

    As a result, I’ve been building out my server with services I want to use and control, so I’m not trapped by enshittification or inundated with AI where I don’t want it.

    With all that being said, I know AI is here to stay. My biggest problem with AI is these companies gleefully gobbling up our data, our art, our words, our creativity, using it all to train their models and make billions, while we get nothing. It would be one thing if all these generative models were open source and freely available for everyone to use and benefit from, but that’s not the case. I know there are open-source models, but the big ones are all paywalled and in many ways being weaponized against us.

    At some point I hope generative AI becomes a boon to society. Right now, I’m too cynical to believe it will. I feel like it’s just going to make things worse for the majority of people.

    • @jaemo@sh.itjust.works
      5
      4 months ago

      Ollama is actually pretty decent at stuff now, and comparable in speed to ChatGPT on a sort-of-busy day. I’m enjoying having a constant rubber duck to bounce ideas off.
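      For anyone curious what “running Ollama locally” looks like in practice, here’s a minimal sketch of talking to its REST API with nothing but the Python standard library. The URL is Ollama’s default local endpoint; the model name is just an example of something you’d have pulled yourself:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def make_payload(prompt, model="llama3"):
    """Build the JSON body for a non-streaming /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3"):
    """Send a prompt to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(make_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires `ollama serve` running and the model pulled,
# e.g. `ollama pull llama3`):
#   print(ask("Rubber-duck this: why might my cron job run twice?"))
```

      Nothing leaves your machine: the “rubber duck” is entirely local, which is the whole appeal versus the hosted services.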

      • @harsh3466@lemmy.ml
        4
        4 months ago

        That’s cool. I haven’t looked at any local/FOSS LLMs or other generators, largely because I don’t have a use case for them.

        • FaceDeer
          4
          4 months ago

          If your concern is that we’re “not getting anything” in exchange for the training data AI trainers have gleaned from your postings, then those open-source AIs are what you should be taking a look at. IMO they’re well worth the trade.

          • @harsh3466@lemmy.ml
            2
            4 months ago

            Agree. When I feel like playing and/or have a use case for myself, I’ll be looking at open-source AI.

          • @jaemo@sh.itjust.works
            2
            4 months ago

            I’ve been playing with a locally installed instance of big-AGI. I really like the UI, but it’s missing the RAG part. I’m also cobbling my own together, for fun and not profit, to try to stay relevant in these hard times. LangChain is some wild stuff.
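            For readers wondering what the missing “RAG part” means: retrieval-augmented generation just fetches the documents most relevant to a query and pastes them into the prompt before the model sees it. Here’s a toy, dependency-free sketch of that retrieval step — real stacks like LangChain use vector embeddings and a vector store, not this word-overlap score:

```python
import re

def words(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, doc):
    """Jaccard word overlap -- a crude stand-in for real vector similarity."""
    q, d = words(query), words(doc)
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Ollama serves local models over an HTTP API.",
    "LangChain chains together prompts, retrievers, and models.",
    "RAG augments prompts with retrieved documents.",
]
print(build_prompt("What does RAG do to prompts?", docs))
```

            The augmented prompt is then handed to whatever model you’re running; swapping the scoring function for embeddings is the main thing the heavyweight frameworks add.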

  • Dave
    21
    4 months ago

    I’m starting to think that we need to see AI research the same way we see biological weapons research - a visit from a SEAL team or a cruise missile for any identified laboratory. Smash the disks, burn all the printouts!

    Okay, this is hyperbolic and unrealistic, but I agree with this lion-maned YouTuber - we are really not ready.

    AI as a tech is game changing, but it practically demands at least UBI (and probably some form of socialism) as a prerequisite. We, meanwhile, are still electing conservative governments! The same arseholes that will label the legions of unemployed artists, actors, musicians, coders, admin assistants etc etc as lazy and cut their benefits.

    Does anyone truly believe that a tech that can replace half of human jobs is going to create happy outcomes in today’s society? Or will it just make tech-bros and scammers richer, and virtually everyone else poorer?

    • Encrypt-Keeper
      16
      4 months ago

      I haven’t clicked on the video but since you’ve said “Lion-maned YouTuber” I’m going to guess it’s Kyle Hill.

    • Peanut
      10
      4 months ago

      AI or no AI, the solution needs to be social restructuring. People underestimate the amount society can actively change, because the current system is a self sustaining set of bubbles that have naturally grown resilient to perturbations.

      The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

      However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal level.

      Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

      Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.

      A distributed intelligent system could not only enable a complete social restructuring, with autonomy and altruism both guaranteed, but also provide an overarching connection between the different models at every scale - one capable of properly interpreting the different views and conveying them more accurately than we could ever have managed through model projection and the empathy barrier.

  • @GrymEdm@lemmy.worldOP
    8
    4 months ago

    This is Kyle Hill’s video on the predicted impact of AI-generated content on the internet, especially as it becomes more difficult to tell machine from human over text and video. He relays that experts predict that within a year, huge portions of online content will be AI-generated. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc.?

    • @TheFriar@lemm.ee
      6
      4 months ago

      I didn’t get past the part where he started talking about the dark forest theory as if it “solved” the Fermi paradox. The Fermi paradox is an observation; the dark forest idea is a theory - worse, actually, it’s only considered a hypothesis. I was willing to sit down for the 15-minute video. Why blow your credibility in the first sentences?

      • FaceDeer
        3
        4 months ago

        Unfortunately the Dark Forest thing is super popular right now, so it gets the clicks.

        Which is rather annoying, IMO, because as Fermi Paradox solutions go it’s riddled with holes and implausibilities. But it’s scary, and so people latch on to it easily.

    • @AtmaJnana@lemmy.world
      4
      4 months ago

      I generate AI content (some of which is art) for fun, so I’m not against it in principle. I just don’t, so far, find much enjoyment consuming AI content made by others. The vast majority of it is mediocre, which seems like a natural consequence of lowering the barriers to entry.

      The Sora demo, for example, is very compelling technologically, but it didn’t impress me at all as something that would replace creative work, so much as provide a tool to get it done differently.

      As AI content becomes more prevalent, I will continue to further disengage with that content and prefer authentic human experiences, to the extent that AI content continues to feel mostly soulless and vacuous.

    • TimeSquirrel
      2
      4 months ago

      Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?

      I wouldn’t mind it as much if these chatbots weren’t being used for nefarious purposes like mass data collection, tracking, influencing, and privacy violations. Other than that, if it walks like a human and talks like a human, and we’re convinced it’s a human, is there anything wrong with that? It might as well be human. This is going to become a bigger and bigger question as we get closer to AGI. An AGI isn’t going to suddenly “wake up” and become self-aware one day; all these systems are slowly inching toward it. There’s not going to be a clean line between “just a program mimicking a human” and “a fully self-aware entity”. It’s up to us to draw that line, and there are no hard rules for drawing it, because it falls under the philosophical “problem of other minds”.