Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, “1999 was described as being the peak of human civilization in ‘The Matrix’ and I laughed because that obviously wouldn’t age well and then the next 25 years happened and I realized that yeah maybe the machines had a point.”

  • hopesdead@startrek.website
    link
    fedilink
    arrow-up
    10
    ·
    edit-2
    17 hours ago

    You should read the Monk and Robot duology (I’ve only read the first book), which is solarpunk. The premise is that robots got tired of doing what they were built for and formed a treaty with humans allowing them to wander into the wild and live without human contact.

  • Glytch@lemmy.world
    link
    fedilink
    arrow-up
    2
    ·
    15 hours ago

    American civilization? Yes, definitely. Human civilization? I genuinely don’t think so. I believe in us as a species and think the best is yet to come (after we rid ourselves of bigots and authoritarians).

    • MiDaBa@lemmy.ml
      link
      fedilink
      arrow-up
      2
      ·
      12 hours ago

      Most of human civilization has been run by kings, emperors, and dictators. I see the world’s rich gaining more control than ever while the possibilities for everyone else shrink. The lower and lower-middle classes have become too easily influenced by fake news and propaganda. How do we advance when people can be manipulated into going against their own best interests?

  • toastmeister@lemmy.ca
    link
    fedilink
    arrow-up
    10
    ·
    edit-2
    21 hours ago

    The further you get from the gold standard, the worse life you’ll have. Though you might have more social media and gadgets, you’ll have a smaller house and worse-quality food and services, as everything is financialized through debt in a futile attempt to force the elderly, who own all the assets, to consume ever greater amounts, while automation progressively decreases costs and companies find more advanced ways to shrinkflate products.

    • prole
      link
      fedilink
      arrow-up
      10
      ·
      edit-2
      1 day ago

      It’s even worse, man… At least the robots in The Matrix weren’t capitalists (I don’t think… I honestly forget most of the Animatrix)

      • Lucky_777@lemmy.world
        link
        fedilink
        arrow-up
        13
        ·
        1 day ago

        They definitely weren’t capitalist, lmao. They only wanted to make the perfect system. You could consider their quest for perfection “greed”.

        They didn’t really have to try, though; they had a great system in place. Humans lived long enough for turnover and provided plenty of energy. That glitch was an issue, but contained. At least until someone decided to fall in love. Then the whole system failed.

        Probably the realest part of The Matrix.

    • SkyeStarfall
      link
      fedilink
      arrow-up
      29
      ·
      edit-2
      1 day ago

      Interestingly enough, though, the directors of The Matrix are two trans women.

      But while queerphobia was worse in some ways back then, it was not as bad in others. For example, trans people didn’t face the massive, organized, targeted attacks we see now. In many ways, things have gotten worse in this respect too.

      • explodicle@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        16
        ·
        1 day ago

        Question for the trans folk here: which time period was harder for you? Hostile ignorance or hostile attention?

        • WhiteRabbit_33@lemmy.world
          link
          fedilink
          arrow-up
          19
          ·
          24 hours ago

          During hostile ignorance:

          • I had to leave the state I grew up in to get into a place I could access medical care, get away from an unaccepting family, and get into a place I wasn’t afraid of being attacked while transitioning (being visibly trans till HRT kicked in).
          • Trans panic was seen as more of a valid defence back then for killing trans people.
          • I think we were seen as more of a curiosity/fetish than people, but that’s debatable since that’s definitely still an issue.
          • People were more afraid of being visibly trans and finding community outside of forums was harder.
          • I was certain I’d lose my job when I inevitably had to come out and had prepared for it by saving up enough to get me through finding another job. I was amazed when that didn’t happen and most of the company accepted me. I still had to deal with harassment that nowadays would probably get those people fired.

          During hostile attention:

          • I had to leave my home due to the state no longer ignoring us and focusing on passing laws to make our lives more difficult.
          • I know a ton of trans people and have a stronger support network. Finding others is easier now.
          • Medical care is easier to get now if you aren’t living in one of the states currently trying to ban HRT.
          • Parents seem a little more accepting, but it’s still divisive.
          • I’m less afraid of the average person fucking with me in most areas of the US.
          • I’m afraid of government attempts to round me or my loved ones up into camps within the next few years.

          Generally, I prefer the visibility and broader social acceptance we have now. More people know about us, so more people hate us, but way more people accept us. I see it as similar to how being gay was in the aughts: more people were out and it was less of a big deal, even though there were still a lot of hate crimes against gay people. Now it’s far more accepted outside of ultra-conservative areas. I’m hoping we’ll be more accepted within a decade instead of being rounded up and killed en masse.

        • KittyCat@lemmy.world
          link
          fedilink
          arrow-up
          9
          ·
          1 day ago

          It probably depends on whether they pass or not. If you fully pass, 15–20 years ago was probably much easier in some regards.

        • Lyra_Lycan
          link
          fedilink
          English
          arrow-up
          6
          ·
          edit-2
          1 day ago

          I never experienced hostile ignorance but I do like to be left alone, so I’d vote ignorance over attention.

  • Sludgehammer@lemmy.world
    link
    fedilink
    English
    arrow-up
    189
    ·
    edit-2
    2 days ago

    When I heard that line I was like “Yeah, sure. We’ll never have AI in my lifespan” and you know what? I was right.

    What I wasn’t expecting was for a bunch of tech bros to create an advanced chatbot and announce “Behold! We have created AI, let’s have it do all of our thinking for us!” while the chatbot spits out buggy code and suggests mixing glue into your pizza sauce.

    • donut_delivery@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      22
      ·
      edit-2
      1 day ago

      You’re confusing AI and AGI: https://en.wikipedia.org/wiki/AI_effect

      AGI is what people mean, when they say “AI doesn’t exist”: https://en.wikipedia.org/wiki/Artificial_general_intelligence

      While AI is a program that can do a task associated with human intelligence: https://en.wikipedia.org/wiki/Artificial_intelligence

      AI is not supposed to be an artificial human being. AI just does a task that people associate with humans (the definition of intelligence tends to get readjusted after each such task is automated).

      A bot that plays chess is an AI.
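The chess-bot example above refers to classic symbolic AI: game-tree search. As a hedged, minimal illustration, here is the same exhaustive-search idea (minimax) that early chess programs used, applied to the much simpler game of Nim since a real chess engine is far too large for a sketch; the function name `can_win` is my own.

```python
from functools import lru_cache

# Nim: players alternate taking 1-3 stones; whoever takes the last stone wins.
# can_win(n) asks: can the player to move force a win with n stones left?
# This exhaustive game-tree search is the same family of technique
# (minimax) behind classic chess-playing programs.

@lru_cache(maxsize=None)
def can_win(stones):
    # You win if some legal move leaves your opponent in a losing position.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(can_win(4))  # False: every move leaves the opponent 1-3 stones
print(can_win(5))  # True: take 1 stone, leaving the opponent 4
```

No learning or understanding is involved; the program mechanically searches future positions, which is exactly why such bots have counted as "AI" for decades.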

    • ExcessShiv@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      29
      ·
      edit-2
      1 day ago

      AI is an umbrella term that covers many things we’ve already had for a long time, including things like machine learning. This is not a new definition of AI, it’s always been this definition.

      • Avieshek@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        22 hours ago

        You’re not going to achieve AI on classical computers; “AI” is simply a rebranding of machine learning, much like how 5G was advertised as bringing a futuristic utopia back in 2020, only for 4K to end up as a premium feature behind paid subscriptions everywhere from 𝕏 (Twitter) to YouTube.

        Quantum computers do exist, but they’re far from fitting in the palm of your hand.

        • SkyeStarfall
          link
          fedilink
          arrow-up
          9
          ·
          edit-2
          1 day ago

          Quantum computers are not going to be used for AI. They are not a mystical technology that will make everything better, and you will certainly never have a quantum computer in the palm of your hand.

          AGI on classical computers is likely to be viable, but in a roundabout way you’re right: we’re probably going to end up with radically different computers, probably ones that mimic physical brain structure for maximum AI effectiveness. That’s at least a few decades out, though it would likely be viable in our lifetime.

          • EldritchFeminity
            link
            fedilink
            arrow-up
            7
            ·
            1 day ago

            AGI on classical computers is likely to be viable

            THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.

    • Underfreyja@lemmy.ca
      link
      fedilink
      arrow-up
      37
      ·
      2 days ago

      I work in the gaming industry, and every week I receive emails about how AI is going to revolutionize my job and get sent to time-wasting training about how to use Figma AI or other shit like that, because it’s the best thing ever according to HR… and it never is, obviously.

      At best, it’s going to make middle-management jobs easier, but for devs like me, as long as the “AI” stays out of our engines and stays in the equivalent of cooperative vision boards, it does nothing. Not once have I tried to use it and had it turn out actually useful. It’s mediocre at best, and I can’t believe there are game devs who actually try to code with it. I can’t wait to see these hot-garbage products come onto the market.

      • Serinus@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        1 day ago

        I’ve been enjoying Copilot quite a bit while developing, particularly for languages that I’m not familiar with. I’m not worried about it replacing me, because I very clearly use my experience and knowledge to guide it and to coax answers out of it. But when you tell it exactly what you want, it’s really nice to get answers back in the development language without needing to look up syntax.

        “Give me some nice warning message css” was an easy, useful one.

        It’s effectively a better Google search.

    • REDACTED@infosec.pub
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 day ago

      I genuinely do not understand these very obviously biased comments. By the very definition of AI, we have had it for decades, and suddenly people say we don’t have it? I don’t get it. Do you hate LLMs so much that you want to change the entire definition of AI (and move it under AGI or something)? This feels unhinged, disconnected from reality; biases so strong they look like delusions.

      • jenesaisquoi@feddit.org
        link
        fedilink
        English
        arrow-up
        12
        ·
        1 day ago

        What is delusional is calling a token generator intelligent. These programs don’t know what the input is, nor do they understand what they put out. They “know” only which token is likely to follow a given sequence of tokens, based on previously supplied data.

        They understand nothing. They generate nothing new. They don’t think. They are not intelligent.

        They are very cool, very impressive and quite useful. But intelligent? Pffffffh
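The "likely successive token" mechanism described above can be sketched concretely. Here is a toy bigram model (function names like `train_bigram` are my own); real LLMs use deep networks over long contexts, but the core objective, predicting a plausible next token from previously supplied data, is the same.

```python
from collections import Counter, defaultdict
import random

def train_bigram(tokens):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def next_token(counts, token):
    # Sample a successor in proportion to how often it followed `token`
    # in the training data. No understanding involved, just statistics.
    successors = counts[token]
    if not successors:
        return None
    choices, weights = zip(*successors.items())
    return random.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat the cat ate".split()
model = train_bigram(corpus)
print(next_token(model, "cat"))  # "sat" or "ate", weighted by frequency
```

The model generates fluent-looking continuations of its training data without any representation of what the tokens mean, which is the point the comment is making.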

        • REDACTED@infosec.pub
          link
          fedilink
          English
          arrow-up
          3
          ·
          23 hours ago

          Why is it so hard for you to understand the word “artificial”? It seems like you even avoid it. Just like everything artificial, especially weed and flavours, it’s not the real thing and was never meant to be; and yet you’re essentially an old man yelling at a cloud because something is artificial and does not act like real human intelligence.

          • jenesaisquoi@feddit.org
            link
            fedilink
            English
            arrow-up
            1
            ·
            20 hours ago

            Artificial means man-made, not “literally not the thing.”

            “Artificial stone” means “a man-made stone-equivalent material,” not a pink fluffy unicorn.

            • REDACTED@infosec.pub
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              19 hours ago

              I don’t understand what point you are trying to make. Yes, AI, like everything else artificial, is man-made; I never said it was not. Is it anywhere near as good as human intelligence? No, I was also clear about that. So what are you arguing right now? The original argument was whether an LLM counts as AI (and whether AI itself exists), and by every definition, it does.

        • KittyCat@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          1 day ago

          We should steal the term from Mass effect, what we have is early VI, virtual intelligence, not AI.

      • EldritchFeminity
        link
        fedilink
        arrow-up
        3
        ·
        23 hours ago

        This argument pre-dates the modern LLM by several decades. When the average person thinks of AI, they think of Star Wars or any of a myriad of other works of science fiction. Most people have never heard the term in any other context, and so are offended by the implied comparison (in their understanding of the word) of LLMs to Data from Star Trek.

    • MDCCCLV@lemmy.ca
      link
      fedilink
      English
      arrow-up
      15
      ·
      2 days ago

      You won’t have general-purpose, true AI until it can actually think and reason, and LLMs will never do that. At most, they would be a way of interacting with an AI.

    • masterspace@lemmy.ca
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      2 days ago

      When I heard that line I was like “Yeah, sure. We’ll never have AI in my lifespan” and you know what? I was right.

      Unless you just died or are about to, you can’t really confidently make that statement.

      There’s no technical reason to think we won’t in the next ~20–50 years. We may not, and there may be a technical reason why we can’t. But the previous big technical hurdles were the amount of compute needed and the fact that computers couldn’t handle fuzzy pattern matching; modern AI has effectively solved the pattern-matching problem, and current large models like ChatGPT model more “neurons” than are in the human brain, let alone the power that will be available to them in 30 years.

        • Match!!@pawb.social
          link
          fedilink
          English
          arrow-up
          6
          ·
          1 day ago

          there’s plenty of reason to believe that, whether we have it or not, some billionaire asshole is going to force you to believe and respect his corporate AI as if it’s sentient (while simultaneously treating it like slave labor)

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          There’s plenty of economic reasons to think we will as long as it’s technically possible.

      • 10001110101@lemm.ee
        link
        fedilink
        English
        arrow-up
        9
        ·
        1 day ago

        current large models like ChatGPT model more “neurons” than are in the human brain

        I don’t think that’s true. Parameter counts are more akin to neural connections, and the human brain has something like 100 trillion connections.
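A rough back-of-envelope supports this correction. As an illustrative assumption (frontier model sizes are undisclosed), GPT-3's published 175 billion parameters can stand in for "current large models," compared against the common ~100 trillion estimate for synaptic connections in the human brain:

```python
# Treating model parameters as loosely analogous to synaptic connections,
# as the comment suggests. Both figures are rough, commonly cited estimates.
brain_synapses = 100e12    # ~100 trillion connections in the human brain
gpt3_parameters = 175e9    # GPT-3's published parameter count

ratio = brain_synapses / gpt3_parameters
print(f"~{ratio:.0f}x more connections in the brain")  # ~571x
```

Even granting models a couple orders of magnitude of growth since GPT-3, the brain's connection count remains far larger, which is the comment's point.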

      • lowleveldata@programming.dev
        link
        fedilink
        arrow-up
        13
        ·
        2 days ago

        the previous big technical hurdles were the amount of compute needed and that computers couldn’t handle fuzzy pattern matching

        Was it? I thought it was always that we haven’t quite figured out what thinking really is.

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 day ago

          I mean, no, not really. We know what thinking is. It’s neurons firing in your brain in varying patterns.

          What we don’t know is the exact wiring of those neurons in our brain. So that’s the current challenge.

          But previously, we couldn’t even effectively simulate neurons firing in a brain. AI algorithms are called that because they effectively can simulate the way that neurons fire (just using silicon), and that makes them really good at all the fuzzy pattern-matching problems that computers used to be really bad at.

          So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn’t use the wiring of our brains. Both are entirely possible now that we can experiment and build and combine simulated neurons at ballpark the same scale as the human brain.
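For what "simulating neurons with silicon and math" amounts to in practice, here is a minimal sketch of an artificial neuron: a weighted sum of inputs passed through an activation function. This is a loose mathematical analogy to biological firing, not a biophysical simulation, and the example values are arbitrary.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then "fire" through a sigmoid activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output strength in (0, 1)

# The "wiring" discussed above corresponds to the weight values
# connecting many such neurons into layered networks.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ≈ 0.668
```

Whether stacking enough of these units at brain scale yields thinking is exactly the open question being debated in this thread.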

          • lowleveldata@programming.dev
            link
            fedilink
            arrow-up
            5
            ·
            1 day ago

            Aren’t you just saying the same thing? We know it has something to do with neurons but haven’t figured out exactly how.

            • masterspace@lemmy.ca
              link
              fedilink
              English
              arrow-up
              3
              ·
              edit-2
              1 day ago

              The distinction is that it’s not ‘something to do with neurons’, it’s ‘neurons firing and signalling each other’.

              Like, we know the exact mechanism by which thinking happens, we just don’t know the precise wiring pattern necessary to recreate the way that we think in particular.

              And previously, we couldn’t effectively simulate that mechanism with computer chips, now we can.

      • lunarul@lemmy.world
        link
        fedilink
        arrow-up
        9
        ·
        2 days ago

        There’s no technical reason to think we won’t in the next ~20-50 years

        Other than that nobody has any idea how to go about it? The things called “AI” today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 day ago

          Assuming that the path to AGI involves something akin to all the intelligence we see in nature (i.e. brains and neurons), then modern AI algorithms’ ability to simulate neurons using silicon and math is inarguably and objectively a precursor.

          • lunarul@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            18 hours ago

            Machine learning, renamed “AI” with the LLM boom, does not simulate intelligence. It integrates feedback loops, which is kind of like learning, and it uses a network of nodes that kind of look like neurons if you squint from a distance. These networks have been around for many decades (I built a bunch myself in college), and at their core they’re just parameterized mathematical functions with a lot of tunable weights. Current technology allows very large networks and networks of networks, but it’s still not in any way similar to brains.

            There is separate research into simulating neurons and brains, but that is separate from machine learning.

            Also, we don’t actually understand how our brains work at the level where we could copy them. We understand some things and have educated guesses on others, but overall it’s still pretty much a mystery.
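The "feedback loop" kind of learning described above can be made concrete with a minimal sketch: a single weight fit by gradient descent. The point of the illustration is that this is curve fitting against supplied data, not thinking. (The data, learning rate, and iteration count here are arbitrary choices of mine.)

```python
# Fit y = w * x to data generated by y = 2x, by repeatedly nudging w
# in the direction that reduces squared error: the feedback loop.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
for _ in range(100):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # feedback: adjust w against the error gradient

print(round(w, 3))  # converges to ~2.0
```

Scaled up to billions of weights, this same adjust-against-error loop is what trains the networks the comment describes.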

  • hypeerror@sh.itjust.works
    link
    fedilink
    arrow-up
    29
    ·
    2 days ago

    We had already passed the peak. Limp Bizkit was popular. Every dude I know who loved that band is now a middle-aged incel.

    • Anomalocaris@lemm.ee
      link
      fedilink
      arrow-up
      25
      ·
      2 days ago

      God,

      thought they were cool.

      haven’t heard of them in decades, not until the new Devil May Cry on Netflix. enjoyed the nostalgia.

      now I’m afraid I’ve become a middle-aged incel.

      someone put me down before it gets worse, why am I suddenly interested in starting a podcast?

      • MDCCCLV@lemmy.ca
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        Fred Durst isn’t a bad singer; he did a pretty good cover of “Behind Blue Eyes.” Looking at reviews, it wasn’t super popular, but I thought it was decent.

        • Vanilla_PuddinFudge@infosec.pub
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          Durst is a phenomenal hardcore vocalist, and I’ve been on this kick for decades.

          “Pollution” off their first record.

          Quick, fast, he enunciates enough through vocal fry that you can still make him out through the chorus. His clean vocals work for their slower, hazier tracks.

          Is he the best? lol, no, that’s Corey Brandan from Norma Jean.

          The only problem is that they became a NuMetal band instead of a hardcore metal band, but that’s more me being selfish for my own taste. I still like Limp. They make fun metal and they’re good at it.

        • zo0@programming.dev
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          1 day ago

          I disagree wholeheartedly. Have you listened to the original song? That’s also what I thought before hearing the original, and honestly, they’re not even close.

    • JaymesRS@literature.cafeOP
      link
      fedilink
      English
      arrow-up
      48
      ·
      2 days ago

      The rise of authoritarianism and nationalism is happening in multiple countries, like Hungary, Russia, China, and the U.S. Parties like the AfD have grown in strength over the last 20 years, thanks in part to social media companies prioritizing “engagement,” and the money it makes them, over societal health.

      • Sanctus@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        20 hours ago

        I was more talking about how this says “peak” when the 90’s were decidedly not peak for a lot of the world.

          • queermunist she/her@lemmy.ml
            link
            fedilink
            arrow-up
            3
            ·
            edit-2
            1 day ago

            That was just a sensationalized story - they sealed the extra doors on a building but left one open, so they could do controlled screening. I think it was actually just that one building too. They didn’t weld people inside buildings to die 🙄

            As a result they had one of the best COVID responses in the world and saved countless lives.

        • explodicle@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 day ago

          I think there’s still hope for them. In the last 20 years China and Russia have become a lot more like us, too. There are decades when nothing happens, and weeks where decades happen.

          • Possibly linux@lemmy.zip
            link
            fedilink
            English
            arrow-up
            3
            ·
            23 hours ago

            I personally don’t see anything happening without some sort of massive revolution. The problem with revolutions is that they almost always just replace bad governments with dictators. It really makes me appreciate people like George Washington. Also, with China there’s the tiny issue of trying to fight one of the world’s most powerful governments.

            The other problem is that China and Russia brainwash their citizens to an extreme level. From birth, they carefully restrict the truth in order to keep the current government in power. This probably means that most people in those countries would be fine with authoritarianism, as they have never experienced full democratic liberties.