Longtermism poses a real threat to humanity

https://www.newstatesman.com/ideas/2023/08/longtermism-threat-humanity

“AI researchers such as Timnit Gebru affirm that longtermism is everywhere in Silicon Valley. The current race to create advanced AI by companies like OpenAI and DeepMind is driven in part by the longtermist ideology. Longtermists believe that if we create a “friendly” AI, it will solve all our problems and usher in a utopia, but if the AI is “misaligned”, it will destroy humanity…”

@technology

  • @laylawashere44
    16 • 11 months ago

    A major problem with longtermism is that it presumes to speak for future people who are entirely theoretical, whose needs are impossible to accurately predict. It also deprioritizes immediate problems.

    So Elon Musk is associated with longtermism (self-proclaimed). He might consider that interplanetary travel is in the best interest of mankind in the future (reasonable). As a longtermist he would then feel a moral responsibility to advance interplanetary travel technology. So far, so good.

    But the catch is that he might feel that the moral responsibility to advance space travel by funding his rocket company is far more important than his moral responsibility to safeguard the well-being of his employees by not overworking them.

    I mean, after all, yeah, it might ruin the personal lives of a hundred, two hundred, even a thousand people, but what’s that compared to the benefit advancing this technology will bring to all mankind? There are going to be billions of people benefiting from this in the future!

    But that’s not really true. Because we can’t be certain that those billions of people will even exist let alone benefit. But the people suffering at his rocket company absolutely do exist and their suffering is not theoretical.

    The greatest criticism of this line of thought is that it gives people, or at the moment billionaires, permission to do whatever the fuck they want.

    Sure, flying on a private jet is ruinous to the environment, but I need to do it so I can manage my company, which will create an AI that will make everything better…

      • @laylawashere44
        15 • edited • 11 months ago

        Because it gives powerful people permission to do whatever they want, everyone else be damned.

        Both of the two major longtermist philosophers casually dismiss climate change in their books, for example (I have Toby Ord’s book, which is apparently basically the same as William MacAskill’s book, but first and better, supposedly). As if it’s something that can just be solved by technology in the near future. But what if it isn’t?

        What if we don’t come up with fusion power or something, and solving climate change requires actual sacrifices that had to be made 50 years before we figured out fusion wasn’t going to work out? What if the biosphere actually collapses and we can’t stop it? That’s a solid threat to humanity.

        • @wahming@monyet.cc
          7 • 11 months ago

          No, it gives them a justification to do so. But is that actually any different from any other belief system? Powerful assholes have always justified their actions using whatever was convenient, be it religion or otherwise. What makes longtermism worse, to the extent it’s a threat to humanity when everything else isn’t?

          • @Thrashy@beehaw.org
            10 • edited • 11 months ago

            Along the lines of @AnonStoleMyPants – the trouble with longtermism and effective altruism generally is that, unlike more established religion, it’s become en vogue specifically amongst the billionaire class, specifically because it’s essentially just a permission structure for them to hoard stacks of cash and prioritize the hypothetical needs of their preferred utopian vision of the future over the actual needs of the present. Religions tend to have a mechanism (tithing, zakat, mitzvah, dana, etc.) for redistributing wealth from the well-off members of the faith towards the needy in an immediate way. Said mechanism may often be suborned by the religious elite or unenforced by some sects, but at least it’s there.

            Unlike those religions, effective altruism specifically encourages wealthy people to keep their wealth to themselves, so that they can use their billionaire galaxy brains to more effectively direct that capital towards long-term good. If, as they see it, Mars colonies tomorrow will help more people than healthcare or UBI or solar farms will today, then they have not just a desire, but a moral obligation to spend their money designing Mars rockets instead of paying more taxes or building green infrastructure. And if having a longtermist in charge of said Mars colony will more effectively safeguard the future of those colonists, then by golly, they have a moral obligation to become the autocratic monarch of Mars! All the dirty poors desperate for help today aren’t worth the resources relative to the net good possible by securing that utopian future they imagine.

          • AnonStoleMyPants
            2 • 11 months ago

            I don’t think so, personally. The only difference might be that tech billionaires probably see it as more “their thing” than religion or whatever. Hence, quite bad.