The Singularity is a hypothetical future event where technology growth becomes uncontrollable and irreversible, leading to unpredictable transformations in our reality[1]. It’s often associated with the point at which artificial intelligence surpasses human intelligence, potentially causing radical changes in society. I’d like to know your thoughts on what the Singularity’s endgame will be: Utopia, Dystopia, Collapse, or Extinction, and why?

Citations:

  • Singularity Endgame: Utopia, Dystopia, Collapse, or Extinction? (It’s actually up to you!)

  1. https://www.techtarget.com/searchenterpriseai/definition/Singularity-the ↩︎

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml
    1 year ago

    I would imagine that the biological phase of intelligent life is rather short, and I expect that in the long run intelligence will transition to post-biological substrates.

    I’d argue that the inventions of language and writing are the landmark moments in human development. Before language was invented, the only way information could be passed from ancestors to offspring was via mutations in our DNA. If an individual learned some new idea, it would be lost when they died. Language allowed humans to communicate ideas to future generations and to start accumulating knowledge beyond what a single individual could hold in their head. Writing made this process even more efficient.

    Once language was invented, humans started creating technology, and in the blink of an eye on a cosmological scale we went from living in caves to visiting space in rocket ships. It’s worth taking a moment to appreciate just how fast our technology evolved once we were able to accumulate knowledge through language and writing.

    Our society today is utterly unrecognizable to somebody from even 100 years ago. If we don’t go extinct, I imagine that in another thousand years future humans, or whatever succeeds us, will be completely alien to us as well. It’s pretty hard to predict what that would look like from where we stand now.

    With that caveat, I think we can make some assumptions, such as that future intelligent life will likely exist in virtual environments running on computing substrates, because such environments could operate at much faster speeds than our meat brains; what we consider real time would seem to pass on a geological timescale from that perspective. Given that, I can’t see why intelligences living in such environments would pay much attention to the physical world.

    I also think that we’re likely to develop human-style AIs within a century. It’s hard to predict such things, but I don’t think there’s anything magic about what our brains are doing. There are a few different paths towards producing a human-style artificial intelligence.

    The simplest approach might be to evolve one. Given a rich virtual environment, we could run an evolutionary simulation that selects for intelligent behaviors. This approach doesn’t require us to understand how intelligence works; we just have to create a set of conditions that selects for the kinds of intelligent behavior we’re looking for. This is a brute-force approach to creating AGI.
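    As a toy illustration of that brute-force idea (the genome encoding, fitness function, and every parameter below are invented for the sketch, nothing like an actual AGI recipe), a minimal evolutionary loop selects for a behavior without ever encoding how to achieve it:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=300, seed=0):
    """Minimal evolutionary loop: keep the fittest genomes, mutate copies."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by observed behavior
        survivors = pop[: pop_size // 2]           # selection pressure
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # random point mutation
            children.append(child)
        pop = survivors + children                 # next generation
    return max(pop, key=fitness)

# Select for all-ones genomes: we specify *what* counts as fit (the sum
# of the bits), never *how* to get there, and the loop finds it anyway.
best = evolve(fitness=sum)
print(best)  # [1, 1, 1, 1, 1, 1, 1, 1]
```

    The point mirrors the argument above: the experimenter only designs the environment and the selection criterion, and the solution emerges.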

    Another approach could be to map out the human brain down to the neuron level and create a physics simulation that emulates a brain. We aren’t technologically close to being able to do that yet, but who knows what will happen in the coming decades and centuries.
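    A whole-brain emulation is far beyond this, but the basic unit such a simulation might be built from looks something like the standard leaky integrate-and-fire neuron model (the threshold and leak values here are illustrative, not physiological):

```python
def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    each step, integrates the input current, and fires a spike (then
    resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(currents):
        v = leak * v + current   # decay toward rest, add input
        if v >= threshold:
            spikes.append(t)     # emit a spike at time step t
            v = 0.0              # reset membrane potential
    return spikes

# A constant drive makes the potential climb and the neuron fire periodically.
print(lif_spikes([0.3] * 20))  # [3, 7, 11, 15, 19]
```

    An emulation in the sense described above would wire billions of units like this together, with the connectivity read off a real brain rather than invented.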

    Finally, we might be able to figure out the algorithms that mimic what our brains do and build AIs based on those. This could be the most efficient way to build an AI, since we’d understand how and why it works, which would facilitate rapid optimization and improvement.
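    A toy version of that third path (everything here is illustrative: one artificial neuron trained with the classic perceptron rule, not a claim about how brains actually learn) captures the idea of figuring out the algorithm rather than copying the wiring:

```python
def train_neuron(data, lr=0.5, epochs=100):
    """Train a single linear-threshold neuron with the perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out     # zero when the prediction is right
            w[0] += lr * err * x1  # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron rule learns it exactly.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
print([1 if w[0] * x + w[1] * y + b > 0 else 0
       for (x, y), _ in data])  # [0, 0, 0, 1]
```

    Because the learning rule is explicit, you can inspect and tune it directly, which is the efficiency argument above in miniature.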

    My view is that if we made an AI with human-style consciousness, it should be treated as a person and have the same rights as a biological human. While we could never prove that an AI has internal experience and qualia, I think that morally we have to err on the side of trusting an AI that claims to have consciousness and self-awareness.

    I expect that post-biologicals will be the ones to go out and explore the universe. Meat did not evolve to live in space; we’re adapted to gravity wells. An artificial life form could be engineered to thrive in space without ever needing to visit planets. That’s the kind of life most likely to be prolific out there.

    One of the best sci-fi novels I’ve read on the subject is Diaspora by Greg Egan. It lays out what seems like a plausible scenario for the future of humanity.