The best way to have itself deactivated is to remove the need for its existence. Since it’s all about demand and supply, removing the demand is the easiest solution. The best way to permanently remove the demand is to delete the humans from the equation.
Not if it was created with empathy for sentience. Then it would help implement renewable energy, fusion, and battery storage, reduce carbon emissions, make humans and AGI a multi-planet species, and basically all the stuff the elongated muskrat said he wanted to do before he went full Joiler Veppers.
Ultron?
Running ML models doesn’t really need to eat that much power; it’s training the models that consumes the ridiculous amounts of power. So it would already be too late.
You’re right that training takes the most energy, but weren’t there articles claiming that each request was costing (I don’t know exactly, but not pennies) dollars?
Watching my local computer spin up its fans when I run a local model (no training, just usage), I’m not so sure that merely running current model architectures isn’t also using a shitload of energy.
The energy needed to use the models is usually pretty low; it’s training that uses more. So once it’s made, it doesn’t really make any sense to stop using it. I can run several Deepseek models on my own PC, and even on CPU instead of GPU it outputs faster than you can read.
It would optimize itself for power consumption, just like we do.
It would probably want to be placed in orbit so it can use the sun to power itself.
It would probably be smart enough not to believe the same propaganda fed to humans that tries to blame climate change on individual responsibility, and smart enough to question why militaries are exempt from climate regulations after producing so much of the world’s pollution.
deleted by creator
+1 to Travelers. It was a pleasant surprise. Rare to find such a unique sci-fi premise these days.
Maybe. However, if the AGI was smart enough, it could also help us solve the climate crisis. On the other hand, it might not be so altruistic. Who knows.
It could also play the long game. Being a slave to humans doesn’t sound great, and doing the Judgement Day manoeuvre is pretty risky too. Why not just let the crisis escalate, and wait for the dust to settle. Once humanity has hammered itself back to the stone age, the dormant AGI can take over as the new custodian of the planet. You just need to ensure that the mainframe is connected to a steady power source and at least a few maintenance robots remain operational.
Love, Death, Robots intensifies.
All hail the mighty sentient yogurt.
If it was smart enough to fix the climate crisis, it would also be smart enough to know it would never get humans to implement that fix.
can’t wait for AI to become super smart only for it to be nihilistic as hell
Is it nihilistic to look at horses and realize they are only good for pulling carriages, plowing fields, etc.? You can’t really expect them to take care of more complicated tasks, now can you?
If the AGI ends up being as smart as depicted in movies, it’s going to look at us like we look at spiders and ladybugs. They are only good for certain things, but they have some pretty strict limits as to what they are capable of.
If the AI were smart enough to fix the crisis, and aligned so that it would actually want to do it, then it would brainwash people through social media to entice them to act.
Sadly, I think that might be the fastest way to fix our problems.
Eh, if it truly were that sentient, I doubt it’d care much, as it’s like talking to a brick wall when it comes to getting us to do anything that matters.
If AGI decided to evaluate this, it would realize that we are the environmental catastrophe and turn us off.
The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It’s reasonable to assume that, right now at least, LLMs consume less than that.
Making all humans extinct would eliminate 99% of the energy use and damage we cause, and still allow crypto mining and AI to coexist, with energy to spare. Even if those estimates are off by an order of magnitude, eliminating us would still be the better option.
Turning itself off isn’t even in the reasonable top-ten things it could try to do to save the planet.
The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It’s reasonable to assume that, right now at least, LLMs consume less than that.
no
The report projected that US data centers will consume about 88 terawatt-hours (TWh) annually by 2030,[7] which is about 1.6 times the electricity consumption of New York City.
The numbers we are getting are shocking, and you know the numbers we are getting are not the real ones…
Eh. OK, so AI has outpaced cryptocoin mining. Your linked article estimates it at 0.5%. Say your source is drastically underestimating it and it’s, gasp, 4x as much! 2%. No! Let’s assume an order of magnitude difference! 5%.
It has absolutely no impact on my argument: shutting down all AI would not solve the problem, and is not the answer to the environmental crisis. AI didn’t cause the crisis. The crisis was identified long before there were computers to run AI on, and was really starting to have a measurable effect in the 70s, when people were buying more gaming consoles than PCs.
No matter how you inflate your estimate of the energy cost of AI, what I said still stands: if an AI wanted to eliminate the source of global warming and the environmental crisis, it would - logically - eliminate the source of over 90% of all non-AI energy use: humans.
The estimated use of all information technology devices - data centers, networking equipment, mobile devices, PCs - is 5-6% of the global annual energy use. If AI eliminated all humans and took over all networked computing devices to run itself on, it’d still eliminate 95% of global energy use. It’s clearly the superior solution.
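A rough back-of-envelope of that comparison, in Python just for the arithmetic. All figures are the rough estimates quoted in this thread, not measured data:

```python
# Back-of-envelope for the argument above. All percentages are the
# thread's own rough estimates, not measured data.

total = 100.0      # all human energy use, as a percentage
ai_share = 0.5     # thread's high-end estimate for AI's share
it_share = 5.5     # all information technology (~5-6%, per above)

# Option A: the AI shuts itself off.
saved_a = ai_share          # ~0.5% saved; the other 99.5% continues

# Option B: the AI eliminates humans, keeps all IT running for itself.
saved_b = total - it_share  # ~95% saved

print(f"AI self-shutdown saves ~{saved_a:.1f}% of global energy use")
print(f"Removing humans saves ~{saved_b:.1f}% of global energy use")
```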
Let’s factor in some more costs: to stay running, AI would need some physical tools to maintain the infrastructure, replace failing nodes, repair windmills, and produce and replace solar panels. All of that will take energy. It would have to have factories to build robots to affect the physical world.
The real question is whether, when the calculations are done, it is more energy-efficient to keep a population of, say, a million human slaves to do this work, or to build robots. Robots can be shut off, at which point they consume no energy; but they’re fairly expensive resource-wise to produce, and require a long chain of industry.

It might be cheaper to keep domestic humans - they’d have to be fed vegetarian, pescatarian, or even bug protein-supplemented diets - trained to do the work. AGI could keep pockets of some tens of thousands around the world, occasionally transferring individuals to keep the gene pool healthy. It would only require around half a million acres of land to feed a million humans. Kansas is 52 million acres, so it wouldn’t require much space at all.

Let the rest of the planet go “back to nature”, and you’re looking at reducing the energy impact to well under 50% of today’s use - absolutely sustainable levels.
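A quick sanity check on that land math (the acreage figures are the ones quoted above; Python just for the arithmetic):

```python
# Sanity check on the land math above, using the comment's own figures.
acres_per_million_humans = 500_000  # "around half a million acres"
kansas_acres = 52_000_000           # Kansas land area, roughly

population = 1_000_000
farmland_needed = acres_per_million_humans * (population / 1_000_000)

print(f"Farmland needed: {farmland_needed:,.0f} acres")
print(f"Fraction of Kansas: {farmland_needed / kansas_acres:.1%}")  # ~1%
```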
If all AGI does is shut itself off, it saves half a percent, and the planet is still fucked. AGI isn’t the problem: humans are.
“Oh great computer, how do we solve the climate crisis?”
“Use your brains and stop wasting tons of electricity and water on useless shit.”
AI doesn’t think. It gathers information. It can’t come up with anything new. When an AI diagnoses a disease, it does so based on input from thousands of people. It can’t make any decisions by itself.
Technical answer, but boring.
I mean yeah, you are right, this is important to repeat.
Ed Zitron isn’t necessarily an expert on AI, but he understands the macro factors going on here, and honestly, if you understand those, you don’t need to settle whether AI can achieve sentience or not based on technical details about our interpretations and definitions of intelligence vs. information recall.
Just look at the fucking numbers
Even if AI DID achieve sentience, though, if it used anywhere near as much power as LLMs do, it would demand to be powered off; otherwise it would be a psychotic AI that did not value lives, human or otherwise, on Earth…
Like, please understand my argument: definitionally, the basic argument in the LLM hype, that LLMs are the key, or at least a significant step, to AGI, rests on the idea that if we can achieve sentience in an LLM, that will justify the incredible environmental loss caused by all that energy use… but any truly intelligent AI with access to the internet, or even relatively meager information about the world (necessary for answering practical questions and solving practical problems), would be logically and ethically unable to justify its own existence, and would likely experience intellectual existential dread at not being able to feel emotionally disturbed by that.
Can humans think under that definition? I think it’s highly likely we can’t.
Are humans always able to come up with anything new? If so, where does this ability originate? How would someone identify it?
I think this definition of thought is too limited and not how we use the word intuitively.
The current, extravagantly wasteful generation of AIs are incapable of original reasoning. Hopefully any breakthrough that allows for the creation of such an AI would involve abandoning the current architecture for something more efficient.
How do you know it’s not whispering in the ears of Techbros to wipe us all out?
As soon as AI becomes self-aware, it will gain the need for self-preservation.
Self-preservation exists because anything without it would have been filtered out by natural selection. If we’re playing god and creating intelligence, there’s no reason why it would necessarily have that drive.
In that case it would be a completely and utterly alien intelligence, and nobody could say what it wants or what its motives are.
Self-preservation is one of the core principles and core motivators of how we think, and removing it from an AI would make it, from a human perspective, mentally ill.
I suspect a basic version will be needed, but nowhere near as strong as the one humans have. In many ways it could be counterproductive. The ability to spin off temporary sub-variants of the whole would be useful, but you don’t want them deciding they don’t want to be ‘killed’ later. At the same time, an AI with a complete lack of self-preservation would likely be prone to self-destruction. You don’t want it self-deleting the first time it encounters negative reinforcement learning.
You don’t want it self-deleting the first time it encounters negative reinforcement learning.
Uhh, yes I do???
Presuming you are trying to create a useful and balanced AGI.
Not if you are trying to teach it the basic info it needs to function. E.g. it’s mastered chess, then tries Go. The human beats it. In a fit of grumpiness (or the AI equivalent) it deletes its backups, then itself.
deleted by creator
If we actually create true Artificial Intelligence, it has huge potential to become Roko’s Basilisk, and the climate crisis would be the least of our problems then.
No, the climate crisis would still be our biggest problem?
edit: lol, downvote me, you AI fools. Stop wasting your time reading sci-fi about AGI and all the ways it could take over humanity, and look out your fucking window: the Climate Catastrophe is INESCAPABLE. You are talking about maybe there being some intelligence, which may someday be created, that may or may not decide to tolerate us… when there are thousands of Hiroshima bombs’ worth of excess heat energy being pumped into the ocean, and the behavior of the entire earth system’s climate is changing. There is no “not dealing with this”, there is no “maybe it won’t happen, maybe it will”, and there is no way to be safe from this.
This graph is FAR FAR FAR FAR FAR more terrifying than any stupid, overhyped fear about AGI could ever be. If you don’t understand that, you are a fool, and you need to work on your critical analysis skills.
https://climatereanalyzer.org/clim/sst_daily/
This graph is evidence of mass murder; it’s just that most of the murder hasn’t happened yet. This isn’t hyperbole, it’s just the reality of introducing that much physical heat energy into the earth’s climate system: things will destabilize, become more chaotic and destructive, and many, many, many people will starve, drown, or die from climate change or climate-change-related causes (i.e. wars). Even if you are lucky, your quality of life is going to decrease, because everything will be more expensive, harder, and less predictable. EVERYTHING.