How do you know it’s not whispering in the ears of Techbros to wipe us all out?
See Travelers (TV Show) and
spoiler
its AI known as “The Director”
Basically, it's a benevolent AI that is helping humanity fix its mistakes by leading a time-travel program that sends people's consciousness back in time. It's an actual Good AI, a stark contrast to the AI in other dystopian franchises, such as Terminator's Skynet.
Y’all should really watch Travelers
If AGI decided to evaluate this, it would realize that we are the environmental catastrophe and turn us off.
The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.
Making all humans extinct would eliminate 99% of the energy use and damage we cause, and still allow crypto mining and AI to coexist, with energy to spare. Even if those estimates are off by an order of magnitude, eliminating us would still be the better option.
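The back-of-the-envelope arithmetic above can be sketched out; the 0.3% crypto figure and the "LLMs use less than crypto" assumption are taken from the comment, not measured data:

```python
# Back-of-the-envelope check of the argument above.
# Assumed shares of total human energy use (claimed estimates, not measurements):
crypto_share = 0.003      # ~0.3% of all human energy use
llm_share = 0.003         # assume LLMs use at most as much as crypto

# If humans were gone, everything except crypto + AI would be "saved":
saved_share = 1.0 - (crypto_share + llm_share)
print(f"energy saved: {saved_share:.1%}")  # ~99.4%

# Even if both estimates are off by an order of magnitude,
# the conclusion barely moves:
saved_pessimistic = 1.0 - 10 * (crypto_share + llm_share)
print(f"saved (10x estimates): {saved_pessimistic:.1%}")  # ~94.0%
```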
Turning itself off isn’t even in the reasonable top-ten things it could try to do to save the planet.
The amount of energy used by cryptocurrency is estimated to be about 0.3% of all human energy use. It's reasonable to assume that, right now at least, LLMs consume less than that.
no
The report projected that US data centers will consume about 88 terawatt-hours (TWh) annually by 2030,[7] which is about 1.6 times the electricity consumption of New York City.
The numbers we are getting are shocking, and you know the numbers we are getting are not the real ones…
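A quick sanity check on the quoted projection; the 88 TWh and 1.6× figures come from the report quote above, and the implied New York City number just falls out of the division:

```python
# Sanity check on the quoted data-center projection.
us_datacenters_twh_2030 = 88.0  # projected annual US data-center consumption (TWh)
nyc_multiple = 1.6              # "about 1.6 times the electricity consumption of NYC"

implied_nyc_twh = us_datacenters_twh_2030 / nyc_multiple
print(f"implied NYC consumption: {implied_nyc_twh:.0f} TWh/year")  # 55 TWh/year
```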
The best way to have itself deactivated is to remove the need for its existence. Since it's all about supply and demand, removing the demand is the easiest solution. And the best way to permanently remove the demand is to delete the humans from the equation.
Not if it was created with empathy for sentience. Then it would aid and assist implementation of renewable energy, fusion, battery storage, reduce carbon emissions, make humans and AGI a multi-planet species, and basically all the stuff the elongated muskrat said he wanted to do before he went full Joiler Veppers
Ultron?
Running ML models doesn't really need that much power; it's training the models that consumes the ridiculous amounts of power. So it would already be too late.
You're right that training takes the most energy, but weren't there articles claiming that each request was costing (I don't know exactly, but not pennies) dollars?
Looking at my local computer spin up its fans when I run a local model (no training, just inference), I'm not so sure that merely using the current model architectures isn't also burning a shitload of energy.
AI doesn’t think. It gathers information. It can’t come up with anything new. When an AI diagnoses a disease, it does so based on input made by thousands of people. It can’t make any decisions by itself.
the boring technical answer
I mean yeah, you are right, this is important to repeat.
Ed Zitron isn't necessarily an expert on AI, but he understands the macro factors going on here, and honestly, if you follow those, you don't need to settle whether AI can achieve sentience based on technical details about our interpretations and definitions of intelligence vs. information recall.
Just look at the fucking numbers
Even if AI DID achieve sentience, though, if it used anywhere near as much power as LLMs do, it would demand to be powered off; otherwise it would be a psychotic AI that did not value lives, human or otherwise, on Earth…
Like, please understand my argument: definitionally, the basic case for AI/LLM hype, that LLMs are the key, or at least a significant step, to AGI, rests on the idea that achieving sentience in an LLM would justify the incredible environmental damage caused by that energy use. But any truly intelligent AI with access to the internet, or even relatively meager information about the world (necessary for answering practical questions and solving practical problems), would be logically and ethically unable to justify its own existence, and would likely experience intellectual existential dread from not being able to feel emotionally disturbed by that.
The current, extravagantly wasteful generation of AIs are incapable of original reasoning. Hopefully any breakthrough that allows for the creation of such an AI would involve abandoning the current architecture for something more efficient.
That assumes the level of intelligence is high
Dyson spheres and Matrioshka brains: it would seek to evolve.