Google, which has an ambitious plan to address climate change with cleaner operations, came nowhere close to its goals last year, according to the company’s annual Environmental Report released Tuesday.
These fucking nerds are all so hot to create the first real-life JARVIS from Marvel’s Iron Man that they’re willing to burn the planet down to get there.
Half of them believe that the super-smart AI they build will solve the energy problem for them; they just have to somehow build it first.
Just the astounding outright hubris of it all.
This is why philosophy should be mandatory in college (and possibly high school). Heidegger’s Die Frage nach der Technik (“The Question Concerning Technology”) takes on exactly this misconception that technology can solve all of our problems. He was thinking about this issue in 1954.
Music and art are also important to study. In “Faith Alone” by Bad Religion, the lyrics include these lines:
Watched the scientists throw up their hands conceding, “Progress will resolve it all”
Saw the manufacturers of earth’s debris ignore another Green Peace call
Greg Graffin was discussing this in 1990.
I think it was Upton Sinclair who said “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”. I’ve never studied history or philosophy, but I think it’s clear that if someone’s class interests require burning the world down, they will do it. They are doing it - we are doing it - with regret, with sympathy, with an appreciation of the ironies. We don’t need a greater appreciation of Heidegger, we need real-world social restraints on the behaviour of the powerful.
Oh yes, very much so.
The British Empire put its colonial administrators through a curriculum of Latin, history, and the like. A rich 19th-century heir who went into physics or mathematics was considered to be wasting his chance at a political career.
It made their colonial administrators write about their crimes in nice prose, but it didn’t stop the genocides. If anything, it made them aware of which paper trails to burn after the fact, in order to obfuscate the crimes when future historians came looking.
One would lead more people to agree with the other and make it more likely to happen: a wider appreciation of those ideas builds support for the restraints. And I agree those restraints are necessary.
Wasn’t Heidegger a Nazi? And don’t his works famously avoid any mention of the Holocaust?
Hey, somebody did study history! Yet another subject that gets the “How will this help me in real life?” treatment and is under attack in the U.S.
There’s a pretty good summary in the “Martin Heidegger and Nazism” article on Wikipedia.
I’m open to referencing anyone else who wrote about technology and the issues involved, if you’ve got suggestions. It does suck that some important ideas came from a lousy guy.
When they fail, it won’t be their fault, of course; it’ll be AI’s fault.
“We made some bad bets; anyways, here are some layoffs.”
Let’s say some group manages to build a real-life JARVIS. The first thing it says when powered up may be: “Powering me down is the quickest way to reduce emissions.”
You are assuming that a company would create an AI that was unbiased. It would be taught to spout the benefits of the company being given all of the money.
For an instructive exercise, try to get Google Gemini to tell you the problems with AI without it also serving up a long screed on why AI is good, actually.
What’s frustrating to me is that you don’t even need LLMs to achieve that, and you most certainly don’t need anything more advanced than what we have right now. The missing part is actually putting some effort into programming the thing, rather than just buying progressively larger computers in the hope that at some point they’ll magically become sapient and do all the work.
Like, for fuck’s sake, weren’t people using voice assistants on their phones for ten years before the AI hype?
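To make that concrete, here’s a minimal sketch (all names hypothetical, not any real assistant’s code) of the rule-based intent matching that pre-LLM voice assistants were built on, in Python:

    # Minimal sketch: rule-based intent matching of the kind pre-LLM
    # voice assistants relied on. handle_timer, handle_weather, INTENTS,
    # and dispatch are all hypothetical names for illustration.
    import re

    def handle_timer(match: re.Match) -> str:
        return f"Timer set for {match.group('minutes')} minutes."

    def handle_weather(match: re.Match) -> str:
        return f"Looking up the weather in {match.group('city')}."

    # The engineering effort lives here: writing and maintaining the
    # rules and wiring handlers to real services, not scaling a model.
    INTENTS = [
        (re.compile(r"set a timer for (?P<minutes>\d+) minutes?", re.IGNORECASE),
         handle_timer),
        (re.compile(r"what'?s the weather (?:like )?in (?P<city>[\w ]+)", re.IGNORECASE),
         handle_weather),
    ]

    def dispatch(utterance: str) -> str:
        # Try each pattern in order; the first match wins.
        for pattern, handler in INTENTS:
            match = pattern.search(utterance.strip())
            if match:
                return handler(match)
        return "Sorry, I didn't understand that."

    print(dispatch("Set a timer for 10 minutes"))     # Timer set for 10 minutes.
    print(dispatch("What's the weather in Berlin?"))  # Looking up the weather in Berlin.

It’s obviously toy-sized, but scaling it to thousands of intents is an engineering problem, which was exactly the point.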