People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed impossible.
And a select few have massively profited from it, while labour produces more and more without a corresponding increase in compensation.
He’s using the “exclusive we.”
American oligarchs really believe their shit doesn’t smell.
Give it a bit; his empire of hype is burning money and generating comparatively little income.
Oh, I think Altman is smart enough to develop contingency plans to maximize the benefit to himself at the peak of the hype and leave someone else holding his bags. He is a grifter, a conman; he will call his grandmother a fat ugly whore if he thinks he can benefit from it while “managing” the PR impact.
That being said, the contrast between the comically bombastic statements about AI utopia (which clearly benefit him financially) and the teenage-level presentation and research (the topics he brings up are serious; they are not something you can just shit out in a low-effort blog post) is a sight to behold.
In a new manifesto, OpenAI’s Sam Altman…
LOL. I jokingly asked a few days ago if well-adjusted people ever write manifestos, and the answer is still “no”.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
Are you sure about that, Sam? Because one, you’re the snake oil salesman writing this and I wouldn’t trust you as far as I can throw you, and two, yeah, maybe it scales predictably, but the prediction is that training the next generation for a marginal improvement will cost an exponentially larger $100 billion (and that is taking your Microsoft discount for compute into account). You’re hitting a wall hard and the profits are still not in sight. This avenue of progress is a dead end and Sam knows it, because OpenAI is selling PPUs instead of stock and looking to Saudi investment. Don’t get stuck holding the bag, folks; the few thousand days Sam claims to need aren’t survivable.
First of all, you make a great point.
Second of all, that quote made me laugh out loud. “In 15 words”? Why is that even there? I can just picture Sam sitting there with Word open, cursor blinking at the end of his sentence about how deep learning worked, wondering how to make it more impactful. So he copies the sentence, pastes it into a chatgpt box, and asks, “how can I make this hit differently?” and chatgpt, in all its gptness, responds: “try counting the number of words in the sentence and throwing that in front.”
Go back to jail. Do not pass Go.
“Wouldn’t it be great if everybody gave my AI company money?”
“For doing what?”
“… I don’t follow.”
What scares me is that con men and delusional idiots are the ones making the decisions about AI. Like biological weapons development, this is an area where unintended consequences have the potential to destroy mankind. And it is in the hands of people who have demonstrated that they will fire anyone who wants to slow them down by examining the risks and the underlying ethics of what they are doing.
Altman is the most obviously terrible example of someone who should never be allowed near this technology, but his counterparts at Google, IBM, Apple, and the other tech giants are nearly as bad. They want the fame, money, and power this could bring them. None of them are looking out for the good of humanity as a whole.
I firmly believe that our best hope, at least for the moment, is that general AI is going to take longer than they think. We are not going to achieve it by building more powerful versions of what we have now. It will require something new and different. By the time that breakthrough happens, we need to have responsible people managing it.
the good of humanity as a whole
My headcanon is that the Borg in Star Trek began with an AGI with that exact main goal.