Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 GPUs to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, with AI as its largest investment area.
The real winners are the chipmakers.
Gold rush you say?
Shovels for sale!
Get your shovels here! Can’t strike it rich without a shovel!
I feel like a pretty big winner too. Meta has been quite generous with releasing AI-related code and models under open licenses; I wouldn’t be running LLMs locally on my computer without the stuff they’ve been putting out. And I didn’t have to pay them a penny for it.
Subsidized by boomers everywhere looking at ads on Facebook, lol. Same with the Quest gear and VR development.
Was wondering why my stock was up. AI already improving my quality of life.
Who isn’t at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.
The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it’s estimated they “only” used 25k to train GPT-4. OpenAI has, in the past, claimed the diminishing returns on just scaling their model past GPT-4’s size probably aren’t worth it. So maybe Meta is planning to experiment with new ANN architectures, or planning a mass deployment of models?
The estimated training time for GPT-4 is 90 days though.
Assuming you could scale that linearly with the amount of hardware, you’d get it down to about 3.75 days. From four times a year to twice a week.
If you’re scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
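Back-of-the-envelope version of that math, using the thread’s figures (25k GPUs, 90 days, 600k H100-equivalents) and the stated assumption of perfectly linear scaling, which real distributed training won’t actually hit because of communication overhead:

```python
# Rough scaling estimate under an assumed perfectly linear speedup.
baseline_gpus = 25_000    # estimated GPUs used to train GPT-4
baseline_days = 90        # estimated GPT-4 training time
target_gpus = 600_000     # Meta's reported H100-equivalent count

scaled_days = baseline_days * baseline_gpus / target_gpus
runs_per_year = 365 / scaled_days

print(f"{scaled_days:.2f} days per run")    # 3.75 days per run
print(f"{runs_per_year:.0f} runs per year")  # 97 runs per year
```

About 3.75 days per run, so roughly twice a week instead of four times a year.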
Or they just have too much money.
Which will be solved by them spending it.
Would that be diminishing returns on quality, or training speed?
If I could tweak a model and test it in an hour vs 4 hours, that could really speed up development time?
Quality. Yeah, using the extra compute to increase speed of development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best model to use or use them all as an ensemble or something.
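A minimal sketch of that ensemble idea, with made-up numbers: three hypothetical models each emit a probability distribution over the same small vocabulary, and the ensemble just averages them before picking the top token.

```python
# Hypothetical: three models trained in parallel each predict a
# next-token probability distribution over a 3-token vocabulary.
preds = [
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
]

# Simple ensemble: average the distributions element-wise, take the argmax.
ensemble = [sum(p[i] for p in preds) / len(preds) for i in range(len(preds[0]))]
best_token = ensemble.index(max(ensemble))
print(best_token)  # 0
```

Real ensembling of LLMs is more involved (shared tokenizers, logit averaging, etc.), but the principle is the same: combine several independently trained models instead of betting everything on one run.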
My guess is that the main reason for all the GPUs is they’re going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as “open” then trying to entice people into their cloud ecosystem. Or, maybe they really are trying to achieve AGI as they state in the article. I don’t really know of any ML architectures that would allow for AGI though (besides the theoretical, incomputable AIXI).
Might be a bit of a tell that they think they have something.
Jensen’s gonna buy so many new leather jackets.
And spatulas. Don’t forget the spatulas.
Could just buy Spatula City.
Well, Zuck has a lot of users he has to create bullshit for, to keep them emotionally engaged and distracted.
Meta is the source of most of the open source LLM AI scene. They’re contributing tons to the field and I wish them well at it.
Only other game in town really.
I’ve heard mistral released some good models
After all he needs a good AI bot to teach him to be “more human” because humans are starting to suspect
Just like the Metaverse…this won’t have legs.
total expenditures potentially reaching $9 billion
I imagine they negotiated quite the discount in that.
They signed up for spam email so they could get a coupon code.
Agreed. There’s volume discount, and then there is “Facebook data center with an energy consumption of a small country volume discount”.
This is great! I thought there would be a chip-led recession. Sorry, homeless people, but you’re gonna have to wait another generation to try to get online and maybe buy a house someday far, far away… if you get my drift.
It does not give them personal access as privately as they may want (although privacy is generally respected), but at least there are public libraries for the poor and homeless to use computers and connect to the internet. One of the many, many ways libraries are essential to a community, especially to the poor.
No, what I meant is that everyone is currently hellbent on having a recession so they can magically afford to buy a house. The recession has been coming since China got cock-blocked from purchasing EUV systems by the US government. That in turn meant the company making those machines, the companies hoping to use them, and their investors were going to bite the dust. However, now Mr SuckmyVerga is investing in new devices built on machines from vendors not affected by the embargo, which means there won’t be a recession in chips. Probably. Maybe.

I don’t know what you were talking about, but I was referring to us homeless who cannot afford to buy a home, which does include library homeless and, currently here in Seattle, popsicle homeless. Well, I guess in most of the US actual homeless people are in libraries or are popsicles. Those people suffer tremendously, so don’t let my sarcastic cynicism fool you; my parents had food stamps and I had soggy cereal for breakfast plenty of times. I can’t believe anyone could survive being outside in the past couple of weeks without heating.
Consumer GPU shortage from hell incoming. Why would Nvidia waste their production on low-end GPUs if they can sell AI GPUs for, what, 70K USD apiece? This might become worse than the shortages caused by mining.
no more than ~26k apiece, it seems
I’m sure that everybody has some, but to spend billions seems a little premature.
Six months from now: “damn, we’re way behind Meta on AI. We should have spent billions six months ago, it’s going to cost way more to catch up.”
Chips evolve. By the time a billion dollar contract is fulfilled, they are two iterations behind.
Pretty sure they’ll be given insight into the roadmap for that price, and be able to place speculative orders on upcoming generations.
I used to present those roadmaps. They change too.
Of course they do, but my point was that I doubt Meta is locked into this generation.
The article says they will spend billions “by the end of the year”.
“Spend billions” does not equal “hand over cash and take home GPUs”. It’ll mean a contract worth that amount, with delivery terms defined over time. Even over the course of a year there’s likely to be newer product than Hopper.