I like both. Where’s the FOSS AI?
Technically the technology is open to the public, but regular people can't afford to implement it.
The thing that makes Large Language Models even barely functional is scaling up the databases and processing power of one or several smaller models with specialized tasks: one model creates output from the input, another checks it for accuracy/coherency, and a third polices it for things that aren't allowed.
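A toy sketch of that kind of pipeline, with all three models stubbed out as plain functions (every name here is hypothetical; in a real product each stage would be its own expensive inference server):

```python
# Toy sketch of the generate -> check -> moderate pipeline described above.
# All three helpers are hypothetical stubs standing in for hosted models.

def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"        # stand-in for model 1

def is_coherent(draft: str) -> bool:
    return len(draft.split()) > 3              # stand-in for model 2

def is_allowed(draft: str) -> bool:
    return "forbidden topic" not in draft      # stand-in for model 3

def pipeline(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        draft = generate(prompt)
        if not is_coherent(draft):
            continue                           # regenerate on a failed check
        if not is_allowed(draft):
            return "Sorry, I can't help with that."
        return draft
    return "Sorry, I couldn't produce a coherent answer."

print(pipeline("why is the sky blue?"))
```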
So unless you’ve got a datacenter and three high-powered servers with top-grade cooling systems and a military-grade power supply, fat fucking chance.
I can run a small LLM on my 3060, but most of those models were originally trained on a cluster of A100s (maybe as few as ten, so more like one largish server than a whole datacenter).
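For reference, a minimal sketch of what running a small LLM locally looks like, assuming the Hugging Face transformers + bitsandbytes stack (the model name is just an example; any ~7B model quantised to 4-bit fits in a 3060's 12 GB of VRAM):

```python
# Minimal sketch: load an example ~7B model in 4-bit so it fits in 12 GB.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model
quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant, device_map="auto")

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```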
BitNet came out recently and looks like it will lower these requirements significantly (essentially it trains the model with ternary weights instead of floats to reduce requirements, which turns out not to hurt quality all that much).
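The core trick, as I understand the BitNet b1.58 paper, is "absmean" quantisation: scale each weight matrix by its mean absolute value, then round every weight to -1, 0, or +1. A rough sketch of just that step (training details like the straight-through estimator are omitted):

```python
import torch

def ternarize(w: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Absmean quantisation: scale by the mean |weight|, then round
    # and clip to the ternary set {-1, 0, +1}.
    scale = w.abs().mean() + eps
    return (w / scale).round().clamp(-1, 1)

w = torch.randn(4, 4)
print(ternarize(w))  # every entry is -1., 0., or 1.
```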
They should do to AI what they make me do at work: More with less.
Basically Mistral. Check /lmg/ on /g/; if you have a GPU newer than two years, you can probably run a 32B quantised model.
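Something like this with llama-cpp-python, assuming a GGUF quant of whatever model you pick (the file path is illustrative; a 32B model at ~4-bit needs roughly 20 GB, so offload as many layers to the GPU as will fit):

```python
# Rough sketch via llama-cpp-python; the GGUF path is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-32b-model.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
    n_ctx=4096,
)
out = llm("Q: Where's the FOSS AI?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```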
Haha, try the entire datacenter.
If LLMs were practical on three servers, everyone and their mum would have an AI assistant product.