Qwen 2.5 is already amazing for a 14B, so I don’t see how DeepSeek can improve that much with a new base model, even if they continue training it.
Perhaps we need to meet in the middle: have quad-channel APUs like Strix Halo become more common, and maybe release 40–80GB MoE models. Perhaps bitnet ones?
Or design them for asynchronous inference.
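The appeal of MoE at those sizes is that single-stream decode on a bandwidth-bound APU scales with *active* parameters, not total. A rough sketch of the arithmetic (all figures here are hypothetical assumptions for illustration, not real specs):

```python
# Back-of-envelope: single-stream decode is roughly memory-bandwidth-bound,
# so tok/s ≈ memory bandwidth / bytes read per generated token.
# All numbers below are assumptions, not published specs.

bandwidth_gb_s = 256     # assumed quad-channel LPDDR5X-8000 bandwidth
dense_weights_gb = 60    # hypothetical 60GB dense model (all weights per token)
moe_active_gb = 8        # hypothetical MoE: 60GB total, ~8GB active per token

dense_tps = bandwidth_gb_s / dense_weights_gb
moe_tps = bandwidth_gb_s / moe_active_gb

print(f"dense: ~{dense_tps:.1f} tok/s, MoE: ~{moe_tps:.1f} tok/s")
# → dense: ~4.3 tok/s, MoE: ~32.0 tok/s
```

Same memory footprint, very different interactive speed, which is why a 40–80GB MoE seems like a better fit for these boxes than a dense model of the same size.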
I just don’t see how 20B-ish models can perform like ones an order of magnitude bigger without a paradigm shift.
Nvidia’s new Digits workstation, while expensive from a consumer standpoint, should be a great tool for local inference research. $3,000 for 128GB isn’t a crazy amount for a university or other researcher to spend, especially when you look at the price of the 5090.
Why would you buy a single-use behemoth when you can buy a Strix Halo 128GB that works as an actual tablet/laptop, has all the functionality of the behemoth, and supports decades of legacy x86 software? Truly wondering why anyone would buy that Nvidia thing for any reason other than pure ignorance and marketing telling them NV is *the* AI company.
Dense models that would fit in 100-ish GB, like Mistral Large, would be really slow on that box, and there isn’t a SOTA MoE at that size yet.
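How slow? Single-user decode is roughly memory-bandwidth-bound, so a quick estimate is possible. The bandwidth figure below is an assumption for illustration, not an official Digits spec:

```python
# Rough decode-speed estimate for a ~100GB dense model (e.g. a quantized
# Mistral Large) on a unified-memory box. The bandwidth number is an
# assumed figure, not a published spec.

assumed_bandwidth_gb_s = 273   # hypothetical LPDDR5X unified-memory bandwidth
dense_model_gb = 100           # weights streamed once per generated token

tok_per_s = assumed_bandwidth_gb_s / dense_model_gb
print(f"~{tok_per_s:.1f} tok/s")
# → ~2.7 tok/s — a couple of tokens per second, painful for interactive use
```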
So, unless you need tons of batching/parallel requests, it’s… kinda neither here nor there?
As someone else said, the calculus changes with cheaper Strix Halo boxes (assuming those mini PCs are under $3K).