What they’re actually in panic over is companies using a Chinese service instead of US ones. The threat here is that DeepSeek becomes the standard that everyone uses, and it would become entrenched. At that point nobody would want to switch to US services.
Exactly, and these kinds of policies will only ensure that the west starts falling behind technologically. Stifling innovation to prop up monopolies will not be a winning strategy in the long run.
I keep seeing this sentiment, but in order to run the model on a high end consumer GPU, doesn’t it have to be reduced to like 1-2% of the size of the official one?
Edit: I just did a tiny bit of reading and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much it’s being reduced in quality to run locally.
You're on the right track, still. All these people touting it as an open model likely haven't even tried to run it locally themselves. The hosted version is not the same as what is easily runnable locally.
Just think of it this way: fewer digital neurons in smaller models means a smaller "brain". It will be less accurate, more vague, and make more mistakes.
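For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The numbers are the commonly cited ones (the full DeepSeek-R1 release is a 671B-parameter MoE; the "distilled" variants people run locally are separate 1.5B-70B Qwen/Llama models fine-tuned on R1 outputs), and the byte-per-parameter figures are assumed typical values, not anything official:

```python
def vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory for the weights alone, ignoring KV cache and overhead."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

full_r1 = vram_gb(671, 1.0)    # full R1, FP8 weights as released
distill_8b = vram_gb(8, 0.5)   # hypothetical 8B distill, 4-bit quantized

print(f"Full R1 weights:    ~{full_r1:,.0f} GB")     # ~625 GB: far beyond any consumer GPU
print(f"8B distill (4-bit): ~{distill_8b:.1f} GB")   # ~3.7 GB: fits on a consumer card
print(f"Parameter ratio:    {8 / 671:.1%}")          # ~1.2%, i.e. the '1-2%' mentioned above
```

So the "runs on a gaming GPU" versions really are in that 1-2% parameter range, which is why they behave so differently from the hosted model.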
Just download the model. Problem solved
https://securityconversations.com/episode/inside-the-deepseek-ai-existential-crisis-chinese-backdoor-in-medical-devices/ If you ignore the kind of latent anti-China crap, this is a pretty good analysis from a technical perspective. When someone does something faster and cheaper, we used to call that progress. Not if China does it, I guess, and not if it's open source, even though Meta did the same thing with Llama.