• ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 21 points · 1 day ago

      What they’re actually in a panic over is companies using a Chinese service instead of US ones. The threat here is that DeepSeek becomes the standard everyone uses and gets entrenched; at that point, nobody would want to switch to US services.

    • Corngood@lemmy.ml · 3 points · 21 hours ago (edited)

      I keep seeing this sentiment, but in order to run the model on a high-end consumer GPU, doesn’t it have to be cut down to something like 1-2% of the size of the official one?

      Edit: I just did a tiny bit of reading, and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much quality is lost by shrinking it to run locally.
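
      For a rough sense of scale, here’s a back-of-the-envelope on the weights alone (ignoring KV cache and activations): the full DeepSeek-R1 is 671B parameters, and the distilled 7B variant people actually run on consumer GPUs is about 1% of that, which lines up with the figure above. The parameter counts are the published ones; the helper function is just my own illustration:

      ```python
      # Rough weight-memory estimate: parameters x bytes-per-parameter.
      # Ignores activations and KV cache, so real usage is somewhat higher.
      def vram_gib(params_billion: float, bits_per_param: float) -> float:
          return params_billion * 1e9 * (bits_per_param / 8) / 2**30

      models = {
          "DeepSeek-R1 671B (FP8)": (671, 8),
          "R1-Distill-Qwen-32B (FP16)": (32, 16),
          "R1-Distill-Qwen-7B (4-bit quant)": (7, 4),
      }

      for name, (params, bits) in models.items():
          print(f"{name}: ~{vram_gib(params, bits):.0f} GiB of weights")

      # 7B / 671B is the "1-2%" figure people keep quoting:
      print(f"7B is {7/671:.1%} of the full model's parameters")
      ```

      So the full model needs roughly 625 GiB just to hold its weights, while a 4-bit 7B distill fits in about 3 GiB, which is why only the heavily shrunken versions run on a single consumer card.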

        • azron@lemmy.ml · 2 points · 15 hours ago

        You’re on the right track still. All these people touting it as an open model likely haven’t even tried to run it locally themselves. The hosted version is not the same as what’s easily runnable locally.

        • skuzz@discuss.tchncs.de · 2 points · 19 hours ago

        Just think of it this way: fewer digital neurons in a smaller model means a smaller “brain”. It will be less accurate, more vague, and make more mistakes.