Running AI models without matrix math means far less power consumption—and fewer GPUs?

  • bitfucker@programming.dev · 5 months ago (edited)

    Good

    Edit: Oh shit nvm. It still requires dedicated hardware (an FPGA), so this is no different from, say, an NPU. To be fair, though, the researchers also tested the model on a traditional GPU, where it reduced memory consumption.

    • Pennomi@lemmy.world · 4 months ago

      Only for maximum efficiency. LLMs already run tolerably well on normal CPUs, and this technique would make them much more efficient there as well.
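
      To illustrate why this helps on ordinary CPUs (this is only a sketch, not the paper's implementation): with weights constrained to {-1, 0, +1}, a matrix-vector product reduces to sums and differences of inputs, so no multiplications are needed at all. The function name and shapes below are made up for the example.

      ```python
      import numpy as np

      def ternary_matvec(W, x):
          """Matrix-vector product where W contains only {-1, 0, +1}.

          Each output element is just a sum of the inputs where the weight
          is +1, minus the sum where the weight is -1 -- additions only,
          no multiplications.
          """
          out = np.zeros(W.shape[0], dtype=x.dtype)
          for i in range(W.shape[0]):
              out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
          return out

      # Tiny demo: matches the ordinary matmul result.
      W = np.array([[1, 0, -1],
                    [0, 1, 1]])
      x = np.array([2.0, 3.0, 4.0])
      print(ternary_matvec(W, x))  # same values as W @ x
      ```

      A real kernel would pack the ternary weights into bitmasks and use SIMD adds, but the arithmetic savings come from exactly this collapse of multiply-accumulate into accumulate.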