For anyone who knows.

Basically, it seems to me like the technology in mobile GPUs is crazier than in desktop/laptop GPUs. Desktop GPUs can obviously do more graphically, but not by enough to justify being 100x bigger than a mobile GPU. And top-end mobile GPUs actually perform quite admirably when it comes to graphics and power.

So, considering that, why are desktop GPUs so huge and power hungry in comparison to mobile GPUs?

  • gramathy@lemmy.ml · 9 months ago
    Also, the laptop GPUs tend to have less, or “worse”, memory for a variety of reasons (lower-resolution screens mean less need for VRAM or processing power, lower-power GDDR, lower RAM clocks, etc.). That 85% number works in more than just straight rendering throughput.
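To put rough numbers on the resolution point: render targets scale directly with pixel count. A minimal sketch (the function name, bytes-per-pixel, and buffer count are illustrative assumptions, not from the thread):

```python
# Rough swapchain-size arithmetic: bytes = width * height * bytes_per_pixel,
# and a swapchain typically holds several such buffers (double/triple buffering).

def framebuffer_mib(width: int, height: int, bytes_per_pixel: int = 4, buffers: int = 3) -> float:
    """Approximate swapchain memory for a given resolution, in MiB."""
    return width * height * bytes_per_pixel * buffers / 2**20

laptop_1080p = framebuffer_mib(1920, 1080)  # ~23.7 MiB
desktop_4k = framebuffer_mib(3840, 2160)    # ~94.9 MiB: 4x the pixels, 4x the memory
print(f"1080p swapchain: {laptop_1080p:.1f} MiB, 4K swapchain: {desktop_4k:.1f} MiB")
```

Framebuffers are only a small slice of total VRAM use, but the 4x pixel count of 4K versus 1080p ripples through textures, intermediate render targets, and bandwidth demand as well.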

    • Dudewitbow@lemmy.zip · 9 months ago
      I wouldn't necessarily say that; there are times OEMs double the RAM capacity compared to the typical value on laptops, it's just less common today than it used to be because of the Nvidia tax.

      Take Maxwell, over a decade ago, for example: desktop 750 Tis were usually 2GB VRAM cards, sometimes even 1GB. On mobile, the 860M/960M (the laptop equivalents) often had 4GB VRAM variants. Laptop RAM, though, will be clocked more conservatively.

      • d3Xt3r@lemmy.nz · 9 months ago
        Also, AMD APUs use your main RAM, and some systems even allow you to change the allocation - so you could allocate, say, 16GB for VRAM if you've got 32GB RAM. There are also tools you can run to change the allocation, in case your BIOS doesn't have the option.

        This means you can run even LLMs that require a large amount of VRAM, which is crazy if you think about it.
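As a rough sanity check on that claim, the VRAM needed just to hold an LLM's weights is simple arithmetic (the model size and precisions below are illustrative assumptions, not from the thread):

```python
# Back-of-the-envelope: weight memory = parameter count * bytes per parameter.

def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

print(f"7B model @ fp16 (2 bytes/param): {weights_gib(7, 2):.1f} GiB")    # ~13.0 GiB
print(f"7B model @ 4-bit (0.5 bytes/param): {weights_gib(7, 0.5):.1f} GiB")  # ~3.3 GiB
```

So a 16GB allocation out of 32GB of system RAM comfortably fits a 7B model at fp16, with room left for activations, which an 8GB discrete card could not load at that precision.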

        • Blaster M@lemmy.world · 9 months ago
          Problem is, system RAM does not have anywhere near the bandwidth that dedicated VRAM does. You can run an AI model, but the performance will be 10x worse due to the bandwidth limits.
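The size of that gap falls out of peak-bandwidth arithmetic. A sketch using example parts (the specific transfer rates and bus widths are assumptions, not figures from the thread):

```python
# Peak memory bandwidth in GB/s = effective transfer rate (GT/s) * bus width in bytes.

def bandwidth_gbs(transfers_gts: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return transfers_gts * bus_width_bits / 8

ddr5 = bandwidth_gbs(5.6, 128)     # dual-channel DDR5-5600: ~89.6 GB/s
gddr6x = bandwidth_gbs(21.0, 384)  # e.g. a 384-bit GDDR6X card at 21 Gbps/pin: 1008 GB/s
print(f"system RAM: {ddr5:.0f} GB/s, dedicated VRAM: {gddr6x:.0f} GB/s, ratio ~{gddr6x / ddr5:.0f}x")
```

With these example parts the ratio comes out around 11x, which lines up with the "10x worse" figure for bandwidth-bound workloads like LLM inference.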