Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of those?

  • Atemu@lemmy.ml · 1 year ago

    > The instruction support for ARM built into llama.cpp is weak compared to x86.

    I don't know about you, but my M1 Pro is a hell of a lot faster than my 5800X in llama.cpp.

    These CPUs benchmark similarly across a wide range of other tasks.
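
    For context, and this is my understanding rather than llama.cpp's actual code: the hot loop is essentially a big dot product over quantized weights, so throughput hinges on how many elements each instruction can process (NEON handles 4 floats per op, AVX2 handles 8, AVX512 handles 16). A rough C sketch of that difference, compiled with -mavx2 -mfma:

    ```c
    /* Illustrative only, not llama.cpp code: scalar vs. AVX2 dot product. */
    #include <immintrin.h>
    #include <stddef.h>

    /* Scalar: one multiply-add per iteration. */
    float dot_scalar(const float *a, const float *b, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* AVX2 + FMA: 8 multiply-adds per iteration (n assumed to be a multiple of 8). */
    float dot_avx2(const float *a, const float *b, size_t n) {
        __m256 acc = _mm256_setzero_ps();
        for (size_t i = 0; i < n; i += 8)
            acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                                  _mm256_loadu_ps(b + i), acc);
        /* Horizontal sum of the 8 accumulator lanes. */
        __m128 lo = _mm_add_ps(_mm256_castps256_ps128(acc),
                               _mm256_extractf128_ps(acc, 1));
        lo = _mm_hadd_ps(lo, lo);
        lo = _mm_hadd_ps(lo, lo);
        return _mm_cvtss_f32(lo);
    }
    ```

    As far as I know llama.cpp selects these SIMD paths when it is built, so what got compiled in matters as much as what the CPU supports.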

      • Atemu@lemmy.ml · 1 year ago

        > No consumer AMD hardware is on that list.

        *No consumer Intel hardware is on that list.

        The only widely available consumer hardware with AVX512 support is AMD's Zen 4 (the Ryzen 7000 series).

        I think just about the only Apple computer that supports AVX512 is the 2019 Mac Pro.
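
        If you want to check what your own machine reports, GCC and Clang have a builtin for this on x86; a minimal sketch (on an ARM Mac you'd query sysctl hw.optional.* instead):

        ```c
        /* Minimal sketch: ask the CPU which SIMD extensions it reports.
         * __builtin_cpu_supports() is a GCC/Clang builtin, x86 only. */
        #include <stdio.h>

        int main(void) {
            __builtin_cpu_init();  /* populate the feature-detection data */
            printf("AVX2:    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
            printf("AVX512F: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
            return 0;
        }
        ```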