so it’s GANAM now (from GAFAM or GAMAM)

  • conciselyverbose@sh.itjust.works
    9 months ago

    LLMs are a bubble.

    But the uses of massively parallel math are still in their infancy: scientific compute, machine learning, all kinds of simulations. Nvidia has been setting itself up for all of it with CUDA for years. At least until we get better options for physically replicating neurons (primarily how interconnected they are in a brain), GPUs, and CUDA specifically, are how most AI is going to happen. And as the power increases, the ability to do increasingly complex physics simulations of increasingly complex phenomena is going to become more and more relevant. Right now it's stuff like protein folding, fluid dynamics, whatever. But there's way more coming. And all of it is going to use GPUs.
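
    To give a feel for what "massively parallel math" means here, a minimal sketch (my own illustration, not from the comment above): GPU-friendly workloads are element-wise operations like SAXPY (`y = a*x + y`), where every element is independent. On a GPU, each index in the loop below would map to its own hardware thread; CUDA is essentially the toolchain for expressing exactly that.

    ```python
    # Illustrative sketch: SAXPY (y = a*x + y) is the textbook example
    # of an embarrassingly parallel operation. Each element's result
    # depends only on that element, so all iterations could run at once
    # on parallel hardware such as a GPU.

    def saxpy(a, x, y):
        # Every iteration is independent of the others -- this is the
        # property that lets GPUs run thousands of them simultaneously.
        return [a * xi + yi for xi, yi in zip(x, y)]

    result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
    print(result)  # [12.0, 24.0, 36.0]
    ```

    The same independence property is what makes the scientific workloads mentioned above (protein folding, fluid dynamics) map well onto GPUs: they decompose into huge numbers of small, independent arithmetic operations.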