• jballs@sh.itjust.works
    5 months ago

    Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005).

    Now I kinda want to read On Bullshit.