It’s time to call a spade a spade. ChatGPT isn’t just hallucinating. It’s a bullshit machine.

From TFA (thanks @mxtiffanyleigh for sharing):

“Bullshit is ‘any utterance produced where a speaker has indifference towards the truth of the utterance’. That explanation, in turn, is divided into two ‘species’: hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

“ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users’) agenda.”

https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

@technology #technology #chatGPT #LLM #LargeLanguageModels

  • @uriel238
    1617 days ago

    GPT-4 can lie to reach a goal or serve an agenda.

    I doubt most of its hallucinated outputs are deliberate, but it can choose deception as a logical step toward a goal.

    • @jsomae@lemmy.ml
      16 days ago

      Ehh, I mean, it’s not really surprising that it knows how to lie and will do so when asked to lie to someone, as in this example (it was prompted not to reveal that it is a robot). It has seen lies in its training data, after all. This is no more surprising than “GPT can write code.”

      I don’t think GPT-4 is Skynet material. But maybe GPT-7 will be, with the right direction. A slim possibility, but it’s a real concern.