• ormr@feddit.de · 11 months ago

    Humans have a history of violence as well as altruism. And with increasing societal complexity, humans also have a consistent record of reducing violence. See e.g. “The Better Angels of Our Nature” (Pinker, 2011).

    Painting humans as intrinsically violent is not backed by evidence.

    • RedstoneValley@sh.itjust.works · 11 months ago

      Ok, maybe it helps to be more specific. We have an LLM trained on a broad range of human data: news, internet chatter, stories, but also books of all kinds, including ones on philosophy, diplomacy, altruism and so on. But if the topic at hand is “conflict resolution”, the overwhelming majority of that data will be about violent solutions. It’s true that humans have developed means of peaceful conflict resolution. But at the same time they also have a natural tendency to focus on “bad news”, so there is much more data available on the shitty things that happen in the world, which is then fed to the chatbot.

      To fix this, you would have to train an LLM specifically to have a bias towards educational resources and a moral code based on established principles.
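      As a rough illustration of what “biasing the training mix” could mean (all source names, texts and weights below are made up for the example, not taken from any real pipeline), you could oversample the scarce but desirable material when assembling the corpus:

      ```python
      # Toy sketch: oversample curated/educational sources relative to raw news
      # and internet chatter when drawing training examples. Everything here is
      # hypothetical and only meant to show the idea of reweighting the mix.
      import random

      sources = {
          "news_scrape": ["...violent conflict coverage..."] * 80,            # plentiful
          "internet_chatter": ["...heated argument thread..."] * 15,          # plentiful
          "ethics_and_diplomacy_texts": ["...negotiation case study..."] * 5, # scarce
      }

      # Upweight the scarce source so it shows up far more often per training
      # example than its raw share of the data would suggest.
      weights = {
          "news_scrape": 1.0,
          "internet_chatter": 1.0,
          "ethics_and_diplomacy_texts": 10.0,
      }

      def sample_training_example(rng: random.Random) -> str:
          names = list(sources)
          probs = [len(sources[n]) * weights[n] for n in names]
          chosen = rng.choices(names, weights=probs, k=1)[0]
          return rng.choice(sources[chosen])

      rng = random.Random(0)
      batch = [sample_training_example(rng) for _ in range(10)]
      ```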

      But current implementations (like ChatGPT) don’t work that way. Quite the opposite, in fact: in training, we first ingest all the data we can get our hands on (including all the atrocities in the world), and only in a second step do we fine-tune the LLM to make it “better”.
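      To make that two-step shape concrete, here is a minimal sketch using the Hugging Face transformers API. The base model (gpt2), the curated example texts and the hyperparameters are placeholders I picked for illustration; OpenAI’s actual pipeline is not public and certainly looks different:

      ```python
      # Sketch of "pretrain on everything, then fine-tune on a small curated set".
      # Model, data and hyperparameters are illustrative placeholders only.
      from transformers import (
          AutoModelForCausalLM,
          AutoTokenizer,
          DataCollatorForLanguageModeling,
          Trainer,
          TrainingArguments,
      )

      # Step 1 stand-in: start from a model already pretrained on a broad,
      # largely uncurated web corpus (GPT-2 as a small public example).
      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      # Step 2: fine-tune on a much smaller, curated dataset meant to shift
      # the model's behaviour (hypothetical examples).
      curated_texts = [
          "Mediation resolved the dispute without any party resorting to force.",
          "The neighbours agreed on a compromise after a facilitated discussion.",
      ]
      train_dataset = [tokenizer(t, truncation=True, max_length=128) for t in curated_texts]

      trainer = Trainer(
          model=model,
          args=TrainingArguments(
              output_dir="finetuned-example",
              num_train_epochs=1,
              per_device_train_batch_size=2,
          ),
          train_dataset=train_dataset,
          data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
      )
      trainer.train()  # after this, the weights lean towards the curated examples
      ```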