• Car@lemmy.dbzer0.com · 22 points · 9 months ago

    Interesting. There was a study put out some time ago that had 40 or so game theorists develop algorithms to compete against each other. The most successful algorithm cooperated with its opponent until the opponent defected, at which point it would defect on the next round.

    It never performed a first strike, and it launched only one retaliation strike for each attack its opponent performed. After retaliating, it went right back to cooperating, with no long-term ill will.
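
    For anyone curious, that strategy is simple enough to sketch in a few lines. Below is a minimal Python illustration of what’s commonly called tit-for-tat in an iterated prisoner’s dilemma; the payoff values, round count, and the always-defect rival are my own illustrative assumptions, not details from the tournament:

    ```python
    COOPERATE, DEFECT = "C", "D"

    # Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
    # These particular numbers are illustrative, not from the study.
    PAYOFF = {
        (COOPERATE, COOPERATE): 3,  # mutual cooperation
        (COOPERATE, DEFECT): 0,     # I cooperate, they defect
        (DEFECT, COOPERATE): 5,     # I defect, they cooperate
        (DEFECT, DEFECT): 1,        # mutual defection
    }

    def tit_for_tat(opponent_history):
        """Cooperate first; afterwards mirror the opponent's last move.
        Never strikes first, retaliates exactly once per defection,
        and forgives immediately afterwards."""
        if not opponent_history:
            return COOPERATE
        return opponent_history[-1]

    def play(strategy_a, strategy_b, rounds=200):
        """Run an iterated game; return both players' total scores."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each side sees only the rival's moves
            move_b = strategy_b(history_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    if __name__ == "__main__":
        # Against an unconditional defector, tit-for-tat loses only the
        # first round, then matches defection for the rest of the game.
        always_defect = lambda opponent_history: DEFECT
        print(play(tit_for_tat, always_defect))
    ```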

    • Ech@lemm.ee · 9 points · 9 months ago

      I think I saw something about that. It was an iterated prisoner’s dilemma game, right? I wouldn’t say that’s directly applicable to every gaming genre.

      • Car@lemmy.dbzer0.com · 6 points · 9 months ago

        Without being in the room, we can only go off what the article lays out. These are wargaming scenarios though, so escalation is a very real concern. If both sides are running these models to provide recommendations and both are pushing for greater conflict, you find yourself in a prisoner’s dilemma real quick.
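
        To make that concrete, here’s a rough sketch of the trap: if each side’s advisor scores “escalate” higher no matter what the other side does, both converge on mutual escalation even though mutual restraint pays better for both. The payoff numbers are invented purely for illustration:

        ```python
        # (our move, their move) -> our payoff; values are illustrative only.
        PAYOFFS = {
            ("restrain", "restrain"): 3,
            ("restrain", "escalate"): 0,
            ("escalate", "restrain"): 5,
            ("escalate", "escalate"): 1,
        }

        def best_response(their_move):
            """Pick whichever of our moves maximizes our payoff against theirs."""
            return max(("restrain", "escalate"),
                       key=lambda ours: PAYOFFS[(ours, their_move)])

        for theirs in ("restrain", "escalate"):
            print(f"If they {theirs}, our best response is to {best_response(theirs)}")
        # "escalate" wins both times, so two rational advisors land on
        # (escalate, escalate) with payoff 1 each instead of
        # (restrain, restrain) with payoff 3 each -- the classic dilemma.
        ```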

        • fidodo@lemmy.world · 4 points · 9 months ago

          These aren’t simulations that estimate outcomes; they’re language models extrapolating from a ton of human knowledge embedded as artifacts in text. They’re not necessarily going to pick the best long-term solution.

        • Ech@lemm.ee · 2 points · 9 months ago

          The models used by the writers of the article and those used by the military are going to be radically different.

          • Car@lemmy.dbzer0.com · 1 point · edited 9 months ago

            The writers of the article are reporting on the military’s use of these models; they aren’t using the models themselves. If I remember right, they called out some models developed by one of the defense contractors, like Palantir.

            • Ech@lemm.ee · 4 points · 9 months ago

              > The researchers tested LLMs such as OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude 2 and Meta’s Llama 2

              > All these AIs are supported by Palantir’s commercial AI platform – though not necessarily part of Palantir’s US military partnership

              Also, they’re reporting on a Stanford study of how these platforms could be used militaristically, not the military’s actual use of them.

              • Car@lemmy.dbzer0.com · 2 points · edited 9 months ago

                You’re right. I was focused on this part, made like an AI, and jumped the gun:

                > These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI. Palantir declined to comment and Scale AI did not respond to requests for comment.