The logical end of "the solution to bad speech is better speech" has arrived in the age of state-sponsored social media propaganda bots versus AI-driven bots arguing back

  • xkforce@lemmy.world · 9 points · 1 year ago

    OpenAI is so concerned that AI will do x and y bad things, yet it still pours all these resources into developing it further.

    • Spzi@lemm.ee · 8 points · 1 year ago

      There are other endeavors where a great deal of the effort is put into making it safe. Space travel for example.

      I wish that was the case for AI development. AI safety is a notoriously underfunded, understaffed and still overall neglected field.

      • xkforce@lemmy.world · 9 points · edited · 1 year ago

        If someone knows that what they’re about to create is going to do harm like this, they shoulder some of the responsibility for those consequences. They don’t just get to wash their hands of it as if they had no idea.

          • xkforce@lemmy.world · 7 points · edited · 1 year ago

            Because people should consider the pros and cons of what they work on, not just pretend that none of the responsibility for those cons is theirs. AI is one of the things that could wipe out humanity. Not in the Terminator sense, but through unparalleled disruption of the economy and by driving a wedge between people through the production of propaganda like none we’ve ever seen, e.g. deepfakes, personally tailored propaganda, etc.

      • etuomaala@sopuli.xyz · 3 points · 1 year ago

        In that case, would you object to the posting of detailed schematics on the internet for the creation of nuclear weapons?

    • Leate_Wonceslace@lemmy.dbzer0.com · 1 point · 1 year ago

      The incentives to continue development are far too great; if one firm abandons the project, that just means AI will be developed by a less ethical firm. This is why arguing that AI is bad in and of itself is a moderately effective way to reduce the ethics of the average AI developer.