• iii@mander.xyz
    2 days ago

    Most commercial models have that, sadly. At training time they’re presented with both positive and negative responses to prompts.

    If you have access to the trained model’s weights and biases, it’s possible to undo this through a method called abliteration (1)

    The silver lining is that it makes explicit what different societies want to censor.
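(For the curious: roughly, abliteration estimates a “refusal direction” in activation space and projects it out of the model’s weights. A minimal numpy sketch of that idea, not any particular library’s API — the function names and shapes here are made up for illustration:)

```python
import numpy as np

def refusal_direction(acts_refuse, acts_comply):
    # Hypothetical inputs: hidden activations (n_samples, d_model)
    # collected on prompts the model refuses vs. complies with.
    # The mean difference approximates the "refusal" direction.
    d = acts_refuse.mean(axis=0) - acts_comply.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W, r):
    # Project the unit direction r out of weight matrix W:
    #   W' = W - r r^T W
    # so the ablated layer can no longer write into direction r.
    r = r.reshape(-1, 1)
    return W - r @ (r.T @ W)
```

After ablation, `r @ W'` is (numerically) zero, i.e. the layer output has no component along the refusal direction, which is why the model stops producing the trained refusals.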

      • SkyeStarfall
        2 days ago

        In fact, there are already abliterated models of DeepSeek out there. I got a distilled version of one running on my local machine, and it talks about Tiananmen Square just fine.

    • Snot Flickerman
      2 days ago

      Hi, I noticed you added a footnote. Did you know that footnotes are actually able to be used like this?[1]

      Code for it looks like this: `able to be used like this?[^1]`

      [^1]: Here's my footnote


      1. Here’s my footnote ↩︎

        • Snot Flickerman
          2 days ago

          I actually mostly interact with Lemmy via a web interface on the desktop, so I’m not sure how much support each app has for the more obscure tagging options.

          It’s rendered in a special way on the web, at least.

            • Snot Flickerman
              2 days ago

              markdown syntax

              Yeah, I always forget the actual name of it; I just memorized some of them early on in using Lemmy.