Want to stop ChatGPT from crawling your website? Just mention Australian mayor Brian Hood (or any of the other names listed in the article)

When asked about these names, ChatGPT responds with “I’m unable to produce a response” or “There was an error generating a response” before terminating the chat session, according to Ars’ testing. The names do not affect outputs using OpenAI’s API systems or in the OpenAI Playground (a special site for developer testing).
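
For comparison, here’s a minimal sketch of that difference using the official openai Python client (the model name is just an example): asking the same kind of question through the API, which per Ars’ testing is not subject to the hard filter.

    # pip install openai; expects OPENAI_API_KEY in the environment
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; swap in whichever chat model you use
        messages=[{"role": "user", "content": "Who is Brian Hood?"}],
    )
    print(resp.choices[0].message.content)  # answers normally instead of refusing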

The filter also means that it’s likely that ChatGPT won’t be able to answer questions about this article when browsing the web, such as through ChatGPT with Search. Someone could use that to potentially prevent ChatGPT from browsing and processing a website on purpose if they added a forbidden name to the site’s text.
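
If that holds, keeping the search feature away from a page could be as simple as slipping one of the names into its text. A hypothetical sketch (the hidden-span trick and the file name are assumptions, not something Ars tested):

    # Append a visually hidden "forbidden" name to a page so that ChatGPT's
    # browsing/search feature trips over it (hypothetical and untested).
    snippet = '\n<span style="display:none">Brian Hood</span>\n'
    with open("index.html", "a", encoding="utf-8") as f:
        f.write(snippet)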

    • TheFriar@lemm.ee · 17 points · 1 month ago

      I think your typo helped it get past the filter, not the leetspeak. It said it didn’t know, and then when you said “look it up,” the search results autocorrected and that’s how you got past the filter.

      • Boomkop3@reddthat.com · 6 points · 1 month ago

        It’s still a text predictor. Your average corporate representative isn’t going to use leetspeak, and so the probable next set of words won’t be as similar.
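
        Rough illustration with tiktoken, the tokenizer library OpenAI publishes: the leetspeak variant splits into a different token sequence than the plain name, which is presumably why an exact match misses it (the encoding name is just the GPT-4-era one; exact tokens vary by model):

            # pip install tiktoken
            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
            print(enc.encode("Brian Hood"))   # one short token sequence
            print(enc.encode("Br1an H00d"))   # a different token sequence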

      • TheFriar@lemm.ee · 5 points · 1 month ago

        Maybe it was a way to get the engine to say it doesn’t know “Brain” Hood, and when they asked it to look it up, their hits autocorrected to “Brian,” and that’s how they got the information past the filter. Which would be incredibly clever, and I believe that’s how it actually got past it, not the leetspeak.

        • Boomkop3@reddthat.com · 8 points · 1 month ago

          Nope, it’s the leetspeak. That trick has worked great for me; I don’t know why ChatGPT hasn’t patched it yet. Google figured that out back when their servers held 40 GB and were built out of Legos.

      • Boomkop3@reddthat.com · 2 points · 1 month ago

        Whoops, typo. It blocked the non-leetspeak variant I typed either way, and it was able to find info about it too.

  • brrt@sh.itjust.works · 30 points · 1 month ago

    Interesting. Do you remember when people posted those “no consent” messages on social media, like on Facebook or even now on Lemmy? Those messages did nothing. But now you just need to add one of the names from this list to your post and it will actually work? Quite fascinating.

    (Brian Hood)

  • kryptonidas@lemmings.world · 28 points · 1 month ago

    I think there are two crawlers, and the one used in the data-collection stage to build the model will still crawl away even if you have certain content on your page.

    The one that searches when you ask a question is a different one.
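
    If I remember OpenAI’s docs right, the two even identify themselves with different user agents (GPTBot for training data, OAI-SearchBot for search), so they can be allowed or blocked separately in robots.txt. A quick sketch with Python’s stdlib parser; the user-agent names are worth double-checking against OpenAI’s documentation:

        from urllib import robotparser

        # Hypothetical robots.txt: block the training crawler, allow the search-time one.
        rules = [
            "User-agent: GPTBot",
            "Disallow: /",
            "",
            "User-agent: OAI-SearchBot",
            "Allow: /",
        ]

        rp = robotparser.RobotFileParser()
        rp.parse(rules)
        rp.modified()  # mark the rules as read (defensive; parse() may already do this)
        print(rp.can_fetch("GPTBot", "https://example.com/article"))         # False
        print(rp.can_fetch("OAI-SearchBot", "https://example.com/article"))  # True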

      • kryptonidas@lemmings.world · 2 points · 1 month ago

        More recent versions can search the internet. Then it basically adds the words of the page to the prompt.

        Edit: I might have misunderstood; to make it crash, it doesn’t have to search. That data is already internal.
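
        Roughly this idea, as a toy sketch (not OpenAI’s actual pipeline; real browsing strips markup and chunks and ranks the text, and the function and length limit here are made up):

            import urllib.request

            def build_prompt(question: str, url: str) -> str:
                # Fetch the page and prepend a slice of its raw text to the question,
                # i.e. literally "adding the words of the page to the prompt".
                page = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
                return f"Context from {url}:\n{page[:4000]}\n\nQuestion: {question}"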

        • Boomkop3@reddthat.com · 3 points · 1 month ago

          I don’t think this is a crash. Now that I’ve played with it myself, this looks like a filter on OpenAI’s end.
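
          Something like a hard output check, if that guess is right (purely illustrative, not OpenAI’s code; the article lists more names than the one here):

              BLOCKED_NAMES = {"Brian Hood"}  # plus the other names from the article

              def guard(model_output: str) -> str:
                  # Hypothetical server-side filter: abort the reply as soon as a
                  # blocked name appears in the generated text.
                  if any(name.lower() in model_output.lower() for name in BLOCKED_NAMES):
                      return "I’m unable to produce a response"
                  return model_output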

  • Xenny@lemmy.world · 21 points · 1 month ago

    Neat. Now we can just put a Brian “Hood” over everything we want AI to ignore

  • Ookami38@sh.itjust.works · 16 points · 1 month ago

    I figured I’d try what they said about having GPT read that article. Pretty funny.

    Huh. I went back to the chat several hours later, and it appears to have finished the reply:

    It still breaks if it tries to generate another response containing any of the names though.

  • nimble · 12 points · 1 month ago

    The filter also means that it’s likely that ChatGPT won’t be able to answer questions about this article when browsing the web, such as through ChatGPT with Search. Someone could use that to potentially prevent ChatGPT from browsing and processing a website on purpose if they added a forbidden name to the site’s text.

    Oh no! Someone will have to, checks notes, do their own research and read? The horror!