I have an idea. I can’t tell if it’s good or bad. Let me know what you guys think.

I think when someone posts “clone credit cards HMU for my telegram I know you’re just here sitting here waiting like gee I wish someone would post me criminal scammy get rich quick schemes, I can’t want to have a felony on my record” type spam, there should be a bot the mods can activate that will start sending messages to the person’s telegram or whatever, pretending to be interested in cloned credit cards.

It wouldn’t be that hard to make one that would send a little “probe” message to make sure it was a for-real scammer, and then if they respond positively, then absolutely flood them with thousands of interested responses. Make it more or less impossible for them to sort the genuine responses from the counter-spam, waste their time, make it not worth their while to come and fuck up our community. And if they lose their temper it can save some of the messages and post them to some sort of wall of victory.

What do people think?

  • haui@lemmy.giftedmc.com
    link
    fedilink
    English
    arrow-up
    2
    ·
    10 months ago

    I just immediately thought the same. No way would they be able to distinguish that from a real person.

    • 0x4E4F@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      10 months ago

      You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.

      • CrayonRosary@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        9 months ago

        You can preface a ChatGPT session with instructions on what length and verbosity you want as replies. Tell it to roleplay or speak in short text message like replies. Or hell, speak in haikus. It’s pretty clever for an LLM.

        And if someone’s writing code to make a bot, they can privately coach the LLM before they start forwarding any replies between the real person.

          • Deebster@programming.dev
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            9 months ago

            No, you don’t need to train it, it’s just about the prompt you feed it. You can (and should) add quite a lot of instructions and context to your questions (prompts) to get the best out of it.

            “Prompt engineer” is a job/skill for this reason.

      • poweruser@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        1
        ·
        9 months ago

        I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol

      • kakes@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 months ago

        Nah, not really! I’ve chatted with people using ChatGPT, and most couldn’t tell. It’s pretty slick, blends in well with natural conversation.

        • 0x4E4F@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          9 months ago

          Most… you’re talking about the average Joe. People that write spam bots are not your average Joe.

          Plus, if you’re talking about a chat with multiple people, yes, it might stay under the radar. But 1 on 1, probably not.

          • kakes@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            0
            ·
            9 months ago

            Well, fair point about the spam bot creators, but in my experience, even in one-on-one chats, it holds up. I’ve had some pretty smooth conversations without anyone suspecting it’s AI.

              • kakes@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                2
                ·
                9 months ago

                This conversation is a small example. My previous messages in this comment chain were generated by ChatGPT.

                I’m too lazy to keep that up indefinitely, but at this point you can decide for yourself whether it was convincing enough.