I’ve started to realize that every social media platform — Facebook, Telegram, Twitter, and so on — has issues with bot spam and fake follower accounts. These platforms typically combat the problem with measures such as ban waves, behaviour detection, and more.

What strategies/tools does Lemmy employ to address bots, and what additional measures could further improve these efforts?

  • Emily
    112 months ago

    As a moderator of a couple of communities, some basic/copypasta misbehaviour is caught by automated bots that I largely had to bootstrap or heavily modify myself. Nearly everything else has to be manually reviewed, which obviously isn’t particularly sustainable in the long term.

    Improving the situation is a complex issue, since these kinds of tools often require a level of secrecy that is incompatible with FOSS principles in order to work effectively. If you publicly publish your model/algorithm for detecting spam, spammers will simply test their content against it and craft posts that evade it. The same problem extends to accessing third-party tools, such as the specialised services Microsoft and Google provide for identifying and reporting CSAM to the authorities. They are generally unwilling to provision those services to small actors, IMO in an attempt to stop producers themselves from testing and manipulating their content until it subverts the tool.
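
The basic copypasta detection Emily describes could look something like the sketch below: normalize each comment so trivially mutated copies hash identically, then flag any fingerprint that recurs too often. This is a minimal illustration, not Lemmy's actual moderation code; the function names and the repeat threshold are assumptions.

```python
import hashlib
import re
from collections import Counter


def normalize(text: str) -> str:
    """Collapse case, punctuation and whitespace so trivially
    mutated copies of the same spam normalize to one string."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)   # drop punctuation
    return re.sub(r"\s+", " ", text).strip()


def fingerprint(text: str) -> str:
    """Stable hash of the normalized comment text."""
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()


def flag_copypasta(comments: list[str], threshold: int = 3) -> set[str]:
    """Return fingerprints seen at least `threshold` times —
    candidates for review or automated removal."""
    counts = Counter(fingerprint(c) for c in comments)
    return {fp for fp, n in counts.items() if n >= threshold}


comments = [
    "Buy cheap meds NOW!",
    "buy  cheap meds now",
    "BUY CHEAP MEDS now!!",
    "hello world",
]
flagged = flag_copypasta(comments, threshold=3)
```

A real bot would keep a rolling window of recent fingerprints rather than a full history, and exact hashing misses paraphrased spam — which is part of why, as noted above, everything else ends up in manual review.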