As the #RedditBlackout accelerates the Fediverse experiment, I feel the urge… the need… to chime in with my two cents.

My summary of the current lay of the land: Beehaw saw a wave of pornography spam and decided to shut Lemmy.world off and defederate from this server. I’m too new to this community to fully understand the wants/needs of each individual server, but I’ve been around the internet long enough to recognize that porn spam is an age-old trolling technique and will occur again in the future. Especially as small, boutique, hobbyist servers pop up and online drama/rivalries increase, online harassment campaigns (like coordinated porn-spam attacks) are simply an inevitability.

Lemmy.world wants open registrations. Beehaw does not: Beehaw wants users to be verified before posting. This is normal: many old /r/ subreddits would simply shadowban all accounts a year old or younger… giving the illusion that all is well to users with 5+ or 10+ year-old accounts, while cutting out the vast majority of spam accounts, which tend to be short-lived. This works for Reddit, which has a huge number of long-lived accounts, but it’s still not a perfect technique: you can pay people in poor countries to create accounts, post on them for a year, and then these now-verified accounts can be sold to spammers to invade various subreddits.

I digress. My main point is that many subreddits, and now Lemmy instances/communities, want a “trusted user”, akin to the 1+ year-old account on Reddit. It’s not a perfect solution by any means, but accounts that have some “weight” to them, that have passed even a crude time-based selection process, are far easier to manage for small moderation teams.
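
To make the idea concrete, here is a rough Python sketch of the kind of time-based check I mean (the one-year cutoff is just a placeholder, not anything Reddit or Lemmy actually does):

    from datetime import datetime, timedelta, timezone

    # Placeholder cutoff: treat accounts younger than a year as untrusted.
    MIN_ACCOUNT_AGE = timedelta(days=365)

    def is_trusted(account_created_at: datetime) -> bool:
        """Crude time-based selection: has the account existed long enough?"""
        # account_created_at is assumed to be a timezone-aware UTC timestamp.
        return datetime.now(timezone.utc) - account_created_at >= MIN_ACCOUNT_AGE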

We don’t have the benefit of time, however, so how do we quickly build trust on the Fediverse? It seems impossible to solve this problem on lemmy.world and Beehaw.org alone. At least, not with our current toolset.

A 3rd Server appears: ImNotAnAsshole.net

But let’s add a 3rd server, which I’ll hypothetically name “ImNotAnAsshole.net”, or INAA.net for short.

INAA.net would be an instance focused on building a userbase that meets the registration requirements of many different instances. This has the following benefits:

  1. Decentralization – Beehaw.org is famously run by only 4 administrators in their spare time. They cannot verify the hundreds of thousands of new users who appear due to #RedditBlackout. INAA.net would allow another team to focus on the verification problem.

  2. Access to both lemmy.world and Beehaw.org with one login – As long as INAA.net remains in the good graces of other servers (i.e., assuming its user-filtering model works), any user who registers on INAA.net will be able to access both lemmy.world and Beehaw.org with one login.

  3. Custom moderation tools – INAA.net could experiment and add features independently of the core github.com/LemmyNet programming team. It is their own instance, after all.

Because of #2, users would be encouraged to join INAA.net, especially if they want access to Beehaw.org. Lemmy.world can remain how it is: low-moderation, less-curated users and communities (a more appropriate staging ground for #RedditBlackout refugees). Beehaw.org works with the INAA.net team on the proper rules for INAA.net to federate with Beehaw.org, and everyone’s happy.

Or is it? I am new to the Fediverse and have missed out on Mastodon.social drama. Hopefully older members of this community can chime in with where my logic has gone awry.

  • Sens@feddit.uk · 1 year ago

    It’s a great idea, but how do you propose verification on the INAA.net site? Using their current instance account details and seeing how many upvotes their comments and posts have received or something? Essentially developing a karma-tracking system that’s separate from Lemmy.

    Like you say, a staging post, but then account elevation that allows access to more sensitive instances? I don’t know how we could guide new users to these staging instances though, unless every popular instance, where most of the content is created, pulls up the drawbridge together and becomes read-only to the staging instances.

    If Beehaw didn’t want to get involved, it’s up to them, but I can see other instances who would want to use the service. Tbh the Beehaw admins do sound like they’re a tad power-mad, so screw them anyway.

    Lemmy needs a governing body, based on a democratic election system, to handle all of this in sync between all instance admins. That body would also be in control of the master blocklists.

    • dragontamer@lemmy.world (OP) · 1 year ago

      “I don’t know how we could guide new users to these staging instances though”

      Read-only access to Beehaw.org, but with a message: “Beehaw.org has a user-treaty with INAA.net. Only users older than 1 week can post to Beehaw.org communities.”

      I admit that these features don’t exist yet. But why not? Let’s first come up with the idea and try to figure out what is easiest to code.
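
      As a purely hypothetical sketch of what that treaty check could look like (none of these names or rules exist in Lemmy today; this is just the shape of the idea):

          from datetime import datetime, timedelta, timezone

          # Hypothetical treaty table: minimum account age before posting is allowed.
          TREATIES = {"beehaw.org": timedelta(weeks=1)}

          def check_post_allowed(target_instance: str, account_created_at: datetime) -> str | None:
              """Return None if posting is allowed, else the message to show the user."""
              min_age = TREATIES.get(target_instance)
              if min_age is None:
                  return None
              if datetime.now(timezone.utc) - account_created_at >= min_age:
                  return None
              return (f"{target_instance} has a user-treaty with INAA.net. "
                      f"Only accounts older than {min_age.days} days can post to {target_instance} communities.")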

      • Sens@feddit.uk · 1 year ago

        If it were natively part of the Lemmy project, the problem is I don’t know Rust. Python, yeah, but I’ve never even attempted Rust myself.

        We need buy-in from the admins of all the most popular instances, with a central authority in place managing it all. It can be done. I’d put my name down to assist; it’s a learning curve for me, but it sounds really fun and would be something to add to my CV.

    • drphungky@lemmy.world · 1 year ago

      I mentioned this in a different comment, but you’d need to do legit identity verification. “Send me a picture of your face and 4 fingers,” since AI can’t do fingers well, or a blanked-out picture of your driver’s license. It would be extremely mod-intensive to get set up, but it would entirely prevent bots. Sure, you’d still get assholes, but banning someone whose actual identity is tied to their account is way more damaging and would get you slightly better behavior.

      • Sens@feddit.uk · 1 year ago

        Being honest, I don’t like the idea of giving personal info, including my face, to an unknown 3rd party that isn’t really bound by any laws or regulations, since it has no audit trail and isn’t a registered corporation.

        Building a time-based karma tracking system seems much better to me and could be automated.

        1. the user submits their account name and which instance they are on
        2. an automated bot scrapes all their comments and posts, then calculates their total karma; on the backend we code in a threshold of karma that must be reached before they become a verified user
        3. the bot also checks how long the account has been live; as an example, let’s say we code in a threshold of 2 weeks
        4. the bot adds the user to the whitelisted-user database
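
        A minimal Python sketch of steps 1–4, assuming the instance exposes the account’s creation date and post/comment scores through a public endpoint like /api/v3/user (the endpoint, field names, and thresholds below are assumptions and would need adjusting to the actual Lemmy API version):

            import sqlite3
            from datetime import datetime, timedelta, timezone

            import requests

            KARMA_THRESHOLD = 50                   # placeholder karma threshold (step 2)
            MIN_ACCOUNT_AGE = timedelta(weeks=2)   # placeholder age threshold (step 3)

            def verify(instance: str, username: str, db: sqlite3.Connection) -> bool:
                # Step 1: the user tells us their name and instance; we look them up there.
                # Assumed endpoint and response shape -- adjust to the instance's API version.
                resp = requests.get(f"https://{instance}/api/v3/user",
                                    params={"username": username}, timeout=30)
                resp.raise_for_status()
                view = resp.json()["person_view"]

                # Step 2: sum post and comment scores into a single karma figure.
                counts = view["counts"]
                karma = counts.get("post_score", 0) + counts.get("comment_score", 0)

                # Step 3: check how long the account has existed.
                created = datetime.fromisoformat(view["person"]["published"].rstrip("Z"))
                created = created.replace(tzinfo=timezone.utc)
                old_enough = datetime.now(timezone.utc) - created >= MIN_ACCOUNT_AGE

                # Step 4: add passing users to the whitelist database.
                if karma >= KARMA_THRESHOLD and old_enough:
                    db.execute("CREATE TABLE IF NOT EXISTS whitelist(handle TEXT PRIMARY KEY)")
                    db.execute("INSERT OR IGNORE INTO whitelist(handle) VALUES (?)",
                               (f"{username}@{instance}",))
                    db.commit()
                    return True
                return False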

        It’s these rules that should be defined by a central authority which has buy-in from the Lemmyverse as a whole.

        In other comments I made, I mentioned that I like the fact that karma doesn’t exist here. But I’ve changed my mind; I can now see its uses. To be honest, a system like this would give us the best of both worlds: it would still have an effect on discouraging karma-whoring but also allow that extra layer of security.

        • drphungky@lemmy.world · 1 year ago

          I totally agree, and I’m not sure I would do it either if I weren’t the one proposing it, but I think late-stage Reddit showed us that time-based account checks don’t really work. Bots, sold accounts, state-backed agitprop, and astroturfed corporate ads were rampant, especially compared to a decade ago.

          All of that is fine for the servers that want that, but some kind of “guaranteed” userbase at least lets other communities limit themselves if they want to. I wouldn’t suggest Lemmy as a whole use it, but it certainly seems like it could be a good addition.

          • Sens@feddit.uk · 1 year ago

            I’ll try to knock something up in Python over the next couple of weeks and will start a GitHub project to share with you. We’d also need to think about database costs; I can’t see it being a massive database, though, since it’s just a list of usernames.

            We would then need to add a module to the Lemmy project (which I can’t do, as I don’t know Rust) so admins have the ability to use our whitelists.
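
            Until something exists inside Lemmy itself, the whitelist could simply be served over HTTP so any admin tooling can poll it; a hypothetical sketch (the route and database names are made up):

                import sqlite3
                from flask import Flask, jsonify

                app = Flask(__name__)
                DB_PATH = "whitelist.db"  # hypothetical database of verified handles

                @app.get("/whitelist")
                def whitelist():
                    # Return the full list of verified user@instance handles as JSON.
                    db = sqlite3.connect(DB_PATH)
                    handles = [row[0] for row in
                               db.execute("SELECT handle FROM whitelist ORDER BY handle")]
                    db.close()
                    return jsonify(handles)

                if __name__ == "__main__":
                    app.run(port=8080)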

            • drphungky@lemmy.world · 1 year ago

              My Python (and all my programming, for that matter) is pretty basic, but I am at least on GitHub and can look at it.

              I’m still not sold on time-based checks, though, and would push for some other kind of stricter verification. If every community starts blocking the big open communities, but a list of people “guaranteed to not be assholes” is always allowed, a lot of people would sign up to be on that list - even if the signup is onerous or invasive. But the nice thing is it doesn’t require anyone to do so - it just becomes a good first choice if, as a user, you want to be guaranteed not to be banned from some section of the Fediverse. And for the other communities, it becomes a really easy list to allow, even if they ban everyone who didn’t create an account on their server. It’s all about incentives.

              • Sens@feddit.uk · 1 year ago

                Sorry, just so I’m clear: I’m proposing karma- and time-based rules together in one, so an account has to pass both before becoming verified. It’s a start anyway; I’m getting to work right now to see what can be done.

                • drphungky@lemmy.world · 1 year ago

                  Sorry, not trying to be a Debbie Downer, just trying to think things through: what stops bot armies from upvoting new accounts and giving them false karma? I’m trying to think about scalability.

                  • Sens@feddit.uk · 1 year ago

                    It doesn’t, but Reddit has that issue too. It’s a balancing act overall between anonymity and safety.

                    That said, because we are so early in Lemmy’s life, the thresholds would have to be set low, otherwise no one would pass, and that does present an issue, yes.

                    Maybe someone else will come along and help suggest something more here.

      • CoderKat@kbin.social · 1 year ago

        So this is beside the point, but I do wanna point out that AI can do fingers these days. Plus, you can just try again if it makes a mistake. Finally, Stable Diffusion with ControlNet lets you basically fit a new image to some existing shape with fine-grained control. You just give it an outline of what the picture should look like and it’ll fill everything in. That is effective at dealing with tricky poses. You can even just take a photo and pass it in.
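
        For example, with the diffusers library a ControlNet run looks roughly like this (the checkpoint names are common public ones; treat the exact IDs and arguments as assumptions that vary by setup):

            import torch
            from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
            from diffusers.utils import load_image

            # Assumed public checkpoints; swap in whichever ControlNet/base model you use.
            controlnet = ControlNetModel.from_pretrained(
                "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
            pipe = StableDiffusionControlNetPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
                torch_dtype=torch.float16).to("cuda")

            # The "outline" is a control image (e.g. Canny edges of a real photo of a hand);
            # the model fills in a new picture that follows that shape.
            control_image = load_image("hand_edges.png")
            result = pipe("a photo of a hand holding up four fingers",
                          image=control_image, num_inference_steps=30).images[0]
            result.save("generated_hand.png")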

        Similarly, photoshopping something like a fake driver’s license has always been easy. It’s not like server admins can verify license barcodes, and security features often don’t survive being photographed anyway.