• RoquetteQueen@sh.itjust.works · 21 points · 1 year ago

    I don’t know what I’m doing differently but I really haven’t experienced the things everyone complains about. It’s been fine overall. Not a ton to see but that just means I don’t waste so much time.

  • PrettyFlyForAFatGuy@lemmy.ml · 18 points · 1 year ago (edited)

    i’ve said it before and i’ll say it again. give me a spec and i’ll (try to) write you a tool.

    i’m a competent coder, but i have no idea what kind of mod tools are needed.

    • 7heo@lemmy.ml · 11 points · 1 year ago (edited)

      For starters, a way to unban people would be nice. Then a way to easily see only the new content for their community (new posts, new comments, etc.), hiding anything already marked as “reviewed” (except as context for unreviewed content, when unfolded). With alerts, including a sudden-activity alert.

      A way to match keywords, and bring up matching posts and comments.

      Metrics about each user’s contributions to the community: are they new or seasoned? Did they contribute mostly popular or unpopular content? What words do they use most? Etc.

      Compiling multiple reports for a single post/comment into one. Ignoring reports from select users.

      That’s all I can think of for now.

      But, essentially, a dashboard with live content, showing “old” content as “greyed out”, and relevant actions, would be really, really useful.

      Edit: additionally, automated actions would be great. Answering posts/comments matching regexes with templates populated with the user’s information; automatically removing, issuing warnings, and banning (outright or after n warnings) people for specific terms, etc.

      It would also really help to have automation workflows (e.g. user commented with “r-word” or “n-word”, autocomment a warning, wait X minutes/hours, or Y minutes/hours after user comments again, remove comment/ban).

      This automation could come as an additional tool, to be run under a separate account.
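      The escalation workflow described above could be sketched roughly like this (Python for illustration; the rule format, names, and thresholds are all made up, and a real tool would act through the Lemmy API under its own bot account):

```python
import re
from dataclasses import dataclass, field

@dataclass
class Rule:
    pattern: re.Pattern      # regex matched against each comment
    reply_template: str      # warning text, populated with user info
    max_warnings: int = 3    # ban once a user reaches this many strikes

@dataclass
class AutoMod:
    rules: list
    warnings: dict = field(default_factory=dict)  # username -> strike count

    def handle_comment(self, username: str, text: str):
        """Decide what the bot account should do about one comment."""
        for rule in self.rules:
            if rule.pattern.search(text):
                strikes = self.warnings.get(username, 0) + 1
                self.warnings[username] = strikes
                if strikes >= rule.max_warnings:
                    return ("ban", username)
                return ("warn", rule.reply_template.format(user=username, n=strikes))
        return ("ok", None)

# One rule: warn twice on a matched term, ban on the third strike
mod = AutoMod(rules=[Rule(re.compile(r"\bslur\b", re.I),
                          "@{user}: rule 1 violation (warning {n})")])
```

      The “wait X minutes/hours” part would sit outside this, in whatever scheduler drives the bot.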

    • Terevos@lemm.ee · 4 points · 1 year ago (edited)

      1. Report queue. Right now, reports go to a single queue that both instance owners and mods use. This makes it impossible to mod, because the instance owners mark items as completed before the mods have even had a chance to look at them.

      Now, if it’s the case where it’s user abuse it’s fine for the instance owner to take care of it.

      But if it’s just breaking the rule of a community, the instance owner should never even see it.

      Separating the queues would help both mods and instance owners.

      2. The ability to hide a community from All and/or Local. Some communities just aren’t appealing to the general public. And when All surfers see posts, they just downvote them into oblivion.
      • PrettyFlyForAFatGuy@lemmy.ml · 2 points · 1 year ago

        these could be a little more difficult. they seem to be instance-level features.

        i might be able to do a tool for the first one using filters if there is a way to insert keywords into a report e.g. “To Mods” or “To Admins”
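        As a sketch of that keyword idea, assuming the report reason is free text and such a tag convention were adopted (function and queue names here are invented):

```python
def route_report(reason: str) -> str:
    """Route a report based on an explicit keyword prefix in its reason text."""
    tag = reason.split(":", 1)[0].strip().lower()
    if tag == "to admins":
        return "admin_queue"
    if tag == "to mods":
        return "mod_queue"
    return "shared_queue"  # untagged reports keep today's shared behaviour
```

        Untagged reports fall through to the current shared queue, so the convention would be opt-in.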

  • stanka@lemmy.ml · 15 points · 1 year ago

    I escaped ads and a dictatorship only to come here and be told how great communism is with an even greater frequency.

    Blocking hexbear communities just led to those users going to other instances and making the whack-a-mole more difficult.

    • TomJoad@lemmy.tf · 19 points · 1 year ago

      The ultimate method is:

      Cultivate your own ‘Subscribed’ feed.

      Then almost every post is good.

      You choose your own level of involvement.

      • Grammaton Cleric@lemmy.world · 5 points · 1 year ago

        Just because I’m interested in the category doesn’t mean every post will be good. Classification doesn’t guarantee quality.

    • Blaze@discuss.tchncs.de · 11 points · 1 year ago

      Yeah my local is just as trash as the all though?

      I honestly mostly stick to subscribed.

      Once a day I check all with “Top of the day”

      For emerging communities I use !trendingcommunities@feddit.nl (which just moved today to !trendingcommunities@endlesstalk.org).

      > I’m just about ready to give up.

      Don’t force yourself if you don’t feel like it. Lemmy still has a lot of rough edges; hopefully it will get better over time, but at the moment it takes some commitment to use it as a Reddit replacement.

    • Teppic@kbin.social · 5 points · 1 year ago

      Have you tried kbin? Same content, in that it’s Lemmy-compatible, but a slightly different sorting algorithm which (in my view) seems to result in a more rounded/balanced set of posts being promoted.
      Yes, there is a different set of issues: it’s earlier in its development phase, but developing fast. Collapsible comments are being worked on, an API (and therefore third-party apps) is imminent, and many other improvements are expected to go live this month…

        • Teppic@kbin.social · 3 points · 1 year ago (edited)

          Circa 60,000 active users, but whatever…

          You are rather missing my point. Because it sorts on boosts rather than upvotes it surfaces different things in the federated ‘all’ feed.

          Edit: As corrected below it’s about 10k monthly active users, but that’s still circa 10% of the whole threadiverse (kbin + Lemmy) and only Lemmy.world is larger than kbin.social

        • Excel@lemmy.megumin.org · 1 point · 1 year ago

          It’s federated, so the local user count is completely irrelevant.

          Especially when OP even specifically said that you would see the SAME content, just with different sorting.

  • bahmanm@lemmy.ml · 9 points · 1 year ago

    Interesting topic - I’ve seen it surface up a few times recently.

    I’ve never been a mod anywhere so I can’t accurately think what workflows/tools a mod needs to be satisfied w/ their, well, mod’ing.

    For the sake of my education at least, can you elaborate on what you consider decent moderation tools/workflows? What gaps do you see between that and Lemmy?

    PS: I genuinely want to understand this topic better but your post doesn’t provide any details. 😅

      • bahmanm@lemmy.ml · 2 points · 1 year ago

        I see.

        So what do you think would help w/ this particular challenge? What kinds of tools/facilities would help counter that?


        Off the top of my head, do you think

        • The sign up process should be more rigorous?
        • The first couple of posts/comments by new users should be verified by the mods?
        • Mods should be notified of posts/comments w/ poor score?

        cc @PrettyFlyForAFatGuy@lemmy.ml

        • PrettyFlyForAFatGuy@lemmy.ml · 1 point · 1 year ago

          I can think of some things I could implement on the Lemmy server side that could help with this. I’m pretty sure the IWF maintains a list of file hashes for CSAM, and there are probably a few other hash sources you could draw from too.

          so the process would be something like the following

          • create a local DB for the CSAM hash list and update it periodically (e.g. once a day)
          • compare each upload’s hash against the list of known harmful material (I would be very surprised if hashes for uploads are not already created)
          • if a match is found, reject the upload and automatically permaban the user, then, if feasible, automatically report as much information as possible about the user to law enforcement

          so for known CSAM you don’t have to subject mods or users to it before it gets pulled.

          for new/edited media with unrecognised hashes that does contain CSAM, a mod/admin would have to review and flag it, at which point the same permaban and law-enforcement report could be triggered automatically.

          The federation aspect could be trickier though, which is why this would probably be better as an embedded Lemmy feature rather than a third-party add-on.

          I’m guessing it would be possible to create an automoderator that does all this on the community level and only approves the post to go live once it has passed checks.
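          The hash-comparison step might look something like this (Python for illustration, names invented; note that a plain cryptographic hash only catches byte-identical files, which is why production systems use perceptual hashes such as PhotoDNA):

```python
import hashlib

def check_upload(data: bytes, known_hashes: set) -> str:
    """Reject an upload whose SHA-256 digest is on the blocklist."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_hashes:
        # in the full design: permaban the uploader and, if feasible,
        # file an automated report to law enforcement
        return "reject_and_ban"
    return "accept"

# the blocklist would be refreshed daily from the external hash source
blocklist = {hashlib.sha256(b"known-bad-file").hexdigest()}
```

          Running the check before the media is stored or federated is what keeps mods and users from ever seeing the material.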

          • bahmanm@lemmy.ml · 1 point · 1 year ago

            That sounds like a great starting point!

            🗣Thinking out loud here…

            Say, if a crate implements the AutomatedContentFlagger interface, it would show up on the admin page as an “Automated Filter” and the admin could dis/enable it on demand. That way we could have more filters than just CSAM using the same interface.
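            Sketched in Python rather than Rust for brevity, such an interface might look like this (everything except the AutomatedContentFlagger name is invented):

```python
from abc import ABC, abstractmethod

class AutomatedContentFlagger(ABC):
    """Plug-in filter an admin could enable or disable on demand."""

    @property
    @abstractmethod
    def name(self) -> str:
        """Label shown on the admin page."""

    @abstractmethod
    def flag(self, content: str) -> bool:
        """Return True if the content should be blocked."""

class KeywordFlagger(AutomatedContentFlagger):
    """Toy implementation: block content containing a listed keyword."""

    def __init__(self, keywords):
        self.keywords = [k.lower() for k in keywords]

    @property
    def name(self) -> str:
        return "keyword filter"

    def flag(self, content: str) -> bool:
        text = content.lower()
        return any(k in text for k in self.keywords)

def run_filters(content: str, enabled: list) -> list:
    """Return the names of the enabled filters that flagged the content."""
    return [f.name for f in enabled if f.flag(content)]
```

            The CSAM hash check would just be another implementation of the same interface, so the admin page can list every filter uniformly.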

  • clearedtoland@lemmy.world · 2 points · 1 year ago

    Home feed for the win. Granted I should probably migrate to another instance (again). I can only block so many anime and cat communities…

  • olizet@lemmy.works · 2 points · 1 year ago

    I have built my own instance and federate with any instance that has interesting communities. No problems here.