The ability to block a user and all of their followers would help blunt the worst aspects of a harassment mob. Twitter has a third-party browser add-on that does this, called Twitter Block Chain; Mastodon should have this feature natively. Harassment mobs often have leaders or key people who incite the mob, and disrupting their ability to organize mass harassment is a useful tool.
This would be extremely useful.
My instant reaction is that I see a hell of a lot of potential for friendly fire and would want to have enough "are you sure" warnings to be kind of annoying.
If I were going to flip this switch, I would want to see lists of the places my social graph intersects with the people I'm cutting off:
...and be able to make exceptions if there were people I would prefer to talk to about their following choices before completely cutting them off.
Because names can be triggering, ideally these lists would be in closed-by-default accordions. Something like this:
You are about to block Buttchungler (@[email protected]) and all their followers. Are you sure?
3 people who you follow, and who follow you back, will be affected. [Show Less]
Name Block?
@[email protected] [ ]
@[email protected] [x]
@[email protected] [x]
6 people you follow who don't follow you back will be affected. [Show More]
47 people who follow you and @Poopsausage will be affected. [Show More]
85 people who follow @Poopsausage will be affected. [Show More]
[Maybe not] [Block!]
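For concreteness, those four lists fall out of plain set operations; a minimal sketch, assuming you can already fetch the relevant follower/following sets (the function and set names here are illustrative, not any existing Mastodon API):

```python
# Hypothetical sketch: derive the four "affected users" lists from the
# mockup above using three sets. Pure set math, so it runs as-is; how
# you would actually fetch the sets is out of scope here.

def affected_lists(my_followers, my_following, target_followers):
    # People I follow who follow me back, and who follow the target.
    mutuals = my_following & my_followers & target_followers
    # People I follow who don't follow me back.
    one_way_out = (my_following - my_followers) & target_followers
    # People who follow both me and the target (excluding those above).
    one_way_in = (my_followers - my_following) & target_followers
    # People who only follow the target.
    strangers = target_followers - my_following - my_followers
    return mutuals, one_way_out, one_way_in, strangers
```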
You could have it exempt mutual follows or something; I'd be OK with this.
This seems like a terrible idea. "Guilt by association," especially when the association is something as inconsequential as social-network follows, is bound to produce mostly false positives. Reminds me of the clusterfuck that was ggautoblocker on Twitter.
Why not go a step further and have each block triggered by the chain block itself trigger a chain block? You can block an entire follow graph in one go and be done with it via a simple DFS procedure!
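Spelled out, that reductio really is just a depth-first traversal; a sketch, assuming a hypothetical `followers_of` lookup that returns follower IDs:

```python
# Deliberately absurd: chain-blocking each blocked account's followers
# in turn is a plain depth-first traversal, and on a well-connected
# follow graph it ends up blocking nearly everyone reachable.

def transitive_chain_block(start, followers_of):
    blocked, stack = set(), [start]
    while stack:
        account = stack.pop()
        if account in blocked:
            continue
        blocked.add(account)
        stack.extend(followers_of(account))  # descend into the follow graph
    return blocked
```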
If we don't want quote-tooting or other toxic Twitter happenings, why would you encode one of the worst tools built on top of Twitter as a built-in feature of the platform?
I am literally experiencing a harassment wave right now, and they all follow one or two particular bad actors. I live this experience. I know what works and what doesn't. Sure, it's an extreme measure, but it's a tool for extreme situations.
This is a worthwhile feature, and shouldn't be confused with a tool to export block lists (which can be used for good or ill).
I don't think a user choosing this option needs to be excessively warned that their command will be executed, assuming it is labeled correctly.
If it does what it says on the tin, let the user decide. Maybe a confirmation box with a toggle to "[x] preserve existing relationships, if any."
A lot of people don't understand that victims of harassment know these tools might have false positives, and that we are willing to accept that to stop the even worse situation of having to block large clusters of people manually, which is mentally exhausting, harmful, and time-consuming.
Don't speak for all victims of harassment. A lot of us have been victims of these kinds of tools wielded by powerful people with lots of followers to silence and isolate us and our support networks.
I've been a victim of being on blocklists too; this is not that. This is not a tool to share blocks with people, it's an individual tool for one person to block many. It is literally not the same thing, or even close.
You understand that powerful people still share who they're blocking and why, right? And that their followers consume that message and act on it, right?
Or, maybe you don't and you're not thinking through the threat model of this feature carefully, which seems to be the case.
If by that you mean manually exporting a list of people they have blocked and then someone else manually importing that list, it's possible someone could do so, but it's not automatically shared among many. I've heard of no instances of anyone actually doing this, and I for one would not share an exported list of people I've blocked with anyone for any reason.
I'm at a bit of a loss here, because any time someone proposes serious anti-harassment tools they get shouted down, while the people doing the shouting down have no ideas of their own for tools they would accept.
So please, what alternatives do you have for blocking large social clusters of people who engage in harassment but are not all on the same instance?
I apologise that I don't have a magic wand to wave for you, but that doesn't disqualify me from speaking out against a dangerous request, nor should it.
So you have zero solutions. Perfect is the enemy of good.
Something must be done. This is something. Therefore this must be done.
The perfect is only the enemy of the _good_. This is not the good.
I've been a "Victim of Harassment" (including stalking that bled into real life) and this is still a stupid idea.
Implementation thought: how much hassle is it to get the list of @[email protected]'s followers if @[email protected] or butts.town is suspended by your instance?
Could people maybe respond to the points Laurelai has actually raised? Blockchain tools have proven useful on Twitter, and the same social mechanics are at play here. Blockchain tools are not BlockTogether lists. Bad actors will abuse any tool (though in this case, the abuse avenue has not been clearly explained by anyone in this thread), but targets of abuse still need tools that actually work to stem abusive activities.
Thumbs down without explaining what you mean? How is that supposed to help?
I'm working on it, please give me a minute to finish writing up my comment. I'm trying to be as detailed as I can within reason.
Cool, thanks for thumbs-down'ing my comment for no reason then.
Look, I've already responded to those points raised. To go back:
> I for one would not share an exported list of people I've blocked with anyone for any reason.
You don't have to share an exported list, you have to share the biggest fish in a small pond of people you want to isolate. If I have thousands of followers and say "X is harassing me" and everybody following me chain blocks X and their followers, even not accounting for the boosting effect, whoever becomes Mastodon's wilw can still harm a lot of people really casually, and a sufficiently popular bad actor can do even worse.
Federation comes with a lot of shit, and yes, making this problem more difficult to solve is one of them, but this is _easy to misappropriate as a weapon_. This is the "fake news" of Mastodon, something that looks like a good tool and can, with much less fanfare than blocktogether lists, be used against people.
On Twitter, most big actors have a single identity that you have to manage expectations against, so if you have a bad actor you know them everywhere, but _federation breaks that immediately_. This would hurt people badly, and I don't have to have a solution at the ready to see that or comment on it.
What happened with Wil (why does it always come back to this guy, why) was that he had an established blocktogether list with lots of subscribers, and started adding people to it that shouldn't have been added. He abused his social position to cause harm to massive amounts of people.
This proposal is literally the opposite of that: Giving people who are the targets of abuse the ability to stem that abuse at the source.
Like, do you have any idea how many people post on the fediverse telling others to block me? A lot. I know people could and will use it on me too. I don't care. It's better than having to block hundreds of people by hand across many instances.
It looks like Laurelai is experiencing the haters here too, considering the number of dislikes, going and thumbs-downing all of her posts. It shows we really need this feature. I shouldn't be forced to interact with Buttnugget's followers knowing they'll dogpile me after I block him.
We have a great test bed, though! Twitter already has a BlockChain tool. You should have no problem finding examples of bad actors using the BlockChain tool to cause massive harm to marginalized people, rather than, you know, the exact opposite of that.
If you're not going to read what I've raised about how easy it is for somebody to inspire their followers and their followers' followers to effect the same damn thing as a BT list then I don't see the point of this discussion.
I have no problem finding examples on Twitter because it _literally happened to me_ but given how eager y'all are to talk over me I can definitely see telling you who I am on Twitter going over _very well_.
I look forward to seeing who forks Mastodon if this request actually goes through to see who prioritizes what.
Also super glad to see that "being concerned about my own safety and community" is now "hating." I thumbs-down posts that have bad-faith arguments in them.
The mechanics of this are different than a BlockTogether list, for some just basic reasons. I see the avenue you're discussing, and agree that's a potential risk factor. I don't agree it's a significant-enough risk to not have this feature available, though.
BlockTogether lists work quietly once you subscribe to one, by design. Take the TerfBlocker list getting taken over by TERFs, which suddenly left lots of trans women subscribed to a "bad actor" list and disrupted their desired social networks.
A single bad actor (even a very popular one) saying "Hey my followers: blockchain Nancy" is not the same scope.
(also the joke is of course this proposal won't be upstreamed into tootsuite because it would actually be effective at stopping abusive actors)
For some additional context: That accusation comes directly from the people who organize abuse campaigns using disinformation.
Also, good job joining GitHub expressly to help throw out those baseless accusations.
You would think she would want the feature too, to block me and all of my followers as well, as that's what would be best for all parties involved.
It would be nice if this comment thread could focus on the feature request.
A request to let a user mass block people efficiently.
If you look at the readme for the ggautoblocker (that's the algorithmic one, not the curated one wilw infamously used), it points out that Twitter's blocking tools were effective against one user, but "useless against a large number of accounts targeting a single user."
https://github.com/freebsdgirl/ggautoblocker/blob/master/README.md
Mastodon has a similar shortcoming. When the harassment is distributed, but being led by a small number of ringleaders, the existing tools are insufficient.
The only thing that distinguishes the requested feature from the ggautoblocker is that ggautoblocker tested for followers of >1 known bad actors, to reduce the number of false positives.
Maybe there's an argument that it's more cautious to do that sort of cross-check, but we're not talking about a third-party tool that people subscribe to, just an action a user might choose to perform for themselves when they are targeted.
I don't think the same safeguards are needed or appropriate; assuming that users need to be protected from their own blocking decisions is paternalistic. If a user wants to block 99% of the fediverse, is that anyone's business but their own?
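To make that cross-check concrete: something like the following (my reading of the ggautoblocker heuristic, not its actual code) would only flag accounts that follow at least two known bad actors:

```python
from collections import Counter

# Sketch of a ggautoblocker-style cross-check: count how many known
# bad actors each account follows, and only flag accounts above a
# threshold. This trims away the follows-exactly-one false positives.

def cross_checked_blocks(follower_sets, threshold=2):
    counts = Counter()
    for followers in follower_sets:  # one set per known bad actor
        counts.update(followers)
    return {acct for acct, n in counts.items() if n >= threshold}
```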
Look, the attack vector being less automatic makes it less severe, but it doesn't make it insignificant.
> If a user wants to block 99% of the fediverse, is that anyone's business but their own?
Why doesn't this argument apply to autoblocker or blocktogether type tools?
I'm more sympathetic after reading the whole thread than I was when I first chimed in but I still have concerns.
Because autoblocker and blocktogether are one person deciding the blocks for many people, instead of one person deciding their own blocks.
If people want to cede their blocking decisions to someone else, why is that anyone's business but their own?
That's completely unrelated to the discussion?
It apparently isn't because people keep positioning this either alongside or in opposition to these kinds of tools.
This proposal is about User A making a decision to mass block users, not about User A subscribing to User B's block list.
That is why it's not related. Just because something is mentioned in a discussion doesn't make it related to the proposal.
For some context on this request: Laurelai Bailey is a known rapist and abuser, with several public accusations against her.
She would now like to evade any hint of responsibility by blocking everyone vaguely in line of sight of these accusations who doesn't already believe her innocent.
Cool. Cool cool cool.
Way to go, aiding nazi stalkers from Kiwifarms on a post about a useful blocking feature.
I would like to see this feature implemented.
Just make it opt-in for server admins and deactivated by default, if you are so afraid of people misunderstanding the function.
But giving people more control over what they receive and see in their notifications and timeline is always a good idea.
Edit: ><((((*>
> Just make it opt-in for server admins and deactivated by default, if you are so afraid of people misunderstanding the function.
> But giving people more control over what they receive and see in their notifications and timeline is always a good idea.
I'd be cool with this.
As a person who's experienced harassment campaigns on other social media platforms, a tool that would allow me to block a user plus whoever follows them would be extremely useful.
Is there any challenge in implementing such a feature when the user you'd block has chosen not to make their follower list public?
I would like a context menu on a toot saying: block everybody who liked this toot. This information is already available.
As for why I think this is important: when the mob is after you, it is important to not only block harassers but to also block future harassers: people who like the harassment, for example. What we need is the ability to block faster than mobs grow. It's about rate of growth: blocks vs. mob.
I want this for the API as well so that external tools are easier to write.
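A minimal sketch of what the like-based version could look like against the existing REST API. The two endpoints are real Mastodon v1 routes; the instance URL, token, and single-page handling are simplifying assumptions:

```python
import requests

INSTANCE = "https://example.social"  # assumption: your home instance
TOKEN = "..."  # assumption: an OAuth token with the write:blocks scope
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def block_favouriters(status_id):
    # Who favourited the toot? (Real code would follow the Link header
    # for pagination; this sketch only handles the first page.)
    r = requests.get(
        f"{INSTANCE}/api/v1/statuses/{status_id}/favourited_by",
        headers=HEADERS)
    r.raise_for_status()
    # Block each of them.
    for account in r.json():
        requests.post(
            f"{INSTANCE}/api/v1/accounts/{account['id']}/block",
            headers=HEADERS).raise_for_status()
```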
> I would like a context menu on a toot saying: block everybody who liked this toot. This information is already available.
>
> As for why I think this is important: when the mob is after you, it is important to not only block harassers but to also block _future_ harassers: people who like the harassment, for example. What we need is the ability to block faster than mobs grow. It's about rate of growth: blocks vs. mob.
>
> I want this for the API as well so that external tools are easier to write.
This would be a very useful tool.
Mastodon has "Hide my network" to prevent this kind of behaviour.
Mastodon also already has a "Block notifications from any person I don't follow" option. Just use it.
Those features do not do enough to mitigate an attack like this.
The famous chainblock made many people on the bird site wonder "Why am I blocked?" (and frustrated them). I was one of them, for following some aggressive people. But following does not mean "I definitely agree with all of their thoughts," does it? So I think "friendly fire" could be a lot more common than users of the chainblock feature realize. Some even call the feature "chain poo" because it encourages hate culture.
But I agree that Mastodon needs efficient anti-harassment tools, and it would be good to have a feature something like chainblock (for those suffering really bad attacks), but it should be optional, defaulting to off (I agree with @ClundXIII).
It would be great for this to be an optional feature, or a third-party one, if implemented.
Anyway, if it is implemented, I'll hide my network to protect my followers from friendly fire.
I know a few people from Twitter who follow and retweet accounts with completely different opinions, mostly politicians from the Middle East (most have a "retweet =/= endorsement" note in their bio).
While I have not seen such a thing on the fediverse yet, it might make more sense to go in the "block all who liked this post" direction.
Having the follower-chainblock feature work with hidden follower lists would of course break the privacy of hidden follower lists. One more reason to go with the like-based feature instead, since it will be WAY more effective.
It will also slightly discourage people from liking posts that include harassment, which is a nice side effect. Less "like-fame" for the harasser.
"Block everyone who boosted this" might also be effective wrt mobs like "block everyone who liked this."
I am a firm believer that anti-harassment tools should not wait for first-party support, so I wrote a tool that (I hope) achieves similar goals.
Mastodon De-Mob allows users to block everyone who boosted or favorited a harassing toot. It also reports the harassing toot to the user's moderators. My hope is that the reporting feature will encourage people to use this only for toots that are genuinely encouraging harassment (as opposed to for guilt-by-association tactics). This may not be 100% effective, especially if instance moderators don't push back on use for guilt-by-association, but I hope it at least goes some way towards mitigating the concerns raised upthread.
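The reporting step described above maps onto a real endpoint too. A sketch (this is not De-Mob's actual source, and the instance/token values are placeholders):

```python
import requests

INSTANCE = "https://example.social"  # assumption: your home instance
TOKEN = "..."  # assumption: an OAuth token with the write:reports scope
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def report_toot(account_id, status_id, comment):
    # POST /api/v1/reports files a report with your own instance's
    # moderators, attaching the offending status for context.
    requests.post(
        f"{INSTANCE}/api/v1/reports",
        headers=HEADERS,
        data={"account_id": account_id,
              "status_ids[]": status_id,
              "comment": comment},
    ).raise_for_status()
```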
Mastodon Blocker is a command-line tool that does the same thing (but with no reporting to the admins). When Wil Wheaton was kicked off mastodon.cloud, I remember the reason being that the admin had received 60 reports overnight.
Honestly, there's nothing that compels an admin to act on a report in a specific way. If someone reports one of my users, whom I want there, 60 times, I'm going to ignore 60 reports.
@kensanata Yes, Mastodon De-Mob was inspired by Mastodon Blocker; I should have mentioned that in my original post in this thread. (I did mention it on Mastodon, but neglected to do so here. Sorry about that!)
As far as the "report to moderators" feature/issue, I agree that too many reports can be a vector for harassment too. I put a fair bit of thought into not making that problem worse; here are the ways I avoid it:
I hope that, with these mitigations, Mastodon De-Mob won't make the problem of frivolous reports worse. And, as I explain in the README, I think the report plays some role in preventing frivolous uses of this tool; it's all a balancing act.
I think the real way to deal with frivolous reports is with better tooling on the moderation front. @Gargron has said that Mastodon will get a moderation API in the next release, and I definitely plan to see what tools I can make to address that issue when the API is out.
If any version of this were to be implemented, would you expect it to be static, blocking people based on the follow/boost/etc graph at this instant, or dynamic, so that new follows or boosts also get blocked, and un-follows get unblocked?
Honestly? I think the heat death of the universe will come before any real anti-harassment tools, so I suppose it doesn't matter.