Add support for something close to BlockTogether.org:
Users can opt in to share their block and mute lists with the public or with their mutuals. Other users can then subscribe to these block lists. Accounts mentioned on these lists will be blocked unless already followed.
A more detailed specification in Gherkin format covers how blocks are added or removed depending on subscriptions, along with new preferences and menu options.
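To make the intended rule concrete, here is a minimal sketch of the subscription logic described above (the types and function are hypothetical illustrations, not part of the Gherkin specification):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical, minimal model of the relationships that matter here."""
    following: set = field(default_factory=set)  # handles this user follows
    blocked: set = field(default_factory=set)    # handles this user blocks

def apply_subscribed_block(subscriber: Account, handle: str) -> bool:
    """Apply one entry from a subscribed block list.

    Mirrors the rule stated above: accounts mentioned on a subscribed list
    are blocked unless the subscriber already follows them.
    """
    if handle in subscriber.following:
        return False  # already followed: exempt from subscribed blocks
    if handle in subscriber.blocked:
        return False  # nothing to do, an explicit block already exists
    subscriber.blocked.add(handle)
    return True
```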
While some Mastodon instances properly enforce their codes of conduct, some don't, or set different expectations. Typically, some cisgender men feel entitled to make unsolicited comments on selfies from women they don't know. While they don't feel it's an issue because they don't do it often, women and other minorities can have an online experience on Mastodon resembling street harassment: it's not one person doing a bad thing that makes it terrible, it's the never-ending stream of entitled randos.
Having a way to share the burden of removing the harassers between multiple targets would not make the situation ideal, but it would make it much more bearable.
From looking at previous issues, I feel this is different from, and much simpler than, #1092: the BlockTogether.org model of trusting someone to block people for you has proven easy to understand and set up.
The current specification does not consider #8680, i.e. the GGAutoBlocker behavior of blocking the followers of an account. Being able to share the work of blocking a mob should already help.
The specific use case described earlier might be better solved by being able to specify who should be allowed to reply to a given toot (#8565).
The specification could be extended to solve #5521, the ability for an instance moderator to trust other instance moderators, but it felt like that could be implemented at a later time.
Shared block lists could help instance moderators monitor the most-blocked users or instances (#5676). While this would require some extra work, I feel it would be less intrusive for moderators to only get stats from accounts willingly sharing their block list.
After a quick look, it seems Mastodon already exposes all the APIs required to port the code of BlockTogether.org, but to me, the ability to subscribe to a block list directly from the profile view would make the experience much better.
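For reference, the client API pieces such a tool would lean on already exist: `GET /api/v1/blocks` lists the authenticated account's blocks, `GET /api/v1/accounts/search` resolves a handle to an account id, and `POST /api/v1/accounts/:id/block` creates a block. A rough, hypothetical sketch of a BlockTogether-style sync, assuming the publisher exposes their list as plain text, one handle per line, at some URL (the instance URL, token, and list format are all assumptions):

```python
import requests

INSTANCE = "https://mastodon.example"      # hypothetical instance
HEADERS = {"Authorization": "Bearer ..."}  # token with read + write:blocks scopes

def fetch_shared_list(list_url):
    """Assumed format: one 'user@domain' handle per line of plain text."""
    text = requests.get(list_url).text
    return [line.strip() for line in text.splitlines() if line.strip()]

def lookup_account_id(acct):
    """Resolve a handle to an account id; resolve=true triggers a WebFinger lookup."""
    r = requests.get(f"{INSTANCE}/api/v1/accounts/search",
                     params={"q": acct, "resolve": "true", "limit": 1},
                     headers=HEADERS)
    results = r.json()
    return results[0]["id"] if results else None

def sync_blocks(list_url):
    """Block every account on the shared list (no 'unless followed' exemption here)."""
    for acct in fetch_shared_list(list_url):
        account_id = lookup_account_id(acct)
        if account_id:
            requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/block",
                          headers=HEADERS)
```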
ActivityPub-wise, that could simply be `Collection`s, but there are things to figure out:
Also, this proposal only deals with account blocks, while there are also instance blocks/mutes. How should these be handled? They also add a layer of complexity regarding whitelisting (so far, it is not possible to whitelist an account from a domain you have blocked/silenced).
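To illustrate the `Collection` idea, here is a purely hypothetical example of what a shared block list might look like on the wire; `OrderedCollection` is a standard ActivityStreams type, but publishing one for blocks, and the URLs shown, are invented for illustration:

```python
# Purely illustrative: an ActivityStreams OrderedCollection used as a shared
# block list. The Collection type is standard; using it this way is not (yet).
shared_block_list = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://social.example/users/alice/blocklist",  # hypothetical URL
    "type": "OrderedCollection",
    "summary": "Alice's shared block list",
    "totalItems": 2,
    "orderedItems": [
        "https://bad.example/users/troll1",
        "https://bad.example/users/troll2",
    ],
}
```

Instance-level blocks and whitelisting exceptions would presumably need additional vocabulary (separate collections, or typed items), which is part of what remains to be figured out.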
I think this might be a duplicate of Support for subscribing to communal block lists #116? Encouragingly, Gargron has already said he agrees. :)
Also relevant: #10096
My main issue with any proposal for shared actions is how it affects propagation. If propagation is automatic and infinite, then a sufficiently-connected graph will inevitably result in everyone blocking everyone else by default. So the usual warnings apply for any web-of-trust-like solution: it is not enough to flatten the chain, you need to keep track of distance as well. Otherwise you just end up repeating the problem of BlockTogether, where people on a previous level wield outsized, disproportionate power in blacklisting certain users (e.g.: randi harper's transphobic personal list being imported by wil wheaton, whose list may be imported by any number of other lists, and so on -- any downstream list is in effect poisoned by its dependency chain).
The other issue is one of transparency and accountability. In the current ecosystem, blocks are decided by users and admins based on reports, and thus the reports can serve as documentation of why a certain block was added. So I would be much more comfortable if any such system would instead forward the `Flag` activities rather than simply a list of users without context. This would also combine well with the request to add a reason why a certain account was blocked or muted, e.g. #602 for users and #7122 for admins. Forwarding the reason would allow people to agree or disagree with certain judgements, and it would provide self-documentation for moderation purposes.
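For context, `Flag` is the ActivityStreams activity already used to federate reports; a rough, illustrative sketch of its shape (the field usage here is an approximation, not a spec), where the `content` field is what would carry the reason alongside any forwarded block:

```python
# Illustrative only: roughly the shape of a federated report (Flag activity).
# Forwarding something like this, rather than a bare account list, preserves
# the context of why an account was flagged.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://social.example/reports/1234",  # hypothetical id
    "type": "Flag",
    "actor": "https://social.example/actor",
    "content": "Repeated unsolicited replies after being asked to stop",
    "object": [
        "https://bad.example/users/troll1",
        "https://bad.example/users/troll1/statuses/42",
    ],
}
```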
Add support for something close to BlockTogether.org
BlockTogether.org uses the Twitter API. Mastodon exposes similar APIs for managing blocks and mutes. So support for something close to BlockTogether.org is already here; a tool like that simply needs to be created by someone. It might even already exist but not be widely known...
re: something "close to blocktogether", i would actually consider blocktogether a bad implementation because it does not really solve its stated problem, and it instead exacerbates different ones. a proper solution should pre-empt the proliferation of a bad solution.
My main issue with any proposal for shared actions is how it affects propagation. If propagation is automatic and infinite, then a sufficiently-connected graph will inevitably result in everyone blocking everyone else by default.
My proposal had no propagation at all. I had a scenario for this in sharing.feature:
```gherkin
Scenario: Subscribed blocks do not propagate to my own shared list
  Given I am not blocking [email protected]
  When [email protected] adds [email protected] to her block list
  Then [email protected] does not appear on my shared block list
```
I would like to be able to maintain block lists that I can subscribe to and that others can subscribe to, each with a particular name and purpose.
So instead of it saying "subscribe and automatically block the people that Cassolotl blocks", it might say "subscribe to this blocklist, maintained by Cassolotl. This blocklist only contains trans-exclusionary radical feminists." Or something.
Ideally there would be a private UI where I could type a little note to say why they're on the blocklist, like "said that trans women are men in dresses" or something, or links to particular posts of theirs that made them block-worthy, so that if they ask to be unblocked or if someone asks why they are blocked I can refer to it.
Edit: Looks like #116 is basically what I'm describing here!
Block lists are exploitable and create perverse incentives.
Block lists are exploitable and create perverse incentives.
which imo is why a less-exploitable alternative should pre-empt the widespread proliferation of naive blocklists. trust grants power. naive trust leads to abuse of power. any power should be auditable and should not supersede the user's own authority.
You should have no blocklists. They are either well-meaning but harmful, or a tool of power for the malicious. There's no right way to do one.
You should have no blocklists.
in the absence of blocklists, people will simply create their own social patterns for disseminating blocks. "cw: recommended block, this person is a fascist / rapist / pedophile / etc" is already a thing that happens. luckily people can try to verify for themselves by browsing their profile, but yes, it shouldn't be automatic by default. again, ultimate authority should rest with the user in what they accept/reject for themselves.
A perfect example of ad-hoc organization for blocklists is the "blocked domain lists" in the git repos of many instances. My instance publishes ours, as do many others (which I've borrowed some entries from as well).
Users subscribing to individual blocklists of their choosing would grant more power to the users, which is hardly ever a bad thing. It's not like we're enabling users to delete other users' accounts; they would only be silencing them for themselves.
Every communal blocklist on Twitter or Mastodon so far has ended up being weaponized by bigots, abusers or other bad actors (ggautoblocker being run by a TERF is a good Twitter example). I know this type of feature may seem like a useful tool for protecting women and minorities from harassment at first glance, but in practice shared blocklists are actually used to isolate, harass and abuse the most vulnerable.
As such I think any official support for such a feature would set a very bad precedent.
Sorry but I'm really confused. If users are free to subscribe to block lists from people they trust, can review the block list contents, can override a single block they believe is misplaced, and can easily unsubscribe in case they stop trusting the source, how could it be weaponized?
What I had in mind was people subscribing to their friends' block lists, so that they could eventually discuss each other's decisions, but not to those from random people or central authorities. That might be why I fail to imagine how this would turn bad.
Because people with the most social capital will become de facto central authorities who develop blocklists big enough that people cannot audit them with ease. What you had in mind is not how people behave.
The problem is that, historically, people do not discuss but generally blindly import the blocks.
That's been the problem with blocktogether lists, as well as 'community' shared blocklists such as dzuk's instance blocklist.
This leads to all sorts of meta drama about the contents of these blocklists, which this feature will amplify. We can go all the way back to the 1980s on Usenet with shared killfiles to see that it usually plays out this way: the feature gets abused in such a way that it leads to fragmentation and factions.
Reading the chapter on Usenet in the UNIX-HATERS Handbook may be insightful for anyone designing moderation systems, so as to avoid repeating history with new implementations of bad solutions.
Beyond this, the whole pitch for the fediverse is that you have a nice neighborhood your account lives in (instances), and the HOA (admins) keeps your neighborhood up to standards. Implementing a feature like this effectively absolves the HOA (admins) of any responsibility to do their job, since their job is replaced with vigilante justice.
@Laurelai You seem to have strong opinions about block lists based on first-hand experience. Would you have any suggestion on how to solve the fact that friends are leaving Mastodon because they are sick of getting annoying randos popping up in their mentions? I could imagine them telling their admins to block instances where such behavior is considered normal, but that means they won't be able to interact with people on these instances who know how to respect boundaries. #8911 was about being able to override instance blocks, but you also considered this a bad idea.
So what does that leave? As I wrote in the initial report, I believe #8565 might help, but I'm not sure it'll be good enough.
I wrote a thread on fedi about some ways to mitigate the unsolicited interaction problem (randos showing up in mentions); a basic solution is to allow users to control which interactions are presented to them in notifications:
If you select the latter two levels, you are much less likely to have your time wasted by attention vampires.
This is a real solution to the problem, one that I am working on implementing elsewhere in the fediverse. It would be nice to see Mastodon join us instead of pursuing the 2019 equivalent of shared killfiles.
Also, an "interactions with your instance only" option would be a nice one to have too.
But beyond those things, having an attentive admin and users willing to use the report feature is the best solution. If they aren't willing to report bad behavior to be dealt with, then they probably shouldn't use social media.
Like, one of the big problems with for-profit social media is that reports get ignored or handled badly. That's something Mastodon is actually pretty good at, compared to, say, Twitter. Mods seem for the most part to actually care about their users. So that means the users have to care enough about the community to report bad actors. Admins aren't psychic, and it's unreasonable to demand we know in advance when someone is a bad actor.
I felt really uneasy reading your last reply. It feels like blaming the targets of harassment for the shitty experience they get because they don't report enough… an easy-to-understand behavior when there's usually no feedback (#1685), no reaction from some instances, or just repeated bad behavior coming from different individuals. If you don't think shared blocks are the right option, then some visibility on code of conduct enforcement is really missing. Put otherwise: if more power for users is undesirable, moderators should be made more accountable for the power they have. Or?
(One might want to look at the relevant sections of Valerie Aurora and Mary Gardiner's book How to Respond to Code of Conduct Reports about visible enforcement, informing target and harasser, and communicating the response to others.)
Reports should definitely send a notification to the reporter when they're resolved, and what the resolution was.
And I don't blame targets of harassment, since I am a frequent target of harassment. I don't often comment on GitHub because they tend to show up wherever I go. It's one of the main reasons for running my own instance. I've had people harassing and stalking me for upwards of seven years. I've seen it all.
Would you have any suggestion on how to solve the fact that friends are leaving Mastodon because they are sick of getting annoying randos popping up in their mentions?
having an attentive admin and users willing to use the report feature is the best solution. If they aren't willing to report bad behavior to be dealt with, then they probably shouldn't use social media.
I've never met an admin who was willing to block someone from their entire instance just for randomly messaging people they don't follow with unsolicited opinions...! Reply guys are so prolific and so obviously not actually breaking any serious rules that all an admin ever does is recommend a personal block.
You have met one now. You are welcome to create an account on my instance.
I mean, I don't want to block someone from an entire instance just for being rude. :D I feel my point has been missed but I'm too tired to articulate it, so never mind!
I mean, it's not just being rude; being a reply guy is a form of harassment and an indicator of other, worse behavioral trends. A lot of them go from zero to extremely hostile if you so much as politely tell them to leave you alone. And I don't mean just one or two replies to an interesting post either; I mean, well, you know what a reply guy is.
Yeah, I do! And often at the first reply it's not always 100% easy to determine whether someone's a guy who's replying or they're a reply guy, right? I'd love if someone else's experiences with a harasser could help me to block someone before they even see me.
I'm sympathetic. I really am. We need better solutions, but blocklists aren't one of them.
Having to trust our admins when we join an instance is fediverse culture. I think having to trust the maintainer(s) of your blocklists could very easily be fediverse culture too.
If, as is likely to be the case, people sign up to blocklists maintained by people they don't know, and they end up blocking a bunch of people who they don't want to have blocked, who cares? That's their loss. If someone like this finds out that they're blocking people they don't want to block they can just unsubscribe from the blocklist(s).
Being the one added to a blocklist obviously sucks, but do you want to interact with someone who has thoughtlessly subscribed to a malicious blocklist? If you find out you have been unjustly added to a blocklist you can post like "I've been added to this blocklist and it is unjust, if you are a nice person you should unsubscribe." It increases the chance of drama, but like... a lot of features do that while also helping a lot of people. :P (Obviously being added to a blocklist that a lot of nice people subscribe to is not fun, but that's not a malicious blocklist issue.)
I don't think "I don't want to be blocked by thousands of people en masse" is a good enough reason to prevent a very helpful feature from being made. If I got added to a bunch of blocklists I wouldn't be like "oh my god, we must do away with blocklists", I'd be like "wow, either all of these people are thoughtless with a few malicious people, or I have done something genuinely objectionable, and chances are it's a bit of both."
For someone who has been unjustly blocked, they have to approach the blocklist maintainer and ask to be removed. On BlockTogether that's very difficult, because you get horribly interlinked webs of complex blockiness, because BlockTogether is about personal individual accounts sharing blocks, rather than a specific tool that only shares blocks. If we do this right we can make sure that a blocklist _isn't_ a normal fediverse account that shares blocks, BlockTogether-style, and that blocklists therefore can't share blocks with other blocklists. So it'll be a lot easier for someone unjustly blocked to find the blocklist where they are blocked and ask to be removed - they won't have to chase the block across several interlinked automatically-adding-people blocklists maintained by dozens of people.
If we do it right the blocklist can also explain why each person is blocked, and list criteria for why they block people generally, meaning that as long as you trust your blocklist maintainer you are less likely to end up accidentally blocking someone you might otherwise like to interact with. It would make blocklist maintainers more accountable. If you've been added to a blocklist and you can see who added you and why, that makes getting yourself removed and unblocked much easier.
I absolutely get that being added to a popular blocklist is a horrible experience. So let's not leave it to randos to code something third-party based on what they've got available to them, because that will result in another BlockTogether, which we've established is broken. Based on the discussion I've seen here and in #116 I think we can easily make a list of the recognised problems with BlockTogether and find a way to reduce or remove them. Let's make something good that is built into the fediverse and makes for accountability, transparency, and ease of appeal and removal.
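As a rough sketch of the kind of per-entry record a purpose-built blocklist could keep to support the accountability described above (all names are hypothetical, not an existing Mastodon schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlocklistEntry:
    """Hypothetical per-entry record for a purpose-built shared blocklist."""
    acct: str                       # who is blocked, e.g. "troll@bad.example"
    added_by: str                   # which maintainer added the entry
    reason: str                     # note explaining why they were added
    evidence: Optional[str] = None  # link to the post(s) that justified it
```

Keeping the reason and evidence with each entry is what would make audits and appeals ("why am I on this list?") tractable, and making the list a dedicated object rather than a regular account is what would prevent lists from silently importing each other.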
Only leaving a comment here, just to be informed when the echo chamber gets implemented
@Serkan-devel You can just click the subscribe button. :)
I've addressed most of this in the other issue (#116), and I'd like to add: no, you do not understand at all what it's like to be mass-shunned by people via automation. If you did you wouldn't say it's no big deal. Especially when a lot of marginalized people's online income relies on being able to reach a lot of people.
There is no right way to do this, and the means to cobble together a third-party version should not exist. Instead of saying "oh well, let's just make it because someone will", you should be demanding the API be changed to make such a thing impossible to implement. I'm really getting sick of having a bunch of people repeat the same talking points over and over and not listen when they are told why it won't work.
If you did you wouldn't say it's no big deal.
???
I'm really getting sick of having a bunch of people repeat the same talking points over and over and not listen when they are told why it won't work.
You can't actually know that it won't work, because you're not just shutting down a third-party bodge, you're shutting down all attempts to custom-build something fit-for-purpose and all attempts at compromise. These issues are just people coming up with legit suggestions to make something that helps people without hurting them, and you shutting everyone down based on your experience of something different that everyone acknowledges is not fit for purpose. If you are the last person standing it's because you have more stamina.
You can't actually know that it won't work
I can and I do. I've helped run blocklists; I've dealt with other blocklist maintainers. I've seen them used for malicious reasons, and it happens every time they get big. I've seen them designed in many ways to try to prevent abuse, and they wound up being abused.
because you're not just shutting down a third-party bodge, you're shutting down all attempts to custom-build something fit-for-purpose and all attempts at compromise.
Because it's, at its core, a bad idea. It doesn't matter if you want to juggle two bottles of nitroglycerin and someone talks you down to one: you are still going to regret it, and so will everyone standing near you.
These issues are just people coming up with legit suggestions to make something that helps people without hurting them, and you shutting everyone down based on your experience of something different that everyone acknowledges is not fit for purpose.
It's almost like people are ignoring the experts in favor of what they want. That happens a lot in life. It's not just "this wasn't built right", it's "there's no way to build this right". There's no way to make it only help without hurting. You will hurt people with this. That's inevitable and unacceptable.
If you are the last person standing it's because you have more stamina.
You have a lot of motivation when you are pushed by sheer unmitigated terror.
you should be demanding the API be changed to make such a thing impossible to implement
this is a non-starter because then you wouldn't be able to manage your blocks via any 3rd party app.
anyway i think it's pretty clear that subscribing directly to a list of people to block is a bad idea because blindly importing/trusting stuff is a bad idea. i don't think it's productive to keep re-treading that conversation. restating what wxcafe said in https://github.com/tootsuite/mastodon/issues/1092#issuecomment-292971803:
This system is bad for two reasons: one, it moves the moderation responsibility from admins to users, which makes the platform more difficult to use and to trust for people who get harassed, and two, it's not as efficient: every user has to subscribe to the shared blocklist. It only removes the blocked users from the federated timeline for users who subscribed, and so the environment for new users isn't safe.
so that's that, imo. a sufficiently advanced community-level moderation will supersede the need for users to do their own management and ease their burdens. it can of course be said that blocktogether-style blocklists almost entirely stem from twitter's failure to moderate its own network, thus putting the burden on the users instead, where the most influential users can abuse their status to get the masses to block people they have grudges with or whatever.
but i can say this: in the absence of a pre-emptive solution to the underlying causes of why some people might want managed/shared blocks, then you are de facto ceding that ground to the worse implementations that will certainly not spend as much time thinking through the consequences of what they build.
at the absolute, most basic level, what's stopping anyone from simply going to settings > export and getting a CSV of every single account they block, and then periodically sharing that list for people to manually import? that is probably the worst possible outcome in terms of power-abuse and lack of transparency, no?
and it's already possible today. the only thing that keeps it from being widespread behavior is that it's largely deemed unnecessary due to the report system and the fact that actual humans deal with those. yet there are still requests like this issue and the many others linked in the top-level comment for user-level shared blocklists. that should be reason for examination, no?
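To be concrete about how low the barrier already is: the blocked-accounts export is a CSV, so sharing and merging it takes only a few lines of scripting (the exact column layout is an assumption here; the point is simply that nothing technical prevents this workflow today):

```python
import csv

def merge_block_exports(mine, theirs, out):
    """Merge two exported block CSVs, assuming one 'user@domain' handle per row."""
    handles = set()
    for path in (mine, theirs):
        with open(path, newline="") as f:
            handles.update(row[0] for row in csv.reader(f) if row)
    with open(out, "w", newline="") as f:
        csv.writer(f).writerows([h] for h in sorted(handles))
```

The merged file can then be re-imported through the same settings page, which is exactly the manual, opaque workflow being described.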
so rather than repeating endlessly that blocklists are bad and rehashing the exact same arguments, let's stop to consider why those requests are being made, so that the underlying cause can be addressed and these requests stop being made. i've said a lot in #116 about how this moderation stuff can be done at a community level where it's even more readily obvious, in a way that pools the effort of various mods rather than duplicating it, in a way that allows for the separation of the community layer and the system-administrative layer. something that would make it easy for anyone to participate in a community without regard to the software they are using. and i'd like to see all the alternative proposals and analyses about this too, so that auto-blocklists don't keep getting suggested and we can do something more productive with our time.
I feel like this guy, talking about "but sometimes". (YouTube video)
But yeah, if like @trwnh says there is another solution that obviates the necessity of blocklists then I'm all for that too. :) It's just that blocklists is the only solution I've seen so far that I'd actually be comfortable with, because I could choose to subscribe to the blocklists of people I trust, and if I turn out to have misplaced that trust I can just unsubscribe.
I get why people are requesting it. I honestly 100% get it. I used to be one of those people. When people want blocklists, that shows a failure of moderation, and if moderation failures are happening it's because the mods lack the tools or they lack the empathy to use them. The first can be solved by making better tools for the mods; the second is solved by moving to an instance whose mods care.
So we have to identify the failure. Are we lacking in mod tools or are we lacking in good mods?
So we have to identify the failure. Are we lacking in mod tools or are we lacking in good mods?
To come back to what I said earlier, if I compare the current Mastodon handling of reports with the recommendations from Valerie Aurora and Mary Gardiner's book (which I strongly suggest reading) and reflect on friends' experiences, I feel we miss:
- tools to enable users to trust the moderation process,
- tools to help moderators publicize their decisions (not shaming specific people but telling the community about their work and making unacceptable behaviors visible),
- easier way for users to make visible what kind of interactions they'd like (which makes boundary crossing as easy to rule when reported).
I want to trust @Laurelai, and I believe this should be tried before shared block lists. But I hope progress can be made soon. The current situation is really frustrating for many.
tools to enable users to trust the moderation process,
Like people getting feedback on reports? I agree, we need that like yesterday. I have to manually tell users I acted on my reports and that's a pain in the butt. I'd rather have it be automatic.
tools to help moderators publicize their decisions (not shaming specific people but telling the community about their work and making unacceptable behaviors visible),
I'm all for a public modlog option. I'd make my instance's modlogs public.
easier way for users to make visible what kind of interactions they'd like (which makes boundary crossing as easy to rule when reported).
Agreed. Things like granular user controls on posts, for example? Instance-only posting and such. You will get fewer reply guys if you can control who can even see your posts in the first place, for example. Though I think more mods and admins should treat unwelcome, unsolicited constant replying to women as the serious issue that it is.
Another tool that might be useful: being able to make a list of people who are the only ones able to see your toots. Kinda like how you can do that on Facebook, where you can select specific people to see a post, as well as exclude specific people from a post.
Another useful tool people could have is chain blocking, where I can block a bad actor and all of their followers as well. Bad actors tend to run in circles.
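A chain block of that kind is already expressible with the existing client API (`GET /api/v1/accounts/:id/followers` plus the block endpoint); a hedged sketch, with a hypothetical instance and token:

```python
import requests

INSTANCE = "https://mastodon.example"      # hypothetical instance
HEADERS = {"Authorization": "Bearer ..."}  # token with read + write:blocks scopes

def chain_block(account_id):
    """Block an account and (the first page of) its followers.

    A real tool would follow the pagination Link headers to cover every page,
    and would probably exempt accounts the user already follows.
    """
    requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/block", headers=HEADERS)
    followers = requests.get(f"{INSTANCE}/api/v1/accounts/{account_id}/followers",
                             params={"limit": 80}, headers=HEADERS).json()
    for follower in followers:
        requests.post(f"{INSTANCE}/api/v1/accounts/{follower['id']}/block",
                      headers=HEADERS)
```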
There are multiple failures, but it is easy to move forward and fix a lot of them:
Depending on the situation, the moderation failure can potentially go either way: either too heavy an action is taken, or too little. This leads to people having less faith in the ability of the moderators to be fair and impartial. However, we need to build better tools which enable moderators to do a better job.
The reason why moderation is flawed in this way over in this part of the fediverse is that one of the two primary software platforms used has moderation tools that are perceived to be excellent on the surface but are actually quite limited in nature.
Having participated in moderating Mastodon, Pleroma and GNU Social instances, I have a few observations. In Mastodon (and to a lesser extent in GS), most of the moderation tools are heavy-handed and aren't built in a granular way. What we need is a more granular approach to moderation that allows for the appropriate touch to be used in all cases (which is what is being built in Pleroma).
Reports as presently implemented in Mastodon have no mechanism for followup by moderators. We have made some headway on building federated reporting that will eventually have a followup mechanism, though; for example, federated reports now have stable IDs. We still need to determine the vocabulary used for communicating followups and dispositions of reported content. Once that is in place, we can send notifications out that show the disposition of the report. With stable IDs, the followups and disposition updates can remain anonymized if desired.
Mastodon only has the 4 visibility scopes (as does Pleroma right now), while GNU Social has no visibility scopes. If we add support for addressing lists, it will go a long way toward solving this particular issue.
The "reply guy" problem (as described in this bug) is annoying largely because it wastes your time on replies from people you have no interest in interacting with anyway. The solution is to build software which respects the attention you're paying to it. Distributed blocks don't solve that problem.
At any rate, shared user-level blocklists will not solve any of these problems, but solving these problems will, in general, solve the frustrations which lead to the proposal of shared user-level blocklists.
Put differently, instead of inventing yet more ways to block bad actors, it is better to defang their ability to act badly to begin with.
if I turn out to have misplaced that trust I can just unsubscribe
the problem is that most people never find out. that's bad for community, but less bad than dealing with abuse and harassment. which is why trying to safeguard against the problem directly is much better than letting it get to that failed state where users have to fend for themselves.
Are we lacking in mod tools or are we lacking in good mods?
mod tools need to be improved, but also the labor of mods needs to benefit the most people possible. every time there's an uptick of spambots they have to be suspended from multiple instances; i just think it'd be more efficient to have them suspended on a meta-network level. if we can establish a meta-network that moderates based on consensus to block spammers, then it becomes a bit less necessary to care about "which instance should i join", which is a huge problem for potential users. you could easily set up your own instance and still benefit from the meta-level moderation. i know "the mastodon network" is a meme, but having well-behaved instances participate in a relay with meta-moderation to provide a solid baseline of anti-spam/etc would be pretty cool. local mod decisions can still be applied on top of the global relay-mod decisions, so you can still choose to join someone else's instance (as i laid out in #116).
tools to enable users to trust the moderation process, tools to help moderators publicize their decisions (not shaming specific people but telling the community about their work and making unacceptable behaviors visible), easier way for users to make visible what kind of interactions they'd like (which makes boundary crossing as easy to rule when reported).
i would be in support of all those things, yeah. transparency in reports so modlogs can be audited, notifications when reports are resolved so that people know their reports are valued, and even just basic social stuff like having moderators actually mediate in situations to help de-escalate conflicts and correct behaviors in cases where people are unaware they did anything wrong. silently blocking someone is a sign that something went wrong in that process, it's a fundamentally hostile response that is only necessary when boundaries are being violated in a way that is unaddressed.
[various features being suggested to control audience and capabilities]
aspects/circles/audiences/addressing/visibility should be exposed to users imo https://github.com/tootsuite/mastodon/issues/7182#issuecomment-414786683 but this should be distinct from the currently-existing "lists" feature which is for sorting what you see, not what others see. i actually think there's an exciting possibility to do this stuff on top of the recently added relationship manager too as i said in #10306
i also agree with kaniini that moderation should be granular and scoped rather than a binary choice between ignoring and nuking someone. and in the longer term as well we should be exploring a capability-based approach to each post, or at least a way to signal the wishes of the author, e.g. "comments are disabled on this post so don't bother replying because we'll reject it" should be formalized with an actual `Reject` activity or something.
https://github.com/tootsuite/mastodon/issues/10304#issuecomment-474599011
at the absolute, most basic level, what's stopping anyone from simply going to settings > export and getting a CSV of every single account they block, and then periodically sharing that list for people to manually import? that is probably the worst possible outcome in terms of power-abuse and lack of transparency, no?
This already happens, at least at the instance admin level. While I don't blindly follow every block list posted by instances, I place trust in those instance maintainers when they do block other instances and users, and add them to my instance's block list.
There is nothing to prevent this from happening out of band, and I'm certain it already is happening. And I'm certain it's a worse solution, due to the lack of transparency you've mentioned.
This is the problem with doing "nothing": people organize, and often offload their trust to other users. In fact, there are entire systems built around webs of trust, i.e. GPG key-signing parties. So, if it's not done in-band, in a manner that CAN offer transparency, we WILL get even worse solutions, like a BlockTogether for Mastodon, complete with the problems we've already identified.
at the absolute, most basic level, what's stopping anyone from simply going to settings > export and getting a CSV of every single account they block, and then periodically sharing that list for people to manually import? that is probably the worst possible outcome in terms of power-abuse and lack of transparency, no?
Because it's a pain in the butt and it's not automatic, it's not a major issue. I'm sure it happens, but it requires active effort to maintain such a system, effort that falls off over time. It doesn't scale up beyond a few people very well, and thus I'm not too worried about it. An automated system is low-effort for the end user and thus will draw more people, increasing the sum total of power that can be abused.
To quote someone I know: if we can't trust Mastodon users with quote toots, what makes us think we can trust them with block lists?
BlockTogether has, in practice, been a way to crowdsource violence against marginalized people, one that grifters (often marginalized themselves, but very often with an extremely tenuous claim to such) with an excess of social capital can use to farm additional social capital. I do not want to bring that grift to the fediverse; it should stay on Twitter where it belongs.
I had today a very unpleasant experience on the Fediverse.
I have a lot of friends on this social media, and they are from different instances. A lot of admins know each other and work together to create nice places for their communities.
Today, a member of this community was verbally assaulted. I reported the user assaulting them and made a public post about it. I took the time to take the screenshots on my phone, to edit them to anonymize them so the victim could not be recognized and harassed even more than they already were, to post them, and to write an understandable message with convenient CWs. It took a lot of my time and of my energy.
Then I poked some admins I know on my post to warn them. It again took some time and energy. Those admins told me that they had already muted and blocked the user I was reporting.
Then, people started to harass me under my own post. I received a few notifications from them that I quickly dealt with by reporting, then silencing and blocking them. It also took a bit of my time and energy. You can have a look at the whole shitstorm here: https://freespeechextremist.com/notice/9lbzOI1jIQbFtXL9XM (also, they think I'm the one who interacted with the first harasser, but I'm not, so the whole whiny thing about them being victims of harassment is ridiculous).
And none of it would have happened if admins, moderators and users could federate their blocklists when they work together:
@Gargron you and only you have the last word on every feature Mastodon implements. You need to make a decision quickly about how you want to manage it. Right now, with all the users coming from the freespeechextremist platform, we, the minorities, are in danger. You shouldn't spend time on implementing new features. Your job now is to stabilize your platform and your community. Do you want your community to be composed of awful people like the ones harassing us? Because if you do nothing, that's what will happen. You NEED to talk with your team about this BIG issue and you need to find sustainable solutions.
I will copy-paste this on every issue talking about blocking if I think it's relevant; feel free to delete it if you want to keep it in a single place.
@DarckCrystale I'm very sorry that you had an unpleasant experience today, but how would federated blocklists have prevented it? Ultimately the responsibility still lies with your admins (e.g. the admins of pipou.academy) to suspend abusive users/instances. If pipou.academy's admins have not seen fit to suspend an instance as prominent as freespeechextremist, then would they see fit to subscribe to another admin's blocklist? Although as a user, you also have the avenue of blocking domains yourself by clicking "hide the entire domain" on profiles of users from those domains, ideally this would not be necessary because your admins would have already suspended the abusive domains/users. That they had not done so prior to today is worrying. Have reports been ignored up until now? Have they not been filed at all?
To reiterate earlier discussion, I do think that it would help to have an audit log of Flag activities, and perhaps it could be possible to forward Flag activities.
Some instances are more heavy-handed with blocking than others. Any instance that doesn't block FSE is probably "the others". It would be advisable to look for an instance that your friends are on that blocks instances such as FSE, if people being obnoxious trolls is greatly upsetting to you.
people being obnoxious trolls is greatly upsetting to you
Nope.
People harassing me and my friend will kill me.
That's not only trolling, and that's not only upsetting.
@trwnh see #11510 for further discussion of what could be done about my particular problem.
I think the dev team NEED to come up with ideas and like, spend time thinking about how to improve the platform for minorities more than they currently do.
i mean big same, that's why i run my own instance and am quick on the defed trigger, but you WILL exhaust yourself and burn out your spoons reacting to trolls trying to get you to react the way you are. remember they think that getting visibly upset is funny, so take or leave my advice, but as someone who has been around the block a few times, it's really helpful to your peace of mind and safety, and that of your friends, to vent about these things in private spaces instead of out in public where it will be used to fuel further harassment.
this is also why instead of being on an instance that hasn't bothered to block FSE, i highly suggest finding one that has, for starters. there is nothing the mastodon project can do about shitty instances, even if they're running mastodon (FSE is not even running Mastodon), but there is something YOU can do by choosing a server--or running your own--that is aligned with your needs and safety in mind.
to vent about these things in private spaces instead of out in public
I didn't vent in public :confused:
this is also why instead of being on an instance that hasn't bothered to block FSE, i highly suggest finding one that has, for starters.
I know the admin of my instance, I know they have a life like anyone else, and they can't just be robots plugged into the instance to answer all my needs. They did mute and block the problematic users/instances at once when they logged in, like one or two hours after I reported it. I will not change to another instance if the problem comes from the model and the workings of the tool and not from the people managing it. I chose this one because I know it fits my needs perfectly by the way it's managed.
Also, as you can see, it's not only FSE members talking in the thread.
Yeah, those people are kind of The Usual Gang Of Assholes, so while it's good you have an in with your admin, it's not a question of whether they're "plugged in 24/7", it's a question of whether they're plugged in at all, if those people and instances aren't defederated already.
Maybe they'd be amenable to making you a mod as well, which would allow you to take care of these problems as they arise rather than relying on them to be around if it escalates quickly.
Your solution will work only in one case. You are not really addressing the whole problem.
I don't think you understand that Mastodon is not Twitter on a fundamental level. There is no central "Mastodon" server. Mastodon is software anyone can run on their own hardware and run their own social network as they see fit, and out of the box it can connect with software that speaks the same language (ActivityPub) for a broader social network. This means that assholes like Jack cannot control our experience by say, banning leftists for posting "terfs get the wall" while letting nazis run free. But this also means that if nazis want to run their own server they can--and do, see gab.
There is no central authority to appeal to on the fediverse if you are having issues with other people on it; however, you have the power to control your own experience. There are some technical issues that limit some of this control at the moment, but one of the big items in development right now plugs that hole (to put it briefly: if you block a server, they can still see things from your server because there is no identification when asking for content, and work is being done on supporting servers identifying themselves when asking, as well as an option to only allow other instances to get your content if they identify themselves).
Ultimately, you have to take responsibility for your experience on the fediverse. There's no way around that other than your admin knowing exactly what you want and doing it for you, which is a bit of an unreasonable expectation. A lot of care has been made to make Mastodon at least difficult for creeps and harassers to use to be creepy and harass, but it is impossible to plug every hole or stop someone determined with software alone.
I don't think you understand that Mastodon is not Twitter on a fundamental level.
I pretty much do. Thanks for explaining anyway. That's why I said "the fediverse" in my first post.
There is no central authority to appeal to on the fediverse
Yes, there is one: @Gargron, who :+1:s or :-1:s the features he is OK with implementing.
There are some technical issues that limit some of this control at the moment
And those issues are not high priority, because members of the Mastodon dev team don't want to spend time on them. Instead, they want to develop new features, like trending hashtags for example. That's my problem: the project management.
that hole
I talked about it with a Mastodon developer; I know this exists. But again, thanks for explaining.
Ultimately, you have to take responsibility for your experience on the fediverse.
And the project manager and devs need to take responsibility for the tools they create.
A lot of care has been made to make Mastodon at least difficult for creeps and harassers to use to be creepy and harass
And a lot of care still needs to be put in, as it is in no way enough. My post on this issue was a reminder of that.
it's OK for the devs to work on multiple things at once. authenticated fetch is in and working, and instances like mine are running on master in production to test it and make sure it works and is ready -- it's a big change and there have been Problems. it's important to make sure it works right and there aren't any big holes in it so that the stable release doesn't cause big safety problems. no amount of pounding on a keyboard in development makes this go faster; it has to be tested with real instances for a while to weed out all the edge cases.
trending hashtags isn't my favorite feature either, but the work being done on them, instead of lazily throwing The Algorithm at it, is to design them so that trends have to be approved manually, so that people cannot use trending tags to harass people.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.