The first question that I (and at least some other folks) ask when looking at a new platform is: how will this be used to send me unwanted messages, and what can I do about it?
Even if the answer is, "we have no idea how to solve that problem", it would be good if the FAQ or other public documents addressed this directly.
This is a good start for a FAQ. I still don't understand how these things interact with federation. E.g. can someone just run a spammy/abusive server and ignore all reports?
Your instance is your gatekeeper in that case. Someone can run a spammy/abusive server, but your instance's admin can blacklist that server. Or a particular account from that server.
So I have to trust my admin. Of course, I have to trust my admin anyway, but I have to trust them to be on the ball, rather than just to benignly neglect me.
Can't a spammer just change their server's name? Or their account name?
Those changes aren't easy to make (especially changing the server name, since it involves purchasing a new domain name), and they are easy to block again.
a.example.com, b.example.com, c.example.com... No need to buy a new domain name.
Anyway, if the solution is that admins have to play whack-a-mole, then that's the solution. It's just worth documenting that so that users know what they have to worry about.
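To make the whack-a-mole concrete: one common mitigation for the subdomain trick is suffix matching, so blocking a domain also blocks all of its subdomains. This is just an illustrative sketch (not Mastodon's actual code; the names here are made up):

```python
# Hypothetical sketch: an admin blocklist that matches a blocked domain
# and all of its subdomains, so blocking "example.com" also catches
# a.example.com, b.example.com, c.example.com, etc.

BLOCKED_DOMAINS = {"example.com", "spam.net"}  # assumed admin blocklist

def is_blocked(domain: str) -> bool:
    """Return True if `domain` or any parent domain is on the blocklist."""
    labels = domain.lower().split(".")
    # Check the domain itself, then every parent suffix.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            return True
    return False

print(is_blocked("a.example.com"))    # True: subdomain of a blocked domain
print(is_blocked("mastodon.social"))  # False
```

This doesn't help against buying genuinely new domains, but it does close the free a/b/c-subdomain loophole.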
If you don't trust your admin, you can migrate your account pretty easily (using follow and block list import/export) to a server run by an admin you do trust. So far, harassment has not been much of an issue, and cases of harassment have not been frequent enough for the whack-a-mole approach to be difficult.
One feature that has been brought up in the past, but which has not been implemented because it isn't yet needed, is a switch that filters out notifications from users with the default avatar (a "hide eggs" mode). If we start having an Egg Problem, that is something that could be implemented. Right now it seems low priority, since we don't have people saying they're getting harassed by users with default icons.
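A "hide eggs" switch could be as simple as a filter over incoming notifications. This is a hypothetical sketch, since the feature isn't implemented; the avatar sentinel and type names below are made up for illustration:

```python
# Hypothetical "hide eggs" notification filter (illustrative only;
# this switch is not implemented in Mastodon).

from dataclasses import dataclass

DEFAULT_AVATAR = "missing.png"  # assumed sentinel for the default avatar

@dataclass
class Notification:
    sender_avatar: str
    text: str

def visible_notifications(notifications, hide_eggs: bool):
    """Drop notifications from default-avatar accounts when hide_eggs is on."""
    if not hide_eggs:
        return list(notifications)
    return [n for n in notifications if n.sender_avatar != DEFAULT_AVATAR]
```

The filtering happens per-user at display time, so it's an opt-in switch rather than a server-wide policy.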
FWIW, I think "default icon" is not actually what Twitter used (but I didn't work in that department, so check for yourself). Instead, it was the age of the account. Of course, without centralized accounts, that's trickier to track.
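An account-age heuristic like the one described above could look something like this sketch (the threshold is an assumption, and as noted, in a federated system the creation date comes from the remote server, so it could be spoofed):

```python
# Hypothetical account-age filter: treat accounts younger than a
# threshold as suspect. The 7-day threshold is an assumed value.

from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(days=7)  # assumed threshold

def is_established(created_at: datetime, now: datetime) -> bool:
    """True if the account is old enough to pass the age filter."""
    return (now - created_at) >= MIN_ACCOUNT_AGE
```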
If I migrate my account, do my followers also have to migrate? Or is there some sort of forwarding?
The only reason that unwanted content is not yet an issue is that the platform is not yet popular.
Another feature that's been in the works since before Twitter's recent mass-tagging problem is thread-muting: a way to say "stop giving me notifications on this thread." It's a proactive anti-harassment feature, built before anyone has even tried to use this tactic here.
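Conceptually, thread-muting just means keeping a per-user set of muted conversation IDs and checking it before delivering a notification. A minimal sketch, with made-up names (this is not the actual in-progress implementation):

```python
# Hypothetical thread-muting sketch: notifications for muted
# conversations are dropped before delivery.

muted_threads = set()  # per-user set of muted conversation IDs

def mute_thread(thread_id: str) -> None:
    """Stop notifications for this conversation."""
    muted_threads.add(thread_id)

def should_notify(thread_id: str) -> bool:
    return thread_id not in muted_threads

mute_thread("thread-42")
print(should_notify("thread-42"))  # False: thread is muted
print(should_notify("thread-7"))   # True
```

Because the check happens on the recipient's side, it works even against mass-tagging from servers that don't cooperate.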
As for rapid domain-changing, I'm not sure what could be done about that. I haven't done any work on the back-end, so it's outside my realm of knowledge.
A benefit of the project being open-source is that everyone can contribute, and if somehow something happened to gargron, we could just fork the project, make new instances, and continue usage and development, albeit at a slower pace. Dissatisfied users are not beholden to anyone and have the ability to undo or change things they don't like. (For instance, the website used to be very low contrast, which I went and fixed. I know of someone developing an entire alternative web interface, which they can do because of the open API.)
A downside to being open-source is that we cannot control what other servers do. There was someone who set up a single-user instance and modified the code so that their instance did not have a character limit. This resulted in Go submitting #658, which makes long posts collapse, preventing someone from setting up an instance, following themself from another instance, and then spamming. Though the person who made the limit-free instance did not use it to spam (and is actually the same person who implemented user muting), we recognized the potential.
Anyone can fork the code and simply make their instance not run any anti-harassment features we implement. Our instance would still have those features, but we can't dependably make servers have unique IDs, since someone could just modify their code and change the ID. At present, there's no known way in the OStatus protocol to check another server's software and version. A server is a server, whether it runs Mastodon, GNU social, or postActiv. Even if we implemented some sort of handshake, someone could modify their code to falsely declare a compatible Mastodon server when it actually runs something else.
One thing that limits this is that the federated timeline only shows users whom someone on that particular instance follows. If we achieve actual decentralization (rather than most people being piled onto mastodon.social), then in order to spam everyone, a spammer would have to make accounts on every instance they wanted to spam and then follow their own instance to hook the servers up. This is possible, but increasingly a pain. And even then, users can switch to the Local Timeline while they wait for the situation to be handled.
Unfortunately, whatever systems we implement to counteract harassment, anyone committed enough will find a way to circumvent them. Trolls seem to have immense creativity in developing new tactics... which is terribly wasted talent. It'd be nice if, rather than using those tactics, they helped find ways to counteract them. Even if we find a way to avoid user/server whack-a-mole, there will always be method whack-a-mole. At this point, we have features already implemented or in development that counteract every harassment method we have identified as potentially in use. The exception is the rapid domain-changing you brought up, which is definitely something we should make a plan to counter at some point.
If you have other concerns, I'm happy to try my best to answer. A very large portion of our users and dev volunteers are LGBT people who have been targeted for harassment in the past, and it has been a huge priority for us to make this site feel safe. I myself have even been doxxed. Trust me that I'm always thinking of ways to combat this. My capabilities personally are only front-end, so beyond that all I'm doing is convincing gargron :P, but I know he cares deeply about this as well, since harassment is one of the main things that drives people off of Twitter, and being better than Twitter at handling it will make or break the project.
Anyway, the request here seems to be to create a document or page specifically outlining what is being done to counteract harassment. I'd be happy to help work on such a document if we could reach a consensus on where it should live.
Probably not a fit for a GitHub issue; feel free to use Discourse.