When I’m logged into mastodon.social, I can view the profiles and read the toots of users on instances that have suspended mastodon.social
Process I’m following:
1) Log into my account on mastodon.social
2) Search for user on an instance that has suspended mastodon.social
3) Select that user from the search results
4) View their profile and public toots.
Domain blocking works on the local database, i.e. when processing incoming data. It does not affect how public, unauthenticated resources can be accessed. In fact, the domain blocking screen clarifies this explicitly: The "suspend" option applies a "suspend" to every past & future individual account from that domain, meaning that their data is deleted & never stored.
It works this way, but it shouldn't. You designed it like this, and you shouldn't have. That's the issue.
> It works this way, but it shouldn't. You designed it like this, and you shouldn't have. That's the issue.
I designed this feature within the constraints of OStatus in October 2016. Its primary purpose has been to remove offending/illegal content from one's server and prevent further harassment coming through from the blocked source. That purpose is fulfilled correctly.
Looking forward to your implementation.
But it doesn't prevent harassment. People from instances I've blocked can still see my posts and organize harassment against me; they have done it to others too. That purpose is not fulfilled correctly.
> Looking forward to your implementation.
You literally get paid to code this. It's not too much to ask you to fix it when it's not right.
> But it doesn't prevent harassment. People from instances I've blocked can still see my posts and organize harassment against me; they have done it to others too. That purpose is not fulfilled correctly.
I said:
> prevent further harassment coming through
This is the most that can be done under the constraints of the system, as people on other, independent servers cannot be prevented from writing what they want. It becomes more obvious when you substitute "people from instances I've blocked" with "people who are not on any instance at all", as long as the content you are publishing must be publicly accessible.
If you do not need your content to be publicly accessible, you can broadcast private posts. Since messages from blocked servers are discarded, follow requests from them never appear and are never authorized, so private posts are not distributed to such blocked servers, and cannot be retrieved, since they are not public.
This is described in detail in the Mastodon privacy policy.
> You literally get paid to code this.
I am paid to develop Mastodon. I am not paid to be a magician. If an architect tells you that you can't remove a wall because it's a load-bearing wall, they've done their job. If you want a miracle performed, try it yourself.
You still aren't getting it, because you have never faced serious harassment. They can use their instance, which you can't see but which can see you, to organize more harassment from other instances they federate with and that you can see. All the block does is hide from you the fact that they are doing this.
I've lost count of the number of times people on an instance I've blocked have dragged their shitty friends from other instances I hadn't heard of into a thread conversation I was having with someone else, and I didn't realize at first that they were targeting not only me, but the person I was talking to and anyone making positive replies. All the block did was keep me from noticing it was happening.
This is like me telling you that the load-bearing wall you built probably shouldn't have uninsulated wires running through it, because for some reason you insulated it with grass clippings and it's a fire hazard, and your response is to tell me to fix it myself because you claim the drywall will stop any fires.
I want to make sure I understand this designed behavior:
1) Instance A is run by Nazis
2) Instance B has users who would be targeted by Nazis
3) The admin for Instance B suspends instance A to protect the users of B
4) Users on Instance A can still see users on Instance B and organize violence against them
Yes, correct, unfortunately. As I mentioned, there is no fundamental difference between Instance A in your example, and say, Kiwi Farms, in terms of being able to see public content and organize violence. To circumvent a block on Twitter, one need merely open the profile in a private tab.
As I likewise mentioned above, your only option is to go private.
If you come up with a solution on your fork, I'll take a look :+1:
"Blocks can be circumvented" is a piss-poor reason to forgo properly implement instance blocking.
Telling others to "do it yourself" is the height of arrogance.
Mastodon users deserve someone better.
The more barriers you put up, the more people you deter; just because something is possible to bypass doesn't mean most people will bother. Adding effort to attacking people deters more attackers. I have 10 years of community management experience, and this is reality.
People like Kiwi Farms manage to dodge state actors going after them (for now), but that doesn't mean you shouldn't try to make it harder for people who aren't state-level actors to harass others. There is a difference between Instance A and Kiwi Farms: Instance A is on the fediverse and can direct unwanted communication at you with ease, while Kiwi Farms has to come to the fediverse.
There's a reason that doesn't happen as often as harassment on the fediverse, and the reason is that it's more effort; some of their users simply find it too much effort to create yet another account on yet another website.
Features that deter and mitigate harassment are never wasted lines of code.
If ActivityPub truly won't allow anything like what people are asking for, then it is again up to you to petition them to change or add features that will allow it. They are more likely to listen to you than to us, and you know this. You have already petitioned them for changes when it was something you cared about.
You seriously have a bad attitude. Telling people to fix it themselves when they point out problems is the most condescending, bad-faith response in programming.
I don't have to be an auto expert to know that my car's design is flawed and is causing problems for me.
I don't have to be a food-preparation expert to know when a refrigerator doesn't work as expected and my food spoils too quickly.
I don't have to go to the manufacturers and design the solutions for them.
If literally any other field of work responded this way to design flaws being pointed out, there would be massive public backlash, and rightfully so.
Why do you think this is an appropriate response?
I'm sorry about your woes, but there is no reliable way of blocking outgoing content based on a requesting domain, only incoming content where the offending instance has to identify itself.
However, your instance admin can implement IP-based firewall rules to block all incoming network traffic from IP(s) derived from the offending domain. But this is out of the scope of Mastodon, or any other decentralized social media platform for that matter.
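For illustration only, here is a minimal sketch (not part of Mastodon; the domain is a placeholder) of how an admin might derive such IPs before writing firewall rules. Note that rotating addresses and CDN fronting make this approach fragile:

```python
# Sketch: resolve a blocked domain to its current IPs so they can be fed
# into firewall rules. "badactor.example" is a placeholder domain.
# Caveat: addresses rotate, and CDN-fronted domains share IPs with
# unrelated sites, so this can both over- and under-block.
import socket

def resolve_ips(domain: str) -> set[str]:
    infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

for ip in resolve_ips("badactor.example"):
    # One firewall rule per address, e.g.:
    print(f"iptables -A INPUT -s {ip} -j DROP")
```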
> but there is no reliable way of blocking outgoing content based on a requesting domain
There should be. Like. Why isn't there? It seems like a basic thing that a social media website would need. See, when you have large groups of humans online who can interact with each other, people need reliable ways to make sure people they don't want to interact with cannot interact with them.
> your instance admin can implement IP-based firewall rules to block all incoming network traffic from IP(s) derived from the offending domain
As far as I know, we don't get IP information from other instances' users unless they directly view a link from our instance in their browser (and not just within their own instance), and in that case it's often hard to tell where they are coming from. And if for some god-awful reason Mastodon is federating its users' IP addresses, we have some major privacy issues.
> There should be. Like. Why isn't there?
Because of the way DNS works. You can reliably get an IP address from a domain name, but not the opposite. So you can never be sure an IP requesting your public post isn't actually coming from the mastodon.social domain.
> Seems like a basic thing that a social media website would need.
Social media websites are built on Internet technology, which means they inherit its benefits (they can be accessed from anywhere in the world) but also its inherent drawbacks (the inability to formally identify requests).
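A quick way to see the asymmetry just described, as a runnable sketch:

```python
# Forward lookups (name -> IP) are under the domain owner's control, but
# reverse lookups (IP -> name) are under the IP owner's control and often
# return a generic host name or nothing, so an incoming request's IP
# cannot be reliably mapped back to a fediverse domain.
import socket

ips = {info[4][0] for info in socket.getaddrinfo("mastodon.social", 443)}
print("forward:", ips)

for ip in ips:
    try:
        name, _, _ = socket.gethostbyaddr(ip)
        print("reverse:", ip, "->", name)  # rarely the domain you started from
    except socket.herror:
        print("reverse:", ip, "-> no PTR record")
```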
> As far as I know, we don't get IP information from other instances' users unless they directly view a link from our instance in their browser (and not just within their own instance), and in that case it's often hard to tell where they are coming from. And if for some god-awful reason Mastodon is federating its users' IP addresses, we have some major privacy issues.
It's true, I only considered server-to-server communication. If the process @SelfsameSynonym described happens exclusively from the browser, then what I said is moot and there's not much that can be done.
Except.
If instances publish their lists of instance suspensions, then mastodon.social could selectively hide, in its user search, users from instances that have marked mastodon.social as suspended. This could make this kind of harassment much less convenient. It wouldn't be trivial to implement, but it would be possible to cover this particular issue.
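To make the idea concrete, a rough sketch of what that search filter could look like. The endpoint path and JSON shape here are hypothetical; no such published list existed at the time:

```python
# Sketch of the idea above: before showing a remote account in search
# results, check whether its home instance has published a suspension of
# us. The endpoint and response format are hypothetical.
import requests

LOCAL_DOMAIN = "mastodon.social"

def suspends_us(remote_domain: str) -> bool:
    url = f"https://{remote_domain}/api/v1/instance/domain_blocks"  # hypothetical
    try:
        blocks = requests.get(url, timeout=5).json()
    except (requests.RequestException, ValueError):
        return False  # list unavailable: fail open, don't hide anyone
    return any(block.get("domain") == LOCAL_DOMAIN and
               block.get("severity") == "suspend" for block in blocks)

def filter_search_results(accounts):
    # accounts: [{"acct": "user@example.com", "domain": "example.com"}, ...]
    return [a for a in accounts if not suspends_us(a["domain"])]
```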
There's some serious misunderstanding here.
A public thing is public. You cannot "remove" it or whatever. When you open a status link in a browser, you're not asked to authenticate (it could work like that, but that's not how the Fediverse works right now, and it wouldn't be public).
There's only one way to hide something from someone: make it not public (and making the current level of "public" actually non-public means breaking the Fediverse).
I see no way to do this. I would encourage others to come up with ideas for how to do it (not designs, not code, just ideas). Maybe I'm missing something here and we can all benefit from it.
> If instances publish their lists of instance suspensions, then mastodon.social could selectively hide, in its user search, users from instances that have marked mastodon.social as suspended. This could make this kind of harassment much less convenient. It wouldn't be trivial to implement, but it would be possible to cover this particular issue.
This is part of what we have been asking for, and we have constantly been told it's impossible while we insisted it wasn't. So you are confirming for us yet again that it is in fact possible. Thank you.
I understand it might be difficult to implement, but it would increase end-user safety to a point that makes it well worth the effort.
And if we can publish and share instance suspensions, we can also prevent blocked instances from ever seeing any of our posts through their interface, just like user-level blocks work. It's perfectly doable.
> When you open a status link in a browser, you're not asked to authenticate (it could work like that, but that's not how the Fediverse works right now, and it wouldn't be public).
That's not what we are asking for. We're asking for our posts not to be seen by a blocked instance from within that instance's interface.
Okay, that's possible. It's not really useful, I guess (it's like putting paper over someone's window, which they can easily remove), but it is possible at least.
Oh no, it is useful. People routinely do things mainly because they're convenient. Remove the convenient way and most people will drop it. This is especially useful in harassment cases, where numbers count.
We could signal other instances that we are blocking them, but this would both explicitly tell potentially hostile instances that they are blocked and rely entirely on them honoring that signal.
There are alternatives that are slightly harder to bypass (though still bypassable, unless federating only with hand-picked instances), but they are much, much more involved.
Someone could make code changes to not honor personal blocks, but that didn't stop them from being implemented. Roadblocks and speed bumps reduce harassment; no solution is ever going to be perfect. This doesn't mean you shouldn't try it.
@Laurelai yeah, but when you're blocking only some accounts, you're not assuming the instance as a whole is hostile.
The proposed solution would also explicitly signal those hostile instances that you are blocking them, which might bring more harassment. This is one of the reasons that make me hesitant to try implementing it. (The other is that it might bring a false sense of security, but that's apparently already the case, so…)
> Someone could make code changes to not honor personal blocks, but that didn't stop them from being implemented.
Because it would require that person to be the instance admin. In this case, it is the instance admin we are talking about, in a scenario where their maliciousness is a prerequisite, and it would be absolutely trivial for them to simply not respect the list. It is an awful solution because it gives you a false sense of security while doing nothing at all.
> It is an awful solution because it gives you a false sense of security while doing nothing at all.
I’m glad you understand the problem with the way instance suspension works
> @Laurelai yeah, but when you're blocking only some accounts, you're not assuming the instance as a whole is hostile.
> The proposed solution would also explicitly signal those hostile instances that you are blocking them, which might bring more harassment. This is one of the reasons that make me hesitant to try implementing it. (The other is that it might bring a false sense of security, but that's apparently already the case, so…)
Most people, even hostile people, wouldn't do this. Most instance admins aren't coders. It will stop enough people to be worth it.
@Gargron there is some value in blocking an instance with such a mechanism: you could be blocking an instance not because the admin is actually hostile, but because they are bad at moderating their users. I don't know how often that would happen, though.
Wrt. the false sense of security, people already tend to expect that instance blocks go both ways, so…
> @Laurelai yeah, but when you're blocking only some accounts, you're not assuming the instance as a whole is hostile.
> The proposed solution would also explicitly signal those hostile instances that you are blocking them, which might bring more harassment. This is one of the reasons that make me hesitant to try implementing it. (The other is that it might bring a false sense of security, but that's apparently already the case, so…)
>
> Most people, even hostile people, wouldn't do this. Most instance admins aren't coders. It will stop enough people to be worth it.
It only takes one person doing a fork that doesn't respect it, and other hostile admins using it.
> It only takes one person doing a fork that doesn't respect it, and other hostile admins using it.
So why bother having personal blocks or suspensions at all? Why bother having content warnings or any safety feature? By that logic, someone could fork Mastodon without any of those and propagate it among shitty admins. Sure, they could; you can't stop them from trying. But that's not a reason not to add better features that protect users.
@Laurelai personal blocks are the only way to block someone from an instance while allowing other people there to follow you, and they assume the instance admin is not hostile. Content warnings have absolutely nothing to do with any of this.
That doesn't mean we shouldn't implement it, but it means its effects are going to be very limited.
And as I said earlier, it has an adverse effect: notifying hostile instances that you are blocking them.
@Laurelai Well, we can certainly take a look at what you come up with in your fork. I'm always on the lookout for good changes to port upstream.
Believe me, Laurelai, I'd rather not have to notify you in the event I set up a new instance, and have you go on about me for a week or whatever you do (and did last time), just so I can ensure starrevolution's instance blocking works fully. But there isn't an easy solution to "allow open federation" and "but prevent some people from getting the statuses", and as someone who has been trying to learn enough about the underlying protocol to make my own proposals, as well as seek out people with actual sound ideas and solutions to the problem, it involves a lot more than yelling at a developer about it. Publishing block lists creates more problems than it solves, so your proposed solution is both half-baked and ineffective, and would lead to more drama and bad-faith behavior than exists now while being trivial to circumvent.
If you do come up with a good solution, maybe a cryptographic token scheme or something that still allows easy open federation but prevents blocked instances from decrypting the blocking instance's content, by all means we'd love to hear it; maybe you can do a proof of concept on that fork you're working on and let us all know how it works out.
> @Laurelai Well, we can certainly take a look at what you come up with in your fork. I'm always on the lookout for good changes to port upstream.
I too demand people fix the product I'm making.
> Believe me, Laurelai, I'd rather not have to notify you in the event I set up a new instance, and have you go on about me for a week or whatever you do (and did last time), just so I can ensure starrevolution's instance blocking works fully. But there isn't an easy solution to "allow open federation" and "but prevent some people from getting the statuses", and as someone who has been trying to learn enough about the underlying protocol to make my own proposals, as well as seek out people with actual sound ideas and solutions to the problem, it involves a lot more than yelling at a developer about it. Publishing block lists creates more problems than it solves, so your proposed solution is both half-baked and ineffective, and would lead to more drama and bad-faith behavior than exists now while being trivial to circumvent.
> If you do come up with a good solution, maybe a cryptographic token scheme or something that still allows easy open federation but prevents blocked instances from decrypting the blocking instance's content, by all means we'd love to hear it; maybe you can do a proof of concept on that fork you're working on and let us all know how it works out.
Weird, because you are one of the harassers I was talking about when it comes to instance blocks not being good enough and people rallying third parties into said harassment.
Well, you and the old bofa instance and the instances those people moved to.
Plus, you know, if you don't want me complaining about you harassing me, don't harass me.
> And as I said earlier, it has an adverse effect: notifying hostile instances that you are blocking them.
It's already arbitrarily easy for them to know they have been blocked, so it seems like a non-issue.
I mean, considering I want nothing to do with you and have blocked your instance so thoroughly that it's the one exception on the "experimental instance with no instance blocks" instance, while you harass me and my family on Twitter and pin threads making wild accusations about me, I'm not sure who gets to call whom the harasser here.
> I mean, considering I want nothing to do with you
And yet here you are.
So you gonna submit that patch yet?
> I too demand people fix the product I'm making.

Could you please clarify the so-called "fork off" movement you organized? Because as I understood it, it was centered around taking over my work as a developer, so *you* would be the one making the "product" in an independent fork. Under those circumstances, it does not seem outlandish to expect that you would be implementing the features you want without my participation. 🤔
I didn't organize it. Your former community manager did, which says a lot about you, by the way. I just made some toots supporting the idea. In fact, I agreed not to hold any position within the organization for at least a year. You are barking up the wrong tree.
I know you tend to like being in leadership positions and otherwise refuse to participate, due to how difficult it is to control members bringing up your past misdeeds, but maybe you could consider participating in the "fork off" movement without being a leader for once.
> I know you tend to like being in leadership positions and otherwise refuse to participate, due to how difficult it is to control members bringing up your past misdeeds, but maybe you could consider participating in the "fork off" movement without being a leader for once.
All I did was toot support for them and give them advice on how not to wind up a dictatorship, then bow out to let them do their thing. If I had participated, people like you would be screaming at the top of their lungs that everyone involved is canceled because I was allowed to participate.
You don't know me, Anna; you never did. Now leave me alone.
Damn straight, and I never want to; what I know about you is too much as it is. Stop harassing Eugen and go away unless you have some actual solutions to bring to the table.
OK, so, ultimately, you won't be able to prevent people from copying and passing around your public toots no matter what. That being said, there are things we can do to prevent them from federating to blocked instances, or at least make it harder.
I can see two ways of handling that:
The first is the solution we've been talking about. Basically, we tell the blocked instance that we are blocking them, so that they can help us enforce the block. This is relatively easy to do, but extremely easy to bypass. The main drawback, besides this solution's very relative effectiveness, is that we would be telling potentially hostile instances that we are blocking them, thus potentially making ourselves a target.
This approach still has value, though. For instance, consider very big instances with poor moderation, but whose admins are not interested in actively being assholes. Such instances can still be an issue to others, and the blocking mechanism discussed here would help.
Technically, this is easy to implement; the hardest part will probably be agreeing on a way to represent this blocking mechanism and convincing other implementors. (One conceivable shape is sketched after this list.)
Failure from other instances/software to implement this would result in ineffective blocks.
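Purely as an illustration of the representation problem: since ActivityPub has no concept of an instance, even the signal itself would have to be invented. One conceivable shape (every identifier below is hypothetical) is a Block activity exchanged between instance-level actors:

```python
# Hypothetical wire format for the "asking nicely" signal: a Block
# activity between instance-level actors. Nothing here is specified by
# ActivityPub today; instance actors would have to be defined first, and
# honoring the signal would remain entirely voluntary.
block_signal = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Block",
    "actor": "https://blocker.example/actor",   # hypothetical instance actor
    "object": "https://blocked.example/actor",  # hypothetical instance actor
    "to": ["https://blocked.example/actor"],
}
# Delivered to blocked.example's shared inbox; a hostile admin can
# simply ignore it, which is the weakness discussed above.
```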
Another approach is to authenticate every fetch, so that fetches from blocked instances can be denied. This does not require explicitly notifying the blocked instance that they are blocked, although they could still figure it out themselves by manually comparing what is visible on public profiles with what they can fetch from their instance.
This is much more difficult to implement, and failure from other instances/software to implement this would result in unfetchable/unboostable public toots, probably increased workload and other federation issues.
This is also easily bypassable, but not as easily as the previously discussed solution. Furthermore, if whitelist-based federation is implemented, this could be an effective solution.
EDIT: The main thing we need to do before implementing that second solution is to add support for special actors representing instances, then sign every fetch with such an actor (unless fetching on behalf of a specific user, as we sometimes already do). This is a lot of work, but such a special actor could also be used for other features, such as subscribing to a friendly instance (a bit like relays).
EDIT 2: That second solution also requires that we stop LD-signing toots, which will prevent reply forwarding (or, alternatively, reply forwarding would have to be implemented differently, less efficiently).
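For a sense of what the fetching side could look like, a minimal sketch (placeholder URLs and key; assuming the HTTP Signatures construction Mastodon already uses for deliveries, reused for GETs by a hypothetical instance-level actor):

```python
# Sketch: a GET signed by a hypothetical instance-level actor using the
# "(request-target) host date" HTTP Signatures scheme. KEY_ID and the key
# material are placeholders; this is not existing Mastodon code.
import base64
from datetime import datetime, timezone
from urllib.parse import urlparse

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_ID = "https://myinstance.example/actor#main-key"  # hypothetical

def signed_get(url: str, private_key_pem: bytes) -> requests.Response:
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    parsed = urlparse(url)
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    to_sign = (f"(request-target): get {parsed.path}\n"
               f"host: {parsed.hostname}\n"
               f"date: {date}")
    sig = key.sign(to_sign.encode(), padding.PKCS1v15(), hashes.SHA256())
    header = (f'keyId="{KEY_ID}",algorithm="rsa-sha256",'
              f'headers="(request-target) host date",'
              f'signature="{base64.b64encode(sig).decode()}"')
    return requests.get(url, headers={"Date": date, "Signature": header})
```

The serving instance can then resolve the keyId, verify the signature, and decide whether that domain is allowed to see the toot.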
Enough of this off-topic personal squabbling.
I am in favor of this change, even if its effects would be mostly symbolic. Had this been implemented early in Masto's popularity, it would have had a greater effect as folks upgraded to new versions of Masto with this in place, and malicious admins would have had to either patch it out, stick to old Masto editions, or use GNU Social.
However, alternate fediverse software has both gained in popularity and is more strongly associated with the "bad half" of the fediverse, so those instances would all continue to ignore the blocked_by.txt file.
It looks like a suggested mechanism for letting instances know they have been blocked was posted in the time it took me to compose this message, so I won't ask for further clarification there.
The biggest failure case I can see is the "chan of boosts" scenario.
The bigger problem is that while you can set up a system that authenticates fetches, ActivityPub is... leaky. Let's say I have Laurelai's instance blocked (because I do) but she doesn't have mine blocked (this part is thankfully fictive), and I reply to a user on knzk, which neither of us has blocked. When Laurelai's instance loads the thread from knzk, knzk will reply with the contents of the thread, including my reply, and as a matter of course it gets slurped into the database. I am to believe this is how pretty much all of my posts end up on various instances that I have blocked and would prefer my posts not be on. (Although I'm not even sure this is entirely it, as I could load posts from my instance on _counter.social_ of all places by URL despite having it blocked, and that instance doesn't even federate correctly at all. I'm pretty sure instance blocks at this time do not reject incoming requests from blocked instances, not to mention the public-facing IP and the outgoing-request IP might differ due to using Cloudflare, etc.)
This is where I think that for a system to work, it would need to transmit only encrypted statuses between instances, which could only be unlocked by requesting a key directly from an instance, as a particular instance. It would also have to be backwards compatible with existing instances that only use LD signatures, which would be the hardest part of this; it wouldn't really be secure until the majority, if not most, of the software that uses ActivityPub implements this scheme. I am not a cryptography expert, so I do not have a proposal for how to do this at the lowest levels, but as far as I can tell this kind of method is about the only way to do it that isn't trivial to bypass and still doesn't break open federation.
E: @PubliqPhirm for tut-tutting "personal squabbles" you sure are quick to badmouth "the bad half" of fediverse software for petty reasons. A Pleroma developer is so far the only one who's taken the time to sit down and explain this to me and also show me some of the ideas and solutions they proposed to solve this issue. Nobody spending their time and effort making fediverse software is trying to make the fediverse a worse or more dangerous place.
@witcheslive I believe the second solution in my previous message does solve the case you are talking about, without involving more crypto. However, I forgot about another requirement: you have to not LDSign messages, which means some features such as reply forwarding won't work (or will have to be implemented in different, less efficient ways)
@ThibG this gets into the thick of things that I admittedly have only a surface-level (if that) understanding of, but does LD-signing prevent a system where instances pre-negotiate keys to decrypt each other's statuses, so that statuses can otherwise be served up as they are now, but can only be read (and thus stored in the database) if they can be decrypted with said pre-negotiated key?
While it would be a pretty big challenge to get this adopted by enough of the fediverse and server software for it to be actually useful, I'm not convinced there's a way to do this that doesn't eventually require breaking changes, or server admins having to choose when to federate only with instances that support this system.
Encryption is yet another layer of complexity, and I don't think it brings anything to the second proposal.
Sorry if we're talking about slightly different scopes here, I'm generally working from the assumption that we're never going to solve the right-click-open-in-private-window problem as the only real solution to that is a locked account and followers-only posts, unfortunately. Ultimately this is an open federated network and there's no way to prevent public statuses from being public.
However, I think a problem that can be--and must be--solved is that blocked instances can see and store statuses and really entire profiles and post histories of accounts on instances that block them. It should be extremely nontrivial if not downright impossible for a blocked instance to see and thus store posts on an instance that blocks it.
If you pre-negotiate keys that means you have a finite number of instances to negotiate keys with. A whitelist. The solution I proposed earlier does not need additional encryption to be effective if there is a whitelist: if an instance tries to fetch a toot, it has to authenticate as a whitelisted instance.
@thibg one minor addition: you've listed
1) asking instances nicely to respect filtering policies that are not theirs (which imo is an explosion in complexity and also easily ignorable if unimplemented)
2) authenticate every fetch (specifically you propose an instance actor that fetches and signs public content?)
with regards to point 2, correct me if i'm wrong, but wouldn't this require / more easily be done by instead fetching directly from the authoring instance? or making the authoring instance more responsible for its own content? since the problem is that content leaks via mutually-linked instances, then the only solution i can see is to stop forwarding content from other instances
fetching public content directly from the source instance can then be blocked by firewalls, even if it's otherwise unauthenticated. of course, the caveat is that an instance could switch IPs, but this is a much higher barrier than the current one (an unauthenticated fetch via a mutual unblocked instance).
going one step further, it seems like centralizing interactions around the instance/domain level is really what a lot of people seem to be indirectly asking for, given that a fully promiscuous open-world federation is not the goal of everyone -- interoperability is more important. as it stands, requests like whitelisting, disabling interactions, etc. are all untenable because of the promiscuity of activitypub objects within mastodon's current implementation. the authority on local objects is never fully within the local domain.
i don't think federation is fundamentally flawed, but it needs to be considered to what extent certain parts of the model should be centralized/decentralized. if mastodon was purely on the user level, with instances acting only as stores of content and not providing a community experience, then it would make more sense to push in a decentralized direction. but as mastodon is used with the expectation of community, that creates a level of centralization that creates a fundamental contradiction in authority.
postscript: i feel like the current tension within this issue is also perhaps due to the magnitude of the request -- it is not at all a trivial thing, and will require massive rewrites and rearchitecting of mastodon, in a way that will break all currently existing deployments of software built on the assumption of forwarding and relays. it's ok to ask for that, but do realize it is not a minor request.
@trwnh yeah, that's what I omitted in this comment (I'll edit it) but brought up later: stop LDSigning, so that content has to be fetched (with an authenticated fetch) from the originating instance, which can then decide to block
@ThibG if it's about LD-sigs then maybe the best approach would be to create a new "Logged-in" privacy option that does what you said. And then people concerned about the promiscuity of their posts could use this new "mostly public but require auth" post privacy. requiring auth would solve the "right click private window" as well.
@trwnh not LDSigning is just part of it, you also have to authenticate fetches, which is the part that requires a lot of work, and from every implementor
EDIT: But yes, this could be a per-user option
well, someone has to do it first, i guess. i think it's fine to have those statuses unfetchable by noncompliant software. that probably is what people are expecting anyway
> If you pre-negotiate keys that means you have a finite number of instances to negotiate keys with. A whitelist. The solution I proposed earlier does not need additional encryption to be effective if there is a whitelist: if an instance tries to fetch a toot, it has to authenticate as a whitelisted instance.
Sorry if I'm not articulating this properly. It wouldn't necessarily be a manual process: as it is, instances know which other instances they have seen and federate with, but every so often (and this would probably be configurable per instance) instances would have to check in with each other to renew their keys. This is where instances could check that the requesting instance isn't on the defederation list and deny a renewed key. There would be associated mechanisms for re-keying early or revoking tokens or whatever it might be called (usually triggered automatically when an instance is added to the blocklist) and all that good stuff.
Edit: This is where I think it might be a little less breaking, as it could be layered on top of the existing promiscuous implementation, but with this added layer the instances involved have to consent, by way of key exchange, in order to read statuses obtained promiscuously. It could also be phased in over time, with instances allowed to choose whether or not they publish without this added layer, depending on when they feel comfortable cutting off instances that have yet to update to this model.
@witcheslive alright, but I really don't think the encryption brings anything here. With the second proposal, the authoring instance decides where toots go: since they aren't LD-signed, they aren't forwarded anywhere, and if an instance is interested in fetching them, it has to authenticate, so it can be checked against the blocklist. The way to bypass this is to spin up a "fake" instance, but that is also possible with your scheme, and it can be avoided by using a whitelist.
The biggest technical difficulty is to figure out what the actor representing an instance should be, imo.
Ultimately, when you're publishing public statuses, there's going to be some way around it, but it would be much more effort to circumvent with something like this; nor would it be impossible to figure out that a mysteriously blank instance is requesting keys, so there'd be counterplay here. This is really all I'm looking for: removing the lazy passive surveillance that comes out of the box with any existing fediverse software.
So is this going to be reopened or will @Gargron just open a new issue or PR and claim divine inspiration even though people here have suggested implementation methods?
> So is this going to be reopened or will @Gargron just open a new issue or PR and claim divine inspiration even though people here have suggested implementation methods?
We both know the answer to this.
Perhaps the issue is that Mastodon is not the software solution to meet the general needs expressed here. It sounds like a more centralized, closed solution would better fit this unique use case. Perhaps a password protected forum software like Discourse or phpBB. Another solution could be BuddyPress which is a Wordpress plugin that allows users to have profiles, upvotes, and send direct messages while retaining the publication features of Wordpress. That also allows passwords and you can share the RSS feeds with your friends.
Might the blocklists be shared (or stripped away) via relays? I am not sure whether relays add significant "leakage", or whether federation without relays is leaky enough that relays do not add significantly more leakage to the system.
@PubliqPhirm I am to understand relays don't add much unique to the leakiness other than it is somewhat equivalent to sending highly pressurized water down a leaky pipe, causing it to leak significantly more, but it is still fundamentally the same leak.
@witcheslive it's less that activitypub is a leaky pipe and more a question of what "public" fundamentally means. the pipe isn't leaky -- it's secured both ways. but anyone can build an equally secure pipe downstream.
@PubliqPhirm relays aren't any more or less leaky, the question is what is flowing through the pipe in the first place, and the expectations around it. this is why broadcasting which instances block which is an explosion in complexity.
Say Alice gives Bob a message that Alice indicates is "public". so Bob assumes they can pass the message onto anyone who asks, right? because that's what "public" means. but actually, Alice dislikes Eve and Mal. so now Bob has to keep track of that, but Bob has no idea who's asking in the first place -- public means public, so Bob didn't lock the message in a box, they put it on a billboard in their town. Eve and Mal are banned from entering Alice's town so they can't see any billboards there, but they can freely visit Bob's town and read the copied billboard without telling Bob who they are.
Now expand this to thousands of instances, many mutual with each other, some further down the chain. how can you propagate thousands of different policies to thousands of different nodes efficiently? it's much easier for Bob to give up and say "look, i have this message from Alice but i don't want to keep track of who Alice hates this week, so go to Alice's town and read it there. it's public to anyone who can visit Alice's town. i'm not responsible for this message, if Alice wants to ban certain people then Alice can decide for themself at the time of the request."
downside being that now Alice has a lot of traffic to Alice's town and must deal with that increased traffic. this is simply the tradeoff to be made if Alice wants to fully control the flow of data (assuming no bad actors).
This is why I suggest Alice give Bob an encrypted message, along with a key that only he can use to unlock it and any other message of hers that Bob might come across; she then similarly hands out keys to whomever asks to read her messages, but can also deny keys to whomever she hates this week. So if Bob gives Steve, Grymlyl, and Euler the same message Alice gave him, they will each need a key from Alice to read it, even though Bob gave it to them. Steve got a key a couple of days ago; Grymlyl has never heard of Alice's instance, so they ask for a key by initiating an automated federation request; and Alice doesn't like Euler this week, so Euler doesn't get a key and is thus unable to see the message.
that would require changing the keys every week, though. it doesn't really make sense to claim something is "public" if it has to be encrypted and invalidated regularly. if it was a house, it makes more sense for alice to keep their own keys and ask people to knock on their door, instead of giving out new keys every week to thousands of people.
Computers are fast nowadays. I don't think checking in with an instance for a handshake at any period of more than a couple of days is going to be an undue burden even for the raspberry pi instance. My rather well-federated instance knows of about 3400 instances, if the period was one week it would have to do a handshake/key exchange once every roughly 3 minutes. The process shouldn't take more processing than I don't know, processing 3 incoming statuses? Probably (much) less? Posting a single status is a much more expensive operation than this would be.
"computers are fast" isn't really a reason to use a less efficient and more complicated scheme. a status only has to be fetched and cached once, not repeatedly fetched regularly and forever. in practice, most public statuses are going to be delivered to followers, and random drive-by fetches are much less frequent. of the 3400 instances yours is aware of, how many have at least one follower subscription on them?
Did I make what I'm proposing sufficiently clear? This would not be on a per-status basis; it would just require servers to establish a formal relationship in order to decrypt each other's statuses. The overhead would be in storing an encrypted version of the statuses as well as a plaintext version, or in decrypting the status per retrieval, depending on implementation (probably the former, as that seems much cheaper).
Of course whatever we come up with is going to have additional overhead and cost; the question is more about how this might be done without it being an undue amount. Dismissing an idea based on assumptions about additional overhead, when a reference implementation hasn't even been designed, never mind benchmarked, seems a bit premature. Also keep in mind that what I'm spitballing here is much more efficient than some of the other, more basic ideas, such as requiring every status to be retrieved from the instance it came from, and is an attempt to keep the performance benefits of ActivityPub's promiscuity intact while still being much more secure and respecting the consent of the instance administrators involved.
I mean, admittedly I don't have the full thing mapped out myself; there's a reason I haven't submitted a proposal yet, and that's because while I've been thinking about this quite a bit, I don't have the nuts and bolts for a formal proposal on how to actually accomplish it. I still need to learn basic things like what kind of encryption model can even do what I want, or how LD-signing actually works, so I can be more helpful than throwing out broad strokes. As such, I'm more interested in why the general idea is good or bad, not in dismissal based on implementation details that nobody has proposed yet.
going back to the pipe analogy: it is simpler to ask people to build direct pipes, rather than trying to prevent stuff from flowing downhill. i'm not trying to dismiss anything, just walking through it theoretically before anyone puts in any implementation effort.
if we draw a comparison to email: say you wanted to establish a mailing list that people can subscribe to. instead of forcing everyone to resubscribe every week, you can instead handle subscriptions and delivery centrally.
or even more directly: activitypub is a web protocol. so handle it like a web request: go directly to the source instead of a cached copy. this is really no different than linking to an archive.org link instead of the original page. the firewall policies of your server have no bearing on archive.org's firewall policies.
i guess with regards to encryption, designing a complicated scheme is unnecessary when the simpler thing to do is to stop making them infinitely relayable. LD signatures, or linked data signatures, are what allow objects to be passed around without refetching from source. basically, "i've signed off on this to prove it came from me, and that its contents are valid." if the signature is invalid or missing then you have to go get it from the source. and the source can then allow or deny as it pleases.
you can sign stuff during transport, but decryption happens once. if you want a remote domain to reauthenticate, you can assign one key per transport and then revoke that key once that remote domain is blocked.
OK, then what I'm proposing isn't really a "complicated scheme"; it's adding an additional aspect to the LD signature, or another layer on it. Specific details might vary (again, I am not very well versed in the nuts and bolts here), but given what you said, I am thinking along the lines of keeping LD signatures as they are, except instead of signing the plain text (or data) of the payload, they sign an encrypted version of it. This encrypted version can be forwarded, disseminated, passed around, go through leaky pipes, and have all the nifty conveniences of how ActivityPub transport and dissemination work now.
The encryption would require a type of token or revocable key (again, I have not gotten to where I can research which method would be best for this) to decrypt, which must be obtained directly from the instance the status is from. Obtaining that key would be a fairly inexpensive handshake/key-exchange process. It would have to be renegotiated periodically, of course, as the keys/tokens would expire after a period of time and could be invalidated, which is what would happen if, say, an instance were added to the block list. This would have the added benefit of making it very easy to implement whitelist federation for closer-knit or semi-private social networks.
Another feature that would be nice, if feasible, would be a cryptographic watermark on decrypted statuses recording which key was used to decrypt them, as a protection against keys obtained dishonestly.
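As a very rough sketch of that idea (all names invented here, and deliberately glossing over the actual key-exchange protocol, which is the hard, unsolved part), the moving parts could look like this:

```python
# Sketch of the envelope-encryption idea under discussion: encrypt each
# status once with a content key, then hand that key only to instances
# with a live, non-blocked relationship. Hypothetical, not a proposal.
from cryptography.fernet import Fernet

class EncryptedOutbox:
    def __init__(self):
        self.content_key = Fernet.generate_key()   # rotated when blocks change
        self.authorized: set[str] = set()          # domains holding the key

    def encrypt_status(self, text: str) -> bytes:
        # This ciphertext can be relayed and forwarded freely; it is
        # opaque to anyone without the content key.
        return Fernet(self.content_key).encrypt(text.encode())

    def request_key(self, requesting_domain: str, blocklist: set[str]):
        if requesting_domain in blocklist:
            return None                            # blocked: no key, no content
        self.authorized.add(requesting_domain)
        return self.content_key

    def revoke(self, domain: str):
        # Revocation forces a key rotation for future statuses; anything
        # already delivered stays readable by old key holders.
        self.authorized.discard(domain)
        self.content_key = Fernet.generate_key()
```

The sketch makes the trade-offs visible: rotation means re-encryption, and nothing stops a key holder from re-sharing plaintext, which is the objection raised below.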
Ok, I've seen some confusion in a few earlier messages, and I realize how a Mastodon instance can know about a toot is not clear for everyone. In this comment, I'll try to give some more background on what is an instance, how Mastodon instances get toots, and discuss some solutions
Unless I'm forgetting anything, there are three possibilities:
Mastodon considers that a remote instance is a domain name shared by multiple accounts.
For most purposes, that works, but unfortunately, the concept of an instance is an implementation detail and not actually part of ActivityPub at all, so signaling between instances (and not accounts) has no basis in the spec and is yet to be designed and specified. This includes the "asking nicely" approach and the "key negotiation" part of the "encryption" approach below.
Currently, blocking an instance in Mastodon discards any activity from the blocked instance and never sends anything to it, which means it never gets toots through 1., but it can still get them through 2. and 3.
Asking the blocked instance nicely to discard everything coming from your instance does not actually change how that blocked instance can know about your toots; it merely asks them to not do anything with them.
The good things about this approach are that it's easy to implement if we find a good way to represent the information, and that it does not affect performance or features in any way.
The bad things are that you actually notify hostile instances that you are blocking them, and that you rely on them honoring your request to discard your activities.
This means it is probably a good solution for blocking instances whose admins are not actually hostile towards you but whose moderation is lacking, and probably not a good solution for blocking instances whose admins are actively hostile to you.
We could easily "solve" 2. by not LD-signing toots, thus losing reply forwarding and relays. For reference, Pleroma does not LD-sign, nor does it forward replies. It does have relays, but they work less efficiently from a protocol perspective, as they require the receiving instance to fetch (so, method 3.) every unknown toot from its originating instance, which increases workload for everyone.
The only way of "fixing" 3. is to require every fetch to be authenticated, thus allowing the originating instance to decide who gets access to the toots. But that requires changes in every implementation, changes that would slightly increase the workload of the instance fetching the toot and very significantly increase the workload of the instance hosting it each time there is a fetch attempt. Furthermore, unless there is a whitelist of instances allowed to see such a public toot, a hostile instance could still spin up a "fake" unblocked instance to fetch the toots even if the "real" instance is blocked. Unfortunately, that is very cheap and easy to do (though not as cheap as bypassing the "asking nicely" approach).
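Sketched from the serving side (helper logic simplified, and the actual signature verification elided), the check could look like this:

```python
# Server-side sketch of "authenticating every fetch": derive the
# requesting domain from the HTTP Signature's keyId and refuse blocked
# domains. Verifying the signature itself (fetching the actor's public
# key and checking the signed string) is deliberately elided.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"badactor.example"}  # placeholder blocklist

def authorize_fetch(signature_header: str | None) -> int:
    if not signature_header:
        return 401  # unsigned fetches are refused outright
    params = dict(part.strip().split("=", 1)
                  for part in signature_header.split(","))
    key_id = params.get("keyId", "").strip('"')
    domain = urlparse(key_id).hostname
    if domain in BLOCKED_DOMAINS:
        return 403  # blocked instance: deny access to the toot
    return 200  # ...after actually verifying the signature
```

The "fake instance" bypass mentioned above corresponds to simply presenting a keyId on a fresh, unblocked domain.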
If I understand it correctly, @witcheslive's proposal is to encrypt every toot, and then control who gets the key.
This would not change how toots can be forwarded, relayed or fetched, but those toots would be encrypted, requiring every remote instance to have the proper key.
The main issue I see with this solution, in addition to the extra layer of complexity and the fact that it would completely break federation with any instance not implementing it, is that we would need to change keys every time the list of blocked instances changes.
Also, this shares one of the issues of "authenticating every fetch": unless using a whitelist-based federation model, a hostile instance could still spin up a "fake" unblocked instance just as easily to get a decryption key.
I have seen another proposal, which is to nicely ask the instances you do not block to not forward toots to the instances you do block. This leaks the list of instances you block, is significantly more complex and costly than asking the blocked instance directly, relies on every not-blocked instance honoring your request, and only addresses the forwarding of toots to blocked instances.
The same can be achieved much more easily and reliably (but at the cost of reply forwarding) by simply not LDSigning toots.
Something that can be done right now, without changes to remote instances, is basically "authenticating every fetch", but trying to guess which instance a request comes from when it is not signed.
I can think of two ways of guessing which instance a request is coming from: its User-Agent header (Mastodon's HTTP client advertises the originating domain there), or its source IP address, compared against the addresses that blocked domains resolve to.
For both of those options, you would have to disable any caching of public toots as well.
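Both guesses are easy to sketch and easy to defeat (the User-Agent is entirely client-controlled, and IPs are ambiguous behind shared hosting or CDNs, as discussed earlier in the thread):

```python
# The two guessing heuristics for unsigned fetches. Neither is reliable:
# a client can send any User-Agent, and mapping IPs back to fediverse
# domains is ambiguous. The UA pattern is an assumption based on
# Mastodon's default client string.
import re
import socket

def domain_from_user_agent(user_agent: str):
    # Mastodon's client UA resembles:
    #   "http.rb/3.x.x (Mastodon/2.x.x; +https://example.com/)"
    match = re.search(r"\+https?://([^/\s)]+)", user_agent)
    return match.group(1) if match else None

def ip_matches_domain(remote_ip: str, domain: str) -> bool:
    infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    return remote_ip in {info[4][0] for info in infos}
```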
It's hard to come up with a way to do this that doesn't break something somehow, but ideally what I'm proposing would offer some level of control over it. Your federation panel could give an overview of which instances are compatible with this encrypted mode, and options could be added to let an admin (and perhaps even individual users) decide how comfortable they are federating with instances that are still using OStatus or are not yet running with support for encrypted statuses.
Come to think of it, this may work better as something like OMEMO in XMPP, in that both parties have to have it switched on and configured correctly for it to work properly, and it could even be a per-status option. This could potentially reduce a lot of overhead, as many users who do not care about this security wouldn't even bother switching it on. To think about it even further, maybe the original goal here could be achieved by working on support for the oft-wished-for end-to-end encryption?
I'm also surprised that there's next to no concept of the server/instance in ActivityPub; it seems like having this concept could offer solutions, or better solutions, to a lot of problems, not just this one. Might this be something worth proposing an update to the spec for?
E2E and public are not compatible. If everyone knows the key, it might as well not be encrypted; the only difference is CPU cycles. That's the same as "asking nicely".
The spec defines actors, and each actor has an inbox and an outbox. In the vision of some of the spec authors, there are very small instances, not what we have now. There's one thing that is server-related, and that's the "shared inbox" for the whole server.
As a client developer, the only thing I would ask for regarding E2E is some way to nicely share public keys (it can be done now, but awkwardly). Otherwise E2E is not a server concern, nor should it be. And it still assumes a fixed number of recipients.
What would be nice for this discussion is a comparison with other federation protocols and how they solve this (if they do).
Diaspora solves this by simply having the original server handle all replies/comments and redistribute them itself. That means you can easily block bad-actor nodes, and they can't fetch content from elsewhere. The only question is whether this trait is desirable for "public" posts. As people continually request things like disabling replies and preventing reply-fetching from mutual sources, it really does seem like the most direct and straightforward way to adhere to people's expectations of what a social network should do for them. Mailing lists are sent out via one server, mass SMS texts are sent out from one number, etc.; there's a lot of precedent for this distribution strategy in the way email/SMS/the web works.
Inbox forwarding: https://www.w3.org/TR/activitypub/#inbox-forwarding
> The following section is to mitigate the "ghost replies" problem which occasionally causes problems on federated networks. This problem is best demonstrated with an example.
> Alyssa makes a post about her having successfully presented a paper at a conference and sends it to her followers collection, which includes her friend Ben. Ben replies to Alyssa's message congratulating her and includes her followers collection on the recipients. However, Ben has no access to see the members of Alyssa's followers collection, so his server does not forward his messages to their inbox. Without the following mechanism, if Alyssa were then to reply to Ben, her followers would see Alyssa replying to Ben without having ever seen Ben interacting. This would be very confusing!
Per Dennis Schubert: https://schub.io/blog/2018/02/01/activitypub-one-protocol-to-rule-them-all.html
> Yes, indeed, that would be very confusing.
> Let us have a look at the implementation in the diaspora* protocol first. We have a pretty easy rule: Whenever Bob interacts to something Alice shared, that interaction will be sent to the Alice's host and the Alice's host alone. [...] Alice's host is the one who delivered the shareable, so it feels somewhat natural to also ask Alice's host to distribute the interactions.
> [...] The way of sending interactions outlined by ActivityPub does not solve the ghost reply issue. If anything, it creates more complex and confusing edge-cases, since some interactions will be forwarded, while others will not.
> As per the ActivityStreams spec, all objects have a replies property. So, a more sensible, reliable, and even more ActivityStream'y way of handling replies would probably be adding the interaction to the replies collection and sending an update.
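To illustrate the quoted suggestion (all identifiers below are placeholders), the origin server would fold Ben's reply into the post's replies collection and push an Update, so recipients never depend on third-party forwarding:

```python
# Illustration of the replies-collection approach quoted above: instead
# of third parties forwarding Ben's reply, Alyssa's server adds it to her
# post's "replies" collection and broadcasts an Update. All IDs are
# placeholders, not real objects.
update_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Update",
    "actor": "https://alyssa.example/users/alyssa",
    "object": {
        "id": "https://alyssa.example/notes/1",
        "type": "Note",
        "replies": {
            "type": "Collection",
            "items": ["https://ben.example/notes/42"],  # Ben's reply, by reference
        },
    },
}
```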
I think it might actually be more helpful to consider relayability as an added/optional trait of messages that can be opted in or out of, rather than the default for how all messages are treated. Relayability would be mutually exclusive with access control unless you add encryption, but then you need to design it so the decryption key isn't relayable, and you also need a way to revoke decryption keys, which would mean cycling out all encrypted content periodically... at that point, you may as well design a fully decentralized system where everything is publicly replicated in encrypted form, a la the Spritely / Golem demo: https://gitlab.com/spritely/golem
I've read the whole thread, and I may be wrong or missing something, but since we're talking specifically about domain blocking here, maybe there is a way that is not too resource-hungry or complex, based on the "authenticate every fetch" approach.
One way to do that would be to have an instance-level actor for each instance and to authenticate fetches with this actor. Then, if the receiving instance has blocked the requesting domain, simply refuse the fetch.
We have instance actors in Funkwhale, and we advertise the information in our Nodeinfo endpoint. Example:
```json
{
  "version": "2.0",
  "software": {
    "name": "funkwhale",
    "version": "0.18.2+git.e243c792"
  },
  "protocols": [
    "activitypub"
  ],
  "usage": {
    "users": {
      "total": 16,
      "activeHalfyear": 1,
      "activeMonth": 1
    }
  },
  "metadata": {
    "actorId": "https://demo.funkwhale.audio/federation/actors/service"
  }
}
```
The `actorId` field tells anyone fetching the nodeinfo data that the instance will authenticate its fetches with that actor.
Then, from time to time (and the first time we interact with a domain), we fetch its nodeinfo and actor, if any.
Then, when an instance is blocked, you can drop any request authenticated with its actor.
What do you think about it?
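To illustrate the proposal, here is a rough sketch (not Funkwhale or Mastodon code; all names and plumbing are hypothetical) of what the receiving side could do: verify the HTTP signature on the incoming GET, resolve the signing key to its actor, and refuse the fetch when the actor's domain is blocked.

from urllib.parse import urlparse

def authorize_fetch(request, blocked_domains, resolve_actor, verify_signature):
    # resolve_actor(key_id) -> actor document (dict), and
    # verify_signature(request, public_key) -> bool, are hypothetical
    # stand-ins for real HTTP-signature plumbing. request.signature_key_id
    # is assumed to be parsed out of the Signature header.
    actor = resolve_actor(request.signature_key_id)
    if actor is None or not verify_signature(request, actor["publicKey"]):
        return 401  # unsigned or badly signed fetch: reject by default
    domain = urlparse(actor["id"]).hostname
    if domain in blocked_domains:
        return 403  # the domain is suspended: refuse to serve the object
    return 200      # signature is valid and the domain is not blocked

The point is that the block check happens before any object is served, rather than only when processing incoming deliveries.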
However, alternate fediverse software has both gained in popularity and is more strongly associated with the "bad half" of the fediverse, so those instances would all continue to ignore the blocked_by.txt file.
FWIW, Pleroma already reports blocks and other stuff in the nodeinfo. See https://catgirl.science/misc/nodeinfo.lua?kawen.space
@EliotBerriot this would be one implementation of what I was suggesting with the "authenticating fetches" proposal. This does still require not forwarding replies, and it still increases workload when remote instances fetch public toots, since those cannot be cached.
I've said it before and I'll say it again: LDSigs and forwarding reply objects in general are an optimization that is really not necessary.
You can either have real security, with real enforcement, or you can build all of these systems in the name of alleged performance gains (I'm not convinced) and have leaks all over the place.
Pleroma federates just fine with Mastodon and is built from the former philosophy. I believe that Mastodon can change its posture in the same way.
Regarding using an OMEMO-type technique, this is actually a horrible idea because it either obliterates deniability or it obliterates scalability. Mastodon keeps optimizing in a way which harms deniability but helps scalability; do you really think they would implement an OMEMO-style technique in a different way?
Side note re: authenticating all fetches: it makes using a proxy cache impossible (e.g. nginx, Varnish, or Cloudflare), which means all GET requests must necessarily hit Rails. It must be noted that at the moment, proxy caching provides massive value in handling traffic, especially for small servers.
@kaniini LDSigs are basically used for reply forwarding (which Pleroma doesn't do at all, last time I checked; that's annoying, as it's a missing feature rather than a performance issue) and for forwarding Delete activities around (which Pleroma doesn't do either, and which may be an issue). If Pleroma did reply forwarding without LDSigs, it would put even more pressure on servers due to the increased number of fetches.
And, hosting a single-user instance on a small ARM box, I can say performance is an issue when serving toots: getting hammered by dozens of instances resolving unknown toots causes noticeable slowdowns. And that's with a bit of caching, which we are suggesting to disable in most of the discussed solutions. (I'll admit that Pleroma would probably perform much better, but it's still an actual performance issue.)
I agree that OMEMO-type techniques for public-ish toots are a horrible idea. They would be cool for DMs and maybe even followers-only posts, but that's not the topic at hand.
At any rate, if it wasn't obvious from my previous post, I believe this is the correct solution:
1) Require all AS2 object fetches to be authenticated by default: the AP spec likely agrees with my view because they have proxyUrl for clients. Admins who do not care can turn off the authentication requirement (for caching).
2) Never, ever relay messages you are not directly responsible for.
If we can agree on this, we can plug all of the leaks, which would resolve the bug.
BUT, it is important, as @EliotBerriot observes, to use an instance actor to authenticate the fetches, as the instance is not acting as a user agent in this role, but as the instance itself. Fetching AS2 objects with a signature from a user actor is extremely inappropriate.
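For the sending side, a signed fetch with an instance actor could look roughly like this. This is a sketch using draft-cavage-style HTTP signatures as commonly deployed on the fediverse; the key ID, the key loading, and the header list are assumptions for illustration.

import base64
from datetime import datetime, timezone
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

INSTANCE_KEY_ID = "https://a.example/actor#main-key"  # hypothetical instance actor key

def signed_fetch_headers(method, path, host, private_key_pem):
    # Build the headers for an authenticated GET, signed with the
    # instance actor's RSA key rather than any user's key.
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    signing_string = (f"(request-target): {method.lower()} {path}\n"
                      f"host: {host}\n"
                      f"date: {date}")
    signature = base64.b64encode(
        key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
    ).decode()
    return {
        "Host": host,
        "Date": date,
        "Signature": (f'keyId="{INSTANCE_KEY_ID}",'
                      f'algorithm="rsa-sha256",'
                      f'headers="(request-target) host date",'
                      f'signature="{signature}"'),
    }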
However, none of this will ever come to pass because security takes a back seat to scalability.
@kaniini I am really concerned about performance if we disallow caching. But we should really add support for instance actors, and sign fetches with them (unless using a more specific identity when fetching on behalf of a user, but that's a corner case). Disabling caching and enforcing authentication of fetches could then be opt-in, rolled out if it does not harm performance too much, etc.
(Also, instance-wide actors would be useful for relay-like features and other stuff)
FWIW, you don't disallow caching: you change the overall caching strategy. Pleroma does do caching, but it does caching in a way where security is still the primary focus, that's why we use tools like Cachex instead of having nginx cache AS2 objects, etc.
I am just really frustrated with scalability being used to justify bad security praxis which, as has been demonstrated in this thread, results in harm to fediverse users.
@kaniini well, we do some caching at the Rails level too, but Rails is no Elixir :weary:
If it's expected that a status will be fetched multiple times in a short time range (for example, if it has just been posted and is not signed), caching it in memory would make tons of sense. I'm not a fan of this type of solution, though, because it is not very energy-efficient nor very resilient to instances going down.
@ThibG yes, this would have a performance impact.
But this does not have to be as bad as what you suggest. I mean, that's basically one DB request per fetch with a hot cache (to check the signature based on actor data), assuming you have short-lived caching (a few minutes) of the object representation based on its ID to avoid querying / reserializing the whole thing. That should be quite sufficient in common scenarios (fetching of toots by a bunch of instances at the same time). Even in Ruby/Rails, that should be pretty fast.
With a cold cache (unknown actor, uncached object), yes, it's heavier because you have to fetch the actor key, then query / serialize the object from the database. But this scenario will only happen when federating with new instances or with objects that are not requested often.
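As a sketch of the hot-cache scenario described above (the TTL and all names are assumptions, not Mastodon settings): cache the serialized representation for a few minutes, keyed by object ID, so a burst of fetches from many instances costs one signature check plus a dictionary lookup instead of a full query-and-serialize each time.

import time

CACHE_TTL = 300  # seconds; an assumption, not an actual Mastodon setting
_cache = {}      # object id -> (expires_at, serialized body)

def serve_object(object_id, serialize_from_db):
    # serialize_from_db is a hypothetical stand-in for the DB query and
    # JSON serialization of the object.
    now = time.monotonic()
    entry = _cache.get(object_id)
    if entry and entry[0] > now:
        return entry[1]                      # hot cache: no DB work at all
    body = serialize_from_db(object_id)      # cold cache: full cost, once
    _cache[object_id] = (now + CACHE_TTL, body)
    return body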
Diaspora solves this by simply having the original server handle all replies/comments and redistribute them itself. That means that you can easily block bad actor nodes, and they can't fetch it from elsewhere.
i'm calling BS on the "easily" here. How do you block bad actor nodes on diaspora, by IP? What happens when someone gets the bright idea to install a proxy?
The first three google results I see all claim that there is no way to block/mute bad actor nodes on diaspora short of an OS-level firewall.
With multiple people discussing how to actually accomplish some semblance of actual domain blocking, will this be reopened?
i'm calling BS on the "easily" here. How do you block bad actor nodes on diaspora, by IP? What happens when someone gets the bright idea to install a proxy?
It's certainly not fool-proof or non-circumventable, but the general idea is that all requests flow through the originating node, so you are free to have your own authentication scheme for access control. you are correct that diaspora has no pod-based access control except firewalling -- the diaspora protocol generally works on a user level, so you would have to suspend every single user from a bad pod if you didn't want to use a firewall. diaspora as a project seems to be ideologically in the camp of disallowing defederation from within the software, and this has become an extremely contentious point in the past few years because numerous bad actor pods have sprung up in that time. There also exist proxy servers which discard incoming content from blacklisted pods, and prevent delivery to them. So to clarify/restate: easy in theory, not in practice.
Is the discussion that @witcheslive started intentionally focusing on the stuff-pushed-through-AP case and ignoring the pop-open-a-tab-linking-directly-to-public-post-HTML case (as just general harm reduction to avoid bulk collection of statuses by blocked instances)? Or are we talking about literally scrubbing the frontend of public statuses, such that they won't even be _visible_ on HTML pages?
i don't think it's a requirement to generate an html document for every post -- even for "public" ones. this would especially be true if you created a privacy level that required users to authenticate before viewing the page. instead of serving up the page directly, mastodon could ask users to log in or oauth before viewing the resource. mastodon could even implement openwebauth so that authenticating across sites becomes automatic. it would then be the responsibility of the mastodon server to compare that authenticated viewer with the list of accounts not allowed to view the post, and serve the appropriate result. that would actually patch the "pop-open-a-tab-linking-directly-to-public-post-HTML case" for posts that had this "require authentication" privacy level.
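A minimal sketch of that gate follows; the helpers (render_html, redirect_to_login, not_found) are hypothetical stand-ins for real session, OAuth, or OpenWebAuth plumbing.

# A minimal sketch of the "require authentication before serving the
# HTML view" idea. All helpers are passed in as hypothetical stand-ins.

def render_status_page(status, viewer, blocked_account_ids,
                       render_html, redirect_to_login, not_found):
    if status.get("requires_auth"):
        if viewer is None:
            # No session and no OAuth/OpenWebAuth identity: ask to log in.
            return redirect_to_login()
        if viewer["id"] in blocked_account_ids:
            # Serve a 404 rather than a 403, so the post's existence
            # is not revealed to blocked viewers.
            return not_found()
    return render_html(status)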
yeah, i agree that there could be a specific, opt-in privacy level of "only logged-in people can see" which only pushes those statuses over AP. i don't think that _should_ be the default, because it would break the expectations of 99.9% of fediverse inhabitants (ie, "I can link to my public status and give that link to someone who is not on fedi and they can see it")
yeah auth-required maybe shouldn't be default, but "public" should warn/remind users that it means blocked users can see your posts if they log out or if their server fetches it.
I'm replying from e-mail, so I don't have the issue number handy, but one of the proposals I made for a privacy overhaul last year was this exact suggestion for a new definition of public that splits into a ”true public” that implies that you don't care if blocked users or search engines see it and a different setting that means it is available to anyone logged in and not previously blocked. If I remember, I'll look up the issue number and link it the next time I'm at my computer. If one of you finds the issue and links it before I do, thank you in advance.
"Blocks can be circumvented" is a piss-poor reason to forgo properly implement instance blocking.
Telling others to "do it yourself" is the height of arrogance.
Mastodon users deserve someone better.