This is not an alternative (replacement) to OCap. It can be used together with OCap, or standalone.
Many privacy-preserving tools aim to create noise and obfuscation: for example, AdNauseam and TrackMeNot, and arguably Tor (though I couldn't find anything describing how Tor does this - although it'd involve running an exit node) and I2P (again, I couldn't find anything about this, but it wouldn't need an "exit node").
I feel like Mastodon could benefit from doing something similar with user posts and blocked instances. Rather than sending a blocked instance real posts, send it "Your instance (foo) has been blocked by bar. Consider switching instances or contacting your admins." - either as a replacement for real posts (i.e. with the same ID; posts are supposedly immutable, so the blocked server would retain the blocked version of the post permanently), or as additional fake posts to obfuscate the user's interaction patterns.
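A minimal sketch of the substitution idea, assuming a delivery step that knows the recipient's domain. All names here (`payload_for`, the domains, the dict layout) are hypothetical and not Mastodon's actual code; the point is only that the placeholder keeps the real post's ID, so a receiver treating posts as immutable keeps the placeholder version:

```python
# Hypothetical sketch of swapping in a placeholder payload for blocked
# instances at delivery time. Not Mastodon's actual implementation.

REAL_POST = {
    "id": "https://bar.example/users/alice/statuses/1",
    "type": "Note",
    "content": "<p>the actual post text</p>",
}

def payload_for(recipient_domain: str, blocked_domains: set) -> dict:
    """Return the real post, or a placeholder with the SAME id if the
    recipient's instance is blocked. Because the id is unchanged, a
    server that treats posts as immutable retains the placeholder."""
    if recipient_domain not in blocked_domains:
        return REAL_POST
    placeholder = dict(REAL_POST)
    placeholder["content"] = (
        f"<p>Your instance ({recipient_domain}) has been blocked by "
        "bar.example. Consider switching instances or contacting "
        "your admins.</p>"
    )
    return placeholder
```

The same helper could also emit extra fake posts (with fresh IDs) to pad out interaction patterns, per the second variant above.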
Privacy on Mastodon is still severely lacking, and we were promised OCap a while ago, but that still doesn't seem to be ready. It makes me wonder: have there been any new privacy features since I was last here, besides authorized fetch? Additionally, this particular feature wouldn't be user-facing, thus (ideally) avoiding user confusion. It might behave "badly" when boosts are involved - the blocked instances might federate such blocked posts to other instances - but this is also intended to encourage other instances to join in on the block (or, ideally, to have some way of not trusting boosts from those blocked instances; or, unfortunately, to block the instance that blocked those instances), and to encourage mutual blocks. It also doesn't require protocol changes, as far as I know.
According to similar issues (https://github.com/tootsuite/mastodon/issues/13548 https://github.com/tootsuite/mastodon/pull/11562), explanations for bans, or any interaction that would reveal them, are not going to happen. Instead we send traffic into the void and encourage bans which you can't do anything about or know anything about.
that's how you leak posts to blocked instances.
additional fake posts to obfuscate the user's interaction patterns.
So you would never know if someone's post is real or not, as long as you don't know whether they blocked you? Wouldn't this completely destroy trust in anybody's posts?
Also, AdNauseam and TrackMeNot do not help against really precise targeting, as all your information is still in there, plus some extra noise. If someone knows what they are searching for, they will still find you, even with the exact same query as before, and all your private data is still in the system with some added noise on top. If I want to protect my information, this does not help and only uses more resources.
I don't believe it is possible to hide blocks and still openly federate with the network at the same time. If I am asking for a specific resource I know exists, and no one happily answers me while I still get served other resources, then I am probably blocked.
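The inference described above can be sketched as a simple comparison of fetch results. This is a hypothetical illustration (the function name and status-code handling are assumptions, not any real probe tool): the resource is known to exist because a third party can fetch it, so a failure for my instance alone suggests a block:

```python
# Sketch of the block-detection inference: a resource known to exist
# (a third party gets HTTP 200) fails only for my instance.
# Purely illustrative; not a real enumeration tool.

def probably_blocked(my_fetch_status: int, third_party_fetch_status: int) -> bool:
    """Infer a block: the third party sees the resource (200), while my
    instance gets a not-found or forbidden response for the same URL."""
    return third_party_fetch_status == 200 and my_fetch_status in (403, 404)
```

This is why hiding blocks while federating openly is hard: any difference in observable behavior can be probed.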
Please prove me wrong, but I just don't like the idea of filling my storage with not-too-useful data.
I'd much rather turn every real post into "you've been blocked. now go feel bad about it.", plus add additional fake posts saying the same, to ward off gab, than keep leaking the real posts to gab through other people's instances. Also, this is not meant to be used literally all the time, for every instance block - just for the instances you want mutually blocking you.
the former prevents gab from storing my (actual) data. the latter lets gab store my (actual) data when it gets to them.
(what does this have to do with "filling [your] storage with not-too-useful data"?)
Would this mean that if someone blocks my instance and I want to access their posts, my database gets filled with fake posts?
Also, how should instance a know whether instance b has blocked instance c without public blocks? And if I need to ask instance b whether it has blocked instance c, then I could enumerate that.
if someone blocks your instance and you want to access their posts, uh, don't? that's literally block evasion. just suspend them (to remove the unwanted data) and move on.
you'd know, by seeing the posts from instance b, shared with you by instance c, that instance c has been blocked by instance b. you can then filter instance c so it doesn't share those posts with you, or suspend it altogether, either with or without poisoning it as well.
than keep leaking the real posts to gab through other ppl's instances
Clarification: Posts don't leak to one server through a third server, because the look-up always goes to the origin. Boosts just share the ID of the boosted post, and any receiver has to go and look up what it is from its origin. At least as far as Mastodon is concerned. GNU social used to have a bug where it would gladly accept reshares of third-party posts without checking the origin of the post, so when one GNU social server added an internal word filter that replaced certain keywords with slurs, it would appear to other servers receiving those re-shares as if the people whose posts were re-shared through that server were using slurs. Technically someone could choose to return to a system like that, but it would not be in their best interest, I think, as it would completely erode the trust that any message actually comes from the account it claims to be from.
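The dereferencing behavior described above can be sketched roughly as follows. The dict shapes are loosely modeled on an ActivityPub Announce activity, but `resolve_boost`, `fetch_from_origin`, and the in-memory "origin database" are stand-ins, not real Mastodon internals:

```python
# Sketch of why boosts don't relay content: an Announce carries only the
# id of the boosted object, and the receiver dereferences that id at the
# ORIGIN server, never trusting the boosting server for the content.

ANNOUNCE = {
    "type": "Announce",
    "actor": "https://instance-c.example/users/carol",
    # Only the id of the boosted post, not its content:
    "object": "https://instance-b.example/users/bob/statuses/42",
}

# Stand-in for instance b's own storage, reachable only via a fetch
# to instance b itself.
ORIGIN_DB = {
    "https://instance-b.example/users/bob/statuses/42": {
        "type": "Note",
        "content": "<p>served directly by instance b</p>",
    },
}

def fetch_from_origin(object_id: str) -> dict:
    """Stand-in for an (authenticated) HTTP GET to the origin server."""
    return ORIGIN_DB[object_id]

def resolve_boost(announce: dict) -> dict:
    """The receiving server looks the boosted post up at its origin, so
    the boosting server (instance c) never supplies the content."""
    return fetch_from_origin(announce["object"])
```

Under this model, whatever instance b chooses to serve a given requester (real post or placeholder) is what that requester sees, regardless of who boosted it.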
oh. huh.
so it won't leak the fake posts to unrelated instances, then. that's... very interesting. (I actually thought that if the signatures matched you could do whatever, including forward posts from other instances.)
I have updated the OP to reflect this.
can we get this feature so we can apply peer pressure on other instances' users?