Right now, Mastodon downloads local copies of:
On these local copies, Mastodon can perform operations like resizing, optimizing, and creating thumbnails that fit Mastodon's UI - because the origin of the content can provide media in very large sizes that would severely impact end users' bandwidth and browser performance if displayed verbatim.
Moreover, bandwidth is not always cheap - it is capped at something like 1 TB/mo on DigitalOcean, and is very expensive on Amazon S3 - so hotlinking images and videos would severely impact owners of small instances whenever lots of users of large instances viewed their content from public timelines (or even just home timelines through boosts). It does feel fair that an instance's admin is responsible for serving content to their own users, rather than also to users of other instances, who should be their own admins' responsibility.
However, this has storage and legal implications. I would like to hear your thoughts on how this can be improved.
Potentially low-hanging fruit: you could reduce (but not eliminate) exposure if caching were disabled (or TTLs set very low, or caching limited to RAM only on a swapless machine) for content tagged #nsfw.
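To make that concrete, here is a minimal sketch of such a policy. The `sensitive` flag and the TTL values are assumptions for illustration; none of these names map to actual Mastodon internals.

```python
from datetime import timedelta
from typing import Optional

NORMAL_TTL = timedelta(days=14)          # hypothetical default retention
SENSITIVE_TTL = timedelta(minutes=10)    # or None to skip caching entirely

def cache_ttl_for(remote_media) -> Optional[timedelta]:
    """Return how long to keep a local copy, or None to hotlink instead.

    remote_media.sensitive is assumed to be the #nsfw flag set by the origin."""
    if remote_media.sensitive:
        return SENSITIVE_TTL
    return NORMAL_TTL
```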
More complicated is to feed blocking/reporting back into the cache layer, to quickly purge data users have flagged as objectionable. This is a rabbit hole of complexity, but probably worth doing if you intend to keep the cache.
...
Another benefit of an instance loading media on behalf of its users is that it slightly improves the privacy of the instance's users. Browsing the federated timeline or boosted toots won't automatically leak your IP to an instance you have no pre-existing relationship with.
Yet another benefit: resizing images may disable/thwart exploits based on corrupt data (the instance itself is at higher risk of this though, and browsers are arguably better hardened/tested than the image conversion libraries used server-side).
I see a lot of benefits to what you are currently doing and I think it is the right thing for both users and the health of the network. However, the risk to admins is real and serious. Just my 2c, hope this is helpful. :-)
I'm seeing people hotlink to images from Twitter, so that's also a consideration: if you freely allow hotlinking you run into the potential for people to use your instance as free image caching. Kind of a separate-but-related issue.
I like the principle that an instance should be responsible for serving content to its own users, but I wonder if there should be a distinction between short-term and long-term storage.
Most of the traffic for any given piece of media will occur within 24 hours. After a certain period - 7 days? 30 days? - it's mostly just being kept around for archival purposes. At that point, it may make sense to revert to hosting by the original instance (problematic if that instance goes offline) or by some third-party host or federation of hosts - and the instance can negotiate retrieval if the image gets requested again.
I am intrigued by Swarm as a peer-to-peer means of long-term data caching.
Maybe you can add another class of trusted instances. Right now you can silence or suspend instances, but if you add the option to trust certain instances, you can take different actions depending on that level. Like this (a rough sketch of such a policy table follows the list):
Trusted instances: Cache for a longer time and locally keep all media.
Normal instances: Cache for a shorter time and locally keep all media for a limited time (e.g. 1 month, and hotlink after that).
Silenced instances: Cache for a shorter time and hotlink all media. Silence on federated timeline.
Suspended instances: No caching, no hotlinking. Block all communication.
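As a minimal sketch of what such a trust-level-to-policy mapping could look like: the `Trust` levels and `MediaPolicy` fields below are hypothetical (only silence/suspend exist in Mastodon today), so treat this purely as an illustration of the proposal above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Trust(Enum):
    TRUSTED = auto()
    NORMAL = auto()
    SILENCED = auto()
    SUSPENDED = auto()

@dataclass
class MediaPolicy:
    cache: bool       # keep a local copy of remote media at all
    cache_days: int   # how long to keep it (ignored if cache is False)
    hotlink: bool     # fall back to the origin URL once the local copy expires
    federate: bool    # exchange any content with this instance

# Rough translation of the four classes described above.
POLICIES = {
    Trust.TRUSTED:   MediaPolicy(cache=True,  cache_days=365, hotlink=False, federate=True),
    Trust.NORMAL:    MediaPolicy(cache=True,  cache_days=30,  hotlink=True,  federate=True),
    Trust.SILENCED:  MediaPolicy(cache=False, cache_days=0,   hotlink=True,  federate=True),
    Trust.SUSPENDED: MediaPolicy(cache=False, cache_days=0,   hotlink=False, federate=False),
}

def policy_for(domain: str, trust_levels: dict) -> MediaPolicy:
    # Unknown domains default to the "normal" policy.
    return POLICIES[trust_levels.get(domain, Trust.NORMAL)]
```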
If people are storing on an expensive network they might look at self-hosting with Minio as their server back end, and then they can manage that storage queue themselves.
Minio will also federate across servers, so conceptually n servers could set up 2n+1 spindles, connect them all together, and have a shared cached file system.
I would prefer "no cache at all" for "Normal" ones, since the law doesn't take into account how long we host it...
@Gargron Today, what is the TTL of the cache? It seems to be 14 days, but we are not sure about it.
I believe the current system is good because it protects users, and this should be the absolute priority (technically, but also privacy-wise). I really think hotlinking media would be a bad idea, because of privacy. It would be even worse if you hotlinked only "bad instances".
Maybe you could allow the admins to configure the TTL? If I don't want any risk, I put a TTL of 1 minute, but I'm advised that it will be a little CPU-intensive. Maybe I'll do 1 day, or 1 week if I am careful. Maybe I'll do 1 month if I don't care (personal instance, for example).
Another idea could be to separate the instance's own media from the local copies. The admins could then have a sense of what is consuming storage.
Extending the existing domain blocking feature to allow admins to choose not to cache media content from certain instances (without having to suspend them) could be a viable (and relatively easy-to-implement) approach.
// Edit: Turns out there already is a hidden reject_media domain block type, so that's great news.
We could use a Camo instance: the URL of the image is still yourinstance.tld… but Camo proxies the request to the actual server that has the image. And for caching, we could put a Varnish between Nginx and Camo. It works for images, but I don't know if it will work for mp4.
This way, your instance would never download content from other instances.
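For reference, a Camo-style proxy works by rewriting each remote media URL into a signed URL on your own domain, so the proxy can't be abused as an open relay. A rough sketch of the URL rewriting; the key, host, and exact path layout are placeholders, so check the docs of whichever Camo fork you deploy.

```python
import hashlib
import hmac

CAMO_KEY = b"shared-secret-also-configured-in-camo"   # placeholder
CAMO_HOST = "https://media.yourinstance.tld"          # placeholder

def camo_url(remote_url: str) -> str:
    """Rewrite a remote media URL into a Camo-style proxied URL.

    Camo verifies an HMAC over the original URL before fetching it."""
    digest = hmac.new(CAMO_KEY, remote_url.encode(), hashlib.sha1).hexdigest()
    return f"{CAMO_HOST}/{digest}/{remote_url.encode().hex()}"
```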
But it's still distributing it...
Which is what we want, I think.
@maethor I'm worried about "illegal" content coming "from" my instance tbh; proxying stuff makes it "on" my instance in a legal sense :/
@maethor Current TTL is "forever". I think this is part of the things that need to be adjusted
@ldidry With Camo it seems like you would be doing the same as we're doing now, but with more effort, since you'd still need to implement the actual caching on top of it. Perhaps just adding TTLs to the current system would be better?
I'm [email protected]. Moving my arguments over here because I feel they're worth considering.
The solution to address @technowix's concern is to not cache remote images. That's the only way you're going to address that concern, and the next time an instance admin posts loli or shota or outright CP.
The solution to address @BjarniRunar's concern about user privacy is to place the onus for user privacy on the user.
Hear me out.
It is my opinion that instance admins don't have a responsibility to protect users' privacy - that should fall squarely on the user - and here's why.
Each user has a different threat model that they're concerned about. Some might be located in repressive regimes, others might be sharing their family's computer in the living room. Is it appropriate for an instance admin to try to provide protection for users posting from China or that gay teen posting from Iran or Saudi Arabia? What about that home-school kid with strict parents who only want their kid to be exposed to ideas that they approve of? What about that kid who's using a school laptop with spyware installed that they can't shut off? What about the political dissident under surveillance by their Government for views that are contrary to accepted norms?
Each of those threat vectors can be addressed through separate means.
As I replied to Bjarni on .social, I feel that it is up to me and every other user to take privacy into our own hands. Tor is free, and if you can afford it VPNs aren't that expensive.
I mentioned CloudFlare as a viable option for instance admins as it's the most well-known and accessible CDN. It also has the benefit of protecting instances from the "slashdot effect" or if something should go viral, or from DDoS attacks if somebody posts a toot that pisses some group or person off.
In my personal opinion those should definitely be considered as viable solutions and alternatives.
(Edit: a word)
@Gargron Nope, it's not the same. You said:
Right now, Mastodon downloads local copies of:
Camo downloads the images, but doesn't store them anywhere. It's just an image proxy. The Varnish I suggested is here to cache the images, but only in memory (well, you can make it cache them on disk, but it's not the default behavior (at least on Debian)). If you restart Varnish, you wipe all the cache. And the cache has a limited size, so new images replace old ones in the cache system.
I should point out that by instance admins not having an onus to protect user privacy, I mean it is the users' responsibility to protect the privacy of their web-surfing habits.
Instance admins definitely have a responsibility for ensuring that SSL is enabled, is properly configured, that their servers are regularly patched and updated, and that any security vulnerabilities they discover or that are brought to their attention are promptly taken care of.
Instance admins also have a responsibility for ensuring that the software they're running is properly configured and that they take steps to prevent any data leakage incidents (lock down their servers, don't expose configuration files and passwords to the internet, etc.)
Above that, I do not feel it is an instance admin's (or Eugen's) responsibility to try to protect every user from every real or perceived threat that may be out there. If you attempt to apply protection to the lowest common denominator of user on your system, you are not going to be able to provide effective protection for most of your users.
@ldidry But ideally you'd still crop/downsize images for the end-user. I just meant that the cache wiping could be part of the current system, rather than replacing the current system with Camo
Hi, this is tryum on the apoil.org instance!
This morning I threw out some ideas; I don't know if they're viable or feasible:
- If content is encrypted in storage and the keys are distributed via another channel (to decrypt the content client-side), does it still expose the admin to legal threats? (Must be state-dependent.)
- Hotlinking the media to the source instance, but also distributing it via P2P (i.e. WebTorrent or any WebRTC data channel tech...): the more viral the toot goes, the more distributed the media is, thus protecting small instances from the slashdot effect.
If those ideas are silly, please be gentle, I'm not a web techy ;)
@Gargron Hi, I'm pawoo.net founder (pixiv inc.).
We understand that mature images uploaded by our users are potentially problematic from a legal point of view, since they could be hosted on servers in other countries.
However, we would like to protect our users' works as much as possible, unless they are illegal in Japan. We are caught in a dilemma.
We, as the pawoo.net admins, would like to oblige users to flag mature content as NSFW. And as for images on the server, we propose Mastodon...
We are in compliance with the law of our country, and we deal with our content accordingly.
We will spare no technical effort to resolve this problem.
The Varnish I suggested is here to cache the images, but only in memory (well, you can make it cache them on disk, but it's not the default behavior (at least on Debian)).
@[email protected] here.
It would be illegal to have CP or CP-like images even in a volatile cache (i.e. RAM) here in my jurisdiction. Yes, it's hard for law enforcement to prove, but the laws are there anyway. Although there are apparently some EU laws that address this, they haven't been adopted into local law.
Furthermore, I don't know if Cloudflare or any other big CDN does some kind of "matching" or "scanning" on the media they deliver, so that might not be an option either, as the "mastodon train" is picking up speed (sorry for that - imagine a little mastodon sitting in a train. It's super-cute).
@Tryum Encryption might be an option. But distributed encryption is somewhat complex, I guess.
@norio Cool that you are here! I guess it's not the NSFW content per se; it's more the lolicon content, which is somewhat problematic in Western countries.
Just my 2 cents:
I hope this may help.
This is a fundamental disconnect between the law and technology, and I doubt there is a technical solution. CP laws are so broken around the world that even trying to police CP content can actually cause you legal grief (a team I'm part of was in the past told by a lawyer to stop filtering out known CP content hashes from a system, because that was a legal liability). Mind you, that's for real CP (real children), not loli (drawn content), but the latter is considered equivalent in some jurisdictions...
Good luck with the attempt at a technical fix, but I will be very impressed if you manage to find one. I would suggest getting a lawyer if you want to accurately evaluate the legal implications.
@gellenberg
It would be great if we could rely on users to manage their own privacy, but many - probably most - users simply don't have the depth of technical knowledge that would equip them to make good decisions in that regard, much less the skills and time to effectively implement those decisions. They will tend to default to what the platform provides. If the platform wants to protect its users, it should be proactive in providing sensible privacy features.
Maybe, in the short term, as @norio mentioned, it might be an option to not cache NSFW flagged media. Although this would mean that we need to deal with higher traffic as an instance admin - so, at some point, people would start requesting a "block NSFW from my instance" option, to save bandwidth. Which would, in turn, mean some kind of censorship, which we don't want at all.
Gosh, maybe the technical problem is even our smallest one...
There are images that are not NSFW and will still cause admins in certain jurisdictions legal trouble. Looking at it only from the loli angle is very American-centric.
@Tryum wrote:
- Hotlinking the media to the source instance, but also distributing it via P2P (i.e. WebTorrent or any WebRTC data channel tech...): the more viral the toot goes, the more distributed the media is, thus protecting small instances from the slashdot effect.
I fully support this, and would LOVE to see Mastodon implement something like WebTorrent for media delivery. That is the holy grail to me.
@Norio wrote:
to block NSFW images of other instances, just showing NSFW label with a link of the original content.
This to me seems like a fair and just compromise.
@eeeple wrote:
caching as it is implemented right now is in its infancy, and could really use more customization. It should be up to the instance's admin to choose the content caching policy.
I totally agree with this. An instance admin should be in total control over what content is stored on, or passes through, their server, since they're paying for both. :-)
@spikewilliams wrote:
many - probably most - users simply don't have the depth of technical knowledge that would equip them to make good decisions in that regard
What a golden opportunity then we have (all of us, users and admins) to educate users about important skills that users can use both on Mastodon and throughout the rest of their internet "lives"!
If the platform wants to protect its users, it should be proactive in providing sensible privacy features.
Sure perhaps. Just like there's ProtonMail.com and Gmail.com. One does a better job at protecting their users privacy than the other.
(Edit: speling mistake or two.)
@delroth Absolutely. Think of e.g. the DMCA implications of tooting cryptographic material. That doesn't even have to be media, you can fit a lot of things breaking a lot of laws around the world in 500 characters.
@delroth It all started with the "lolicon-controversy". I agree that there are other, non-NSFW images that are problematic, e.g. nazi symbolism here in Germany. But the laws are not so strong on this type of images as on CP. :)
@macan I guess that's another issue, isn't it? I can ban a user who toots illegal stuff on my instance. But toots from other instances won't get stored on my server. I guess that's the point here. Correct me if I'm wrong, though.
I'm wondering if caching of incomplete content could solve the problem here. If I store half the data of an image on my machine, I might not face legal issues. But that's only "amateur lawyer" me.
Also, looking from the other side, caching/hosting perfectly "US legally safe" erotic images in Japan may be troublesome if they're not censored.
I would think there should also just be an option to globally not download or cache anything server-side, as there is with GNU Social, if you just don't want to deal with this mess. Your users would have to choose to click-to-load things, and it wouldn't be your liability as an admin (though IANAL, YMMV, etc etc).
Maybe this already got brought up and I missed it?
Maybe this was mentioned here already and I just didn't realize.
How about blocking the cache based on content warning tags? Obviously this would require a proper list of which warnings to use globally, but it could somewhat fix the problem if you crack down on people who don't properly tag their toots.
@BrainShit this does not scale. 1. Defining this list will be extremely tricky. 2. It requires cooperation from everyone to tag their content properly, but the tagging does not matter to the author, only to their subscribers (if we assume authors share content that is legal in their servers' legislation). But even more importantly, 3. Laws change, and people won't retroactively go and re-tag older content.
Hell no, please do not use P2P to share content between users. Downloading of some content may be legal when uploading is not (for example, in France, about copyrighted content like movies). P2P works in both ways.
For a platform, you have to moderate uploaded content to have a chance to stay legal. For a user, you are fucked.
It seems too difficult to have users know whether or not the content they upload is okay globally. Not caching media flagged NSFW (or more like, not legal for other instances) does seem fine to me, but that is relying on the users to be able to correctly judge what is okay and what is not.
KitRedgrave's suggestion of just having an option to globally not download/cache stuff seemed nice to me.
@delroth 1. I do agree with that one
But I guess you are right that this doesn't really scale overall.
There needs to be some kind of solution for this. I'm an instance owner, and I was following accounts on pawoo.net for more benign content than what is being discussed here, and was cut off from it, because illegal content came along for the ride, and I can't store or display that on my instance, since it's illegal, and also really disgusts my users.
I'm not sure how to solve this. Some kind of specific field for "Disturbing Japanese stuff"? Even some people who like normal pornography would want to avoid this type of content.
In Japan, they care more for the age of the viewer than the age of the character being depicted, which is a huge problem for Westerners, who care about both.
@stefafafan Huge swathes of content are "not OK globally". Just think of what North Korea thinks is OK.
@thor-the-norseman Topic-specific tagging does not scale. See @delroth's reply. Just because you feel strongly about this specific type of content doesn't mean it's special in the grand scheme of things. Lots of people feel strongly about lots of things, and lots of jurisdictions make specific things illegal, or not.
If you are thinking of singling out loli/"Disturbing Japanese stuff", think about what you'll be doing when a Russian admin comes and asks you to single out "gay propaganda". If you think one is OK and the other is not, you are basically doing imperialism.
You know, personally, I'm quite liberal about this stuff. It's just that the law is forcing my hand here, together with cultural norms.
This reminds me a bit of issues Norway has had with Facebook. Issues similar to what Japan is having now. We are more lenient about full frontal nudity if it's artistic or educational here in Norway, but much less lenient about graphic violence than Americans, yet Facebook has one set of rules for the whole world, which leads to an absurd situation where Americans are telling us what we can say in Norway.
The onus is clearly on us when it comes to blocking and labelling things we happen to find objectionable. The problem is that manual moderation on the receiving end really isn't practical. The only practical place for it is with the sender. Unfortunately, that's not really their problem. Blocking entire instances is a terrible solution, but I struggle to think of a better one right now...
The issue with hotlinking isn't strictly about privacy (though it has privacy implications) but about user consent. If the user's client (be it web, mobile, or desktop) just loads an image from a federated instance, then the user's device connects to a server they don't necessarily trust or want to connect to.
Click-to-load, like @KitRedgrave brought up, seems like the best solution. It also lessens the load on the federated instance, since media will only be loaded when a user requests it. Maybe don't cache media from remote instances by default, but cache it for instances that are marked as trusted?
So users would see a box similar to the NSFW box, but saying something to the effect of "This media was posted on a different instance that might have different rules or code of conduct. Click to load."
That protects server admins. It doesn't perfectly protect users, but at least the user has the opportunity to consent/not consent (and is much less likely to face legal repercussions). It also sidesteps the issue of needing to have an algorithm figure out whether something is illegal in a given jurisdiction; users can judge for themselves before clicking (of course, it doesn't solve the problem of Japanese users writing "warning loli" in kanji that Westerners can't read, but baby steps).
And, as a future wishlist feature, you could let users personally whitelist instances/users they want to automatically get media from. The media would still be hotlinked but they'd be spared the click.
As another desirable feature, you could let instances define pre-set content categories that other instances could use to filter caching/silencing. So pawoo.net could make a "lolicon" category and expect its users to abide by it, and other instances could rely on that filter, but it wouldn't be hard-coded or built-in, and it wouldn't be the sole line of defense against this stuff.
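A tiny sketch of how that per-attachment decision could be made; the fields on the media, the viewer, and the instance settings are all made up for illustration.

```python
def should_autoload(media, viewer, instance_settings) -> bool:
    """Show remote media immediately, or keep it behind a click-to-load box."""
    if media.is_local:
        return True                                   # our own uploads are always shown
    origin = media.origin_domain
    if origin in viewer.whitelisted_domains:
        return True                                   # per-user whitelist, spares the click
    if origin in instance_settings.trusted_domains:
        return True                                   # instance-wide default set by the admin
    return False                                      # fall back to click-to-load
```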
Also, it's revolting to compare Russia's grotesque and discriminatory anti-LGBT laws to laws against loli content (whether you agree with the latter or not), and I don't believe for a second both of those have to be treated equivalently.
@thor-the-norseman This goes back to the second part of my post. Jurisdiction-specific content tagging does not scale. There is no way for users to know in which subset of countries their content is legal or not. We pay lawyers obscene amounts of money to figure stuff like that out. Edit addendum: there will always be things that are legal in some given jurisdiction and not in yours; by that token, you'd have to potentially block all instances not hosted in your country.
Large, American-based multinational services do indeed lead to absurd situations like that, where American values are imposed on the rest of the world. And even then those companies spend lots of money to fight or work around or compromise on local laws.
Also, click-to-load as a default helps deal with one user on instance A posting disturbing/hateful/illegal content while the admins on their instance are asleep; people on federated instances wouldn't load or see it by default.
My 10p: since instance admins are going to fall over this issue on a country by country basis, you need to give instance admins their own options to deal with it.
So, an instance should be able to define which instances it will and will not cache.
Ideally:
I'm thinking about something different, which is even scarier: even if the image is hosted on another server, it is displayed on my site, under my URL. From a legal point of view, I'm then responsible for the content. However, I don't know if I need to proactively filter that.
_So, as always, the internet is far more advanced than RL. Crap._
Caching doesn't help. Many jurisdictions make it illegal to _link_ to illegal content.
It seems to me then the only real solution for admins is to host their instances in a jurisdiction without laws against linking, or silence instances that might link to illegal content entirely. The option to only silence media posts seems reasonable as an alternative there, but under a sufficiently broad interpretation of those laws you could maybe get in trouble for merely federating any post from an instance that allows something illegal?
Hey folks, my day job is as the International Director at the Electronic Frontier Foundation - I'm not a lawyer, and I can't give anyone direct legal advice, but I can give you a rough opinion based on my experience in the US and elsewhere. Also happy to refer people to our intake process, and to consult on a legal FAQ if you want to go that way.
My two cents:
a) In terms of Mastodon code, I'd concentrate on the bandwidth issues. It's hard to completely solve a legal problem - let alone a legal problem across multiple jurisdictions - with a technical solution. If you find a way, either by lowering TTLs or offloading the local caching elsewhere, to minimize your caching, that will also minimize the legal issues. Giving people the option to turn off caching and therefore avoid a huge chunk of the potential liability is a great idea.
b) In my experience, CP is a peculiar special case in most jurisdictions, but also very very rarely prosecuted in terms of simple caching. Law enforcement is aiming for a very particular set of cases in this area, and when they pursue someone who isn't a deliberate producer or disseminator, say a Tor provider, they do it by mistake -- and they're generally eager to correct the mistake.
Large corporate hosting services from Google down have also dealt with this issue and, even when there is no clear law or precedent, prosecutors and courts do understand explanations of caching, and recognize that, as long as there was no intention and content is deleted, you're not the real target here. (It's different when courts or law enforcement are aiming to arrest a person for other reasons, particularly in repressive jurisdictions.) The more time goes on and the smarter LE gets about the Net, the better this works out.
c) I'd flag here that IPFS (and other P2P caching/distribution systems) is working toward creating a distributed CDN-like network that would really help with sharing bandwidth and responsibility across GNU Social. Looking in from the outside, it doesn't seem quite ready for Mastodon-level use at this point, but it's definitely getting closer - perhaps when they get to the point of talking with the FLOSS browser makers about inclusion within the browser might be the moment to consider it as a solution.
If it helps, I can see if I can get some folks together to give an informal Q&A or somesuch to @Gargron or the developers on Slack^W Discord. It's hard to give specific advice, but we can talk about the general case.
@Exagone313 I don't think downloading/owning copyrighted content you didn't pay for is legal in France; it's just "less" illegal than sharing/uploading (i.e. duplicating bits), and more difficult to prove.
If what you share through p2p is what you have in browser cache, it may be the same as just storing the cache most of the time.
There won't be a solution that fits all cases: I think the goal here is to find a way that fits most cases without restricting liberties for all of them.
Silly solution #3: host all instances in international waters so every admin will be safe!
Hint: I live in France too ;)
@dannyob We don't use Slack, but you are welcome on my patron discord, where we discuss Mastodon development and related issues. Happy to give you an invite link over e-mail ([email protected])
That's good information, @dannyob, although it doesn't quite solve the fact that there's a huge amount of moderation work to be done here if we are to allow Japanese instances, and it'll be repeated thousands of times by almost every country in Europe, North America and elsewhere. Probably millions of hours spent flagging content that Western culture collectively frowns upon and is often outright disturbed by.
If one applies utilitarianism to this issue, and say that the needs of the many must outweigh the needs of the few, the moderation job shifts to Japan. I guess we are already putting that pressure on them by shutting them out, for lack of better options. To federate with us, they'll be forced to solve the issue locally, really...
Hi, I'm a developer at pawoo.net.
Sorry for the late reply. Most Japanese engineers are sleeping now, because it's midnight ;(
there is with GNU Social
It's really important to think about our content publishing policy. We have so much art and so many Japanese-style drawings which need to be published under freedom of expression. What can be shown differs between countries, and it's necessary to comply with the law in our respective countries.
We have an idea: add an admin feature so we can switch the publishing status - "don't send any images", "restrict sending images (NSFW)" or "send all images". The receiving instance's server and browsers would then never get any illegal images.
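A tiny sketch of that admin switch on the publishing side; the setting names and status fields are made up, this is not an existing Mastodon option.

```python
from enum import Enum

class ImagePublishing(Enum):
    NONE = "dont_send_any_images"         # never federate image attachments
    SFW_ONLY = "restrict_sending_nsfw"    # strip attachments the author flagged NSFW
    ALL = "send_all_images"               # current behaviour

def attachments_to_federate(status, setting: ImagePublishing) -> list:
    """Filter which attachments are included when a status is pushed out."""
    if setting is ImagePublishing.NONE:
        return []
    if setting is ImagePublishing.SFW_ONLY and status.sensitive:
        return []
    return status.attachments
```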
@thor-the-norseman I still find it strange that you're so intent on singling out Japanese instances here, as if they were some kind of special "problem child" (no pun intended). Again, just because you (and a large, but by no means universal collective) have a problem with this content doesn't mean it's a) pervasive in Japan as a country, nor b) fundamentally against any kind of global median standard of values, nor c) objectively wrong or damaging.
This is just one example of this kind of problem. Solving the "lolicon content from Japan" problem isn't going to solve anything in any kind of greater scale. This is just one of many permutations of the problem that collective of people A do not like content that collective of people B consider perfectly fine.
(I could give you some objective arguments as to why this content is objectively no different from other content which is widely considered appropriate in Western society, but that would be straying off topic; just consider viewing this issue through a more general lens).
@sequitur:
So users would see a box similar to the NSFW box, but saying something to the effect of "This media was posted on a different instance that might have different rules or code of conduct. Click to load."
This, yes!
And, as a future wishlist feature, you could let users personally whitelist instances/users they want to automatically get media from. The media would still be hotlinked but they'd be spared the click.
Yes! Give the user the control, absolutely.
@dannyob:
Giving people the option to turn off caching and therefore avoid a huge chunk of the potential liability is a great idea.
Agreed.
I'd flag here that IPFS (and other P2P caching/distribution systems) is working toward creating a distributed CDN-like network that would really help with sharing bandwidth and responsibility
Totally forgot about IPFS. IPFS is an AWESOME project. A way to integrate IPFS with Mastodon (and GnuSocial) would be fantastic and would provide a distributed media filesystem for the entire Federation.
@marcan:
Again, just because you (and a large, but by no means universal collective) have a problem with this content doesn't mean it's a) pervasive in Japan as a country, nor b) fundamentally against any kind of global median standard of values, nor c) objectively wrong or damaging.
This is just one example of this kind of problem.
Totally agree with @marcan on this point, and I hate to admit it but the Russian problem with LGBT content, as alluded to earlier, could be even more pervasive than Japanese hentai.
For example, I remember reading that LiveJournal is now owned by a Russian company that is imposing Russian law on its members which has a lot of users and former users there up in arms.
@marcan I take an interest to Japan, and know quite a bit about the country and its culture. Again, please understand that I personally do not object to this type of content. This is _not_ about my personal opinion, but about actions I have been forced to take by law, and reactions I'm getting from the users on my instance.
I agree that the problem isn't specific to Japan, but I have had no issues with content from other countries as of yet, so I don't feel I have much to say about it.
I am actually here because I'm _upset_ that I have been forced to block an instance.
This is just one example of this kind of problem. Solving the "lolicon content from Japan" problem isn't going to solve anything in any kind of greater scale.
I think so.
@furoshiki I would like to federate with your instance, but I am currently forced to block you. This makes me very sad. I hope we can find a solution.
I would just like to speak up here and say that there are a lot of privacy and security implications of not caching images that haven't been brought up yet (at least, in my skimming of the thread)
It would be very easy to deanonymize users if all content was hotlinked—connecting loading content to boosting, liking or replying to it would be very trivial, and easily done using data analysis.
Currently, this isn't a problem, because all requests go through the local instance, so it's impossible to tell one user apart from another. Changing this so that instances can - even if it's a choice - expose their users' IPs to the world as a built-in part of the software would be a HUGE step back, in my opinion.
User privacy cannot be foisted onto the user. It has to be a deliberate decision we make as software professionals.
I didn't mean to single out Japan and I hope I am not being disrespectful to Pixiv - lots of respect to the company and huge props for jumping in on the project.
However this situation has simply revealed a deficiency in the current system that must be amended.
@nightpool It wouldn't expose the IP any more at all, since it's the instance that shares the content...
The problem here is that the content of "other" instances is stored on our server, which causes legal issues in some jurisdictions...
@Gargron, it's okay, I got one with my Patreon donation and I'm there already :)
@thor-the-norseman Fair enough. Do consider feedback bias too (you're only going to get reports from people who object to the content, not those who don't). We should separate the personal objections from the legal issues. The latter are an entirely different side of the story, and one where @dannyob and the EFF can probably provide a lot of guidance. It's worth keeping a strong mental firewall between "what I or people around me consider objectionable" and "what the legal requirements are".
@thor-the-norseman you raise an excellent point, but this issue probably isn't the place to discuss it at length, and it's not really amenable to an obvious technical solution. I'm talking a bit about that over here: https://mastodon.social/@mala/2679038 (sorry I haven't threaded the subsequent statuses)
@marcan This whole issue is so new to my instance that I'm literally piecing together the logical conclusions of my thoughts about this as we speak, so I'm not actually done forming my opinions on this. When I started my instance, I took a liberal stance and said "Norwegian law will apply here". I think I will basically have to apply that very, very consistently, without mixing my empathy for upset users into it.
@marcan I was prepared to allow very controversial opinions, but had not mentally prepared for controversial _imagery_. I guess I naively expected users to basically tag "typical NSFW content", and that turns out to not be the case. And I guess I also didn't expect instances to host content _illegal_ in my country.
@Technowix But if you don't store other instances' content on the server, you will expose the IPs of your users - unless some sort of proxying solution is invented, which hasn't been brought up at all as far as I can see. And those IPs will be able to be associated with individual Mastodon users through timing analysis.
Also re the legal liabilities of instances in the EU: the Electronic Commerce Directive apparently limits the liability of people who are merely caching content. I'm trying to find a good FAQ about it, but until then Wikipedia will have to suffice: https://en.wikipedia.org/wiki/Electronic_Commerce_Directive#Liability_of_intermediaries
(There are much stronger provisions in the US, in the form of Section 230 of the Communications Decency Act. I think many people have already mentioned this point.)
again, IANAL. if you think you may run into legal issues, hire a lawyer, ask for a free consultation, look into the EFF intake process.
@nightpool That information is a relief to me - that inadvertent caching is probably not going to get me into trouble. Hopefully, for the EU/EEA countries, we can then focus more on the _delivery_ of content.
@thor-the-norseman As far as display goes, we can start thinking of pragmatic solutions. Ideally those that do not censor, but rather protect. Ideally those that allow individual choice and do not rely on server administrators.
I'd suggest perhaps a per-user preference to greylist specific instances, perhaps with a default chosen by the user's instance's administrator, such that viewing media from those instances requires a click-through. Something along those lines sounds like it would help solve the "I don't want naked cartoon children in my face" problem without actively censoring content, and giving users choice.
@nightpool Oh okay, I see what you mean, I was thinking of the "publisher" of the content. Thanks for clarification.
So, if I understand this right, even if this kind of "CP lolicon" content is illegal (in France, here), as long as I protect users from being "forced" to see this kind of content, I can just "hide" it and let people follow them? :/
@marcan That's much better than other proposals I've seen. It does walk on the edge, though, since it doesn't actively try to enforce the law, and permits users to break the law on an individual basis. It's a bit as if The Pirate Bay basically flagged its illegal content (i.e. almost everything there) as "illegal" without deleting it, yet allowed users to download the content anyway. It doesn't sound like something the courts would be satisfied with, but I could of course be wrong...
@thor-the-norseman Again, two different sides here. That proposal is not in any way, shape, or form intended as a legal solution. It is intended to solve the personal issue of certain users not liking certain content that may be particularly prevalent on certain instances. It is intended to give users a choice of what to see. Nothing more.
The legal side of the coin is a completely different problem. Ideally we'd be operating in a manner, such as according to the legislation that @nightpool linked, where the service provider (i.e. instance owner) is insulated from incidental illegal material in their jurisdiction. We need lawyers to work on that, but it's worth noting that it seems that, broadly speaking, trying to police content yourself might actually expose you further here (see "(a) the provider does not have actual knowledge of illegal activity" and my story of a run-in with a lawyer in one of my early comments).
With regards to IPFS, I would be very excited to see it being applied to Mastodon! It would certainly result in decreased bandwidth usage, and the IPFS project would receive a much needed popularity boost.
If needed, I can deal with the technical aspect of transitioning to IPFS - just tag me if it ends up being looked into.
@Technowix I don't know enough about the EU caselaw in this area to be comfortable saying what is "reasonable conditions on access to the information". I would recommend trying to find an FAQ from an organization in your jurisdiction on this issue (the liability of internet service providers), if one exists.
@marcan Ah, so you are discussing the personal preferences part of the issue now.
When it comes to law, though, here is an example of what the law of Norway has to say on the issue:
With a fine, or prison for up to 3 years, is punished one who:
a) Produces a portrayal of sexual assault on children or who portrays sexualization of children,
b) Releases, offers, sells, transfers to another, makes available or otherwise seeks to spread portrayals as outlined in letter a,
c) Acquires, introduces or possesses portrayals as outlined in letter a, or deliberately acquires access to such material,
d) holds a public speech or facilitates a public show or exhibition of portrayals as outlined in letter a, or
e) deceives someone under the age of 18 to allow themselves to be portrayed as a part of commercial portrayal of moving or unmoving pictures of sexual content.
With "children", this clause means persons who are or appear to be under 18 years of age.
One who inadvertently carries out an act as outlined in the first clause is punished with fines or prison for up to 6 months. In the same way is punished one who possesses or superior who deliberately or inadvertently neglects to prevent that there in a business is conducted an action as outlined in the first clause.
Punishment can be avoided for one who takes and possesses a picture of a person between 16 and 18 years, if this person has given their consent, and the two are of similar age and development.
The decision does not affect portrayals that must be considered defensible from an artistic, scientific, informative or similar purpose. The decision does not apply for film or video that the Media Council by pre-control has approved for commercial display or sale.
It's the "inadvertently" part that worries me.
@thor-the-norseman This is the point where you should start to talk to a lawyer. There may be conflict between Norwegian law and European regulations here. There may also be practical details about enforceability and active enforcement to consider.
Boy, if lawyer expenses are part of running an instance, I guess I'll need to set up a Patreon sooner than later...
@nightpool
It would be very easy to deanonymize users if all content was hotlinked—connecting loading content to boosting, liking or replying to it would be very trivial, and easily done using data analysis.
In my opinion, this is precisely why Tor should be used for any user who is concerned about their privacy, and a golden opportunity to educate users about the privacy implications about browsing the Internet in general.
User privacy cannot be foisted onto the user. It has to be a deliberate decision we make as software professionals.
I still think it would be better to educate users about the dangers and provide them with links to tools they can obtain to better protect themselves.
(I haven't read the whole thread yet, my apologies.)
I still think it would be better to educate users about the dangers and provide them with links to tools they can obtain to better protect themselves.
I think it's best to build the software to not enable dangers in the first place and remove insecurities which might lead to false assumptions of security in users. Empowering people to learn on their own about ways to protect themselves is important too, but to construct a system which is relatively secure (federation has its problems, at least in regards to the chain of trust) has the highest priority.
Okay. Personally, I'm going to take my chances and change my block to "Silence". But I can't find a button for that. Time to create an issue?
When I try to separate different threads of thought on this topic, the following comes to my mind:
For the Japanese case specifically, having the technical means for a Japanese instance admin to select "don't share images / NSFW-tagged posts except with a whitelist of servers" would probably solve this issue.
Just my two cents.
Maybe federation should be a two-way opt-in, not unlike how cjdns exchanges keys between nodes and neighbors. Each instance must negotiate with another instance to federate. Then they could decide on a mutually acceptable use policy / content policy (either administratively or technically), and then of course users have to choose to follow one another.
The downside is users would only be able to follow users on an instance that the admin has agreed to federate with.
Not ideal in my opinion, but might help with the censorship.
We should also resolve another problem, which depends on this issue.
Right now the Mastodon servers always load images and cache them. Our social network is getting bigger, so we will not be able to build new instances easily, because an instance needs huge storage/network resources. The local-cache strategy is a bad design for the growth of this system.
I suggest the cache strategy be more selective for publishers. Subscribers can choose whether to load and cache images or not, but the publisher can tell subscribers "you must not cache this". Subscribers can reduce their hardware resources and the publisher can control how their content is provided.
The problem I have with the current cache model is this.
If a user uploads something shitty on an instance... it gets copied to all other instances. Then if that user or an admin deletes the post, there's no way to delete it on remote instances.
I run an instance and I do not want to be left without the ability to delete content I don't want. I want to know that each instance admin keeps the rest of us safe. This whole conversation about tagging offensive content so that it's federated out as flagged - that's a long-term solution.
The short term is to stop caching. The long term is to engineer a system to tag content.
But I don't feel safe running an instance anymore. This whole problem today showed me that, essentially, while I can control my instance and its users, I cannot control others.
Mastodon is a network built on trust and honestly I cannot trust everyone. Nobody can.
So for the love of everything. Stop the caching at least till there is a solution for the long term.
I think @danmademe hit the nail on the head. I used to run Friendica, RedMatrix, and GnuSocial instances in the past. They all had some level of media caching and hosting locally. One of the reasons I quit was because the storage and bandwidth costs got to be extreme (and nobody wanted to donate any money to help offset the costs).
I had over a TB of media content alone being stored on one of my RedMatrix instances, which caused me to go from a VPS to a dedicated server at considerably more expense.
@danmademe Good idea.
@gellenburg
costs got to be extreme
Other instance teams also worry about it.
Remember....
Cops do not care that you didn't post the content, they don't care that you were unaware of it, they don't care that it was posted somewhere else, they don't care about caches, they don't care about any technology thing you have to say...
Because in their minds... they have their guy, and maybe a network of people sharing illegal stuff.
Today it was Loli content... imagine if it was child porn... seriously.
I think the new content-policy strategy should focus on the publisher.
The publisher may declare the content policy. For example:
<body>
<div class='h-entry'>
<div class='status__content p-name emojify'>
<div class='e-content' style='display: block; direction: ltr'>
<data class="e-mastodon-content-no-cache"></data>
<p>
<a href="https://pawoo.net/media/231Kh9YxVJaHJrb6xug" rel="nofollow noopener" target="_blank">
<span class="invisible">https://</span>
<span class="ellipsis">pawoo.net/media/231Kh9YxVJaHJr</span>
<span class="invisible">b6xug</span>
</a>
</p>
</div>
</div>
</div>
</body>
e-mastodon-content-no-cache is a new parameter. Subscribers check it and decide not to cache images. Subscribers could cache images, but when the publisher says "DON'T DO THAT!", subscribers never do it.
We can consider more detailed rules.
How do you feel about it?
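On the subscriber side, checking for that marker before downloading anything could be as simple as scanning the status HTML for the class. A sketch using only the Python standard library; the class name comes from the example above and is not part of any existing spec.

```python
from html.parser import HTMLParser

class NoCacheFlagParser(HTMLParser):
    """Looks for the proposed e-mastodon-content-no-cache marker in a status."""
    def __init__(self):
        super().__init__()
        self.no_cache = False

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "e-mastodon-content-no-cache" in classes:
            self.no_cache = True

def publisher_forbids_caching(status_html: str) -> bool:
    parser = NoCacheFlagParser()
    parser.feed(status_html)
    return parser.no_cache
```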
@furoshiki your idea requires trust..
Do you really trust everyone else on mastodon?
Same as @Kovensky mentioned, Japan does not allow uncensored adult images (which are legal in the US) to exist within its jurisdiction, so the same problem happens on both sides.
Then the question becomes what kind of strategies the Japanese hosts are currently applying, as I think the amount of "particular" content flowing inbound into Japanese jurisdiction would most likely be higher than the amount going outbound, just by comparing the volume of users.
Well, just not caching files won't solve the problem, at least here in Germany. It's about the information that is displayed to the user. If there is CP displayed under my URL, police will knock at my door and seize my stuff, because I'm responsible for what happens under my URL. It's like that.
I must be able to control what is shown on my instance. There is a reason Twitter introduced national filters in 2012. They didn't do it because they had nothing to do, they did because they had to.
I will go as far as saying that Mastodon cannot exist in Germany in its current form. In the end, it will be much safer to connect between instances in the same jurisdiction. So yes, I agree with @gellenburg - we need some kind of two-way opt-in federation. But that's not all: I need to be able to control what's stored on my server, and which toots are shown on it. And I must be able to delete them from my instance upon request, if they're against the law in Germany.
Btw, our Federal Minister of Justice and Consumer Protection, Heiko Maas, is just about to introduce a law against "hate speech", threatening Facebook & co with hefty fines. You can read about it here: https://www.theguardian.com/media/2017/mar/14/social-media-hate-speech-fines-germany-heiko-maas-facebook
(Sorry, couldn't find a more diverse English source. But all in all, it hits the nail on the head.)
So this is not a "what could happen if" kind of thing, this is actually a real threat.
@DanielGilbert I wouldn't be surprised if Facebook and others just withdrew from Germany or implemented geoblocking and refused to serve any content to anybody coming from a German IP address if that passes and if there's not some sort of "safe harbor" provision.
Laws like that will balkanize the Internet faster than any Copyright laws.
Edit to add:
I also find it hard to believe that Tutanota would be held responsible if I, as an American, sent some neo-Nazi, pro-Hitler, Holocaust-denying propaganda to a German friend who has a Tutanota email address. As much as I despise those thoughts and materials, doing so is perfectly legal here in America. Freedom of Speech and all that.
Late back to this thread. Here in the UK:
@gellenburg: Sending it to him via DM is perfectly fine. Posting it on his public Facebook wall will get the service owner (Facebook) into trouble if it cannot delete it.
Internet laws in Germany are a very special topic. And from my impression - no one cares about them. Older people normally don't want to have to deal with that. And I'm talking about people in their 40s, btw. Unfortunately, those are the people making the laws atm.
@DanielGilbert This is extremely worrying as it will probably create the need for whitelisting of other instances on the German ones, excluding all non-whitelisted instances as the default behavior.
We, as a community, should maybe create a "legal taskforce" (God I hate this term, but I can't find a better one) whose job is to collect and lay out all the liabilities and laws for every country in which instances exist (meaning probably every single country on earth at some point), so instance admins can refer to it as a baseline for what they are and are not allowed to do (not as binding legal advice, but as guidelines). It is a tremendous enterprise, but I can't help but think it will be necessary at some point if we want more instance admins to take the plunge (and not risk prison and whatnot).
In parallel, there will be a need for technical solutions (and maybe another working group) in order to help admins comply with said legal requirements, and help protect both the users' privacy and the admins from liability.
Just throwing stuff at the wall here and seeing what sticks. Thoughts?
@furoshiki I'm not sure I understand. Your local cache is only based on the size of your instance users' timelines? If you have a small instance you will always have a small image cache. It's only dependent on your users' timelines, no one else's.
@danmademe You can delete remote posts on your server from the admin interface, so instances can delete content they don't want. Also, if you suspend a remote user on your local instance, all of their posts will be deleted as well.
Also, look at @dannyob's post above. He's the International Director at the Electronic Frontier Foundation. Cops and prosecutors respond to rational incentives as much as anyone else. They don't want to prosecute cases that will make them look bad - CP cases are supposed to be their "slam dunks".
@DanielGilbert For statuses hosted on other instances, Mastodon works more like a cache system than like Facebook, where everything is centralized. This is (theoretically) treated differently according to the EU rules. Not sure how that is implemented in German law.
@andy-twosticks do you have any links to this? It goes against the EU directive in this area so I would love to see what their rationale was.
https://www.eff.org/deeplinks/2017/04/ninth-circuit-sends-message-platforms-use-moderator-go-trial
I would like the one feature I've mentioned to Gargron before: a variable to keep ## (say 2) days' worth of cached images that were not uploaded by my instance's users, that came through the federated timeline, and that were not boosted or replied to (interacted with). Toots from people my users follow could use a different, longer variable - say 5 days of cached images from people my users follow. Also, a max data limit, just in case 5 days goes crazy with thousands of 8 MB posts from one bot. As close to hosting a personal Drupal, Discourse, WordPress, website, etc. as you can get, with only in-instance user uploads, interactions, and statuses to care for.
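A sketch of what that retention rule could look like, using the numbers above; the attachment fields and settings are hypothetical, and a separate pass would still be needed to enforce an overall size cap.

```python
from datetime import datetime, timedelta, timezone

FEDERATED_ONLY_DAYS = 2          # media that only came in via the federated timeline
FOLLOWED_DAYS = 5                # media from accounts that local users follow
MAX_CACHE_BYTES = 20 * 1024**3   # hard ceiling, enforced by a separate eviction pass

def is_expired(attachment, now=None) -> bool:
    """attachment is assumed to expose uploaded_locally, interacted_with,
    followed_locally and a timezone-aware created_at; all illustrative only."""
    now = now or datetime.now(timezone.utc)
    if attachment.uploaded_locally or attachment.interacted_with:
        return False             # never expire our own or interacted-with media
    days = FOLLOWED_DAYS if attachment.followed_locally else FEDERATED_ONLY_DAYS
    return now - attachment.created_at > timedelta(days=days)
```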
Something that I mentioned to Gargron on .social that I feel warrants consideration:
There are countless numbers of image hosting sites now. Imgur. TwitPics (!), Tumblr, Blogger, Giphy, etc.
One thing that can clear a lot (but not everything) up is just to forego hosting images and media on one's instance.
But that would mean implementing stellar support for oEmbed and maybe PubSubHubbub. But that way I could just link to a video hosted at YouTube or Vimeo or LiveLeak, or a GIF hosted at Giphy or Imgur, or a photo hosted at Flickr or 500px or DeviantArt or Tumblr, and save everyone's bandwidth and storage.
Plus, I would think that it would help Admins out legally (except you Germany) if the site was simply embedding content that was hosted on a third-party.
[email protected] here
A universal solution is impossible - too many regions, too many conflicting requirements. Google couldn't even solve this problem, much less the FOSS community. The way I see it, there are two high-level things that can be done:
(1): One tool that would be enormously helpful is an instance region flag on installation. If I specify that my instance is in the US, or ZA, or CN/JP/IR/ZW/whatever, a couple of bullet points about relevant content hosting legislation in my region will help me make better decisions as an admin.
Right now I think there's a real danger that people will spin up instances without understanding, at all, what laws apply to them. It doesn't need to be super-detailed, but just a few words on what's legal in your region would be helpful. Admins can, of course, go totally renegade and disregard them, but then at least Mastodon (the software) is doing its best to inform them of their responsibilities.
(2): I think admins should have several abilities:
While I am personally in favor of open federation and individual responsibility, it's also pretty overwhelming if you're committed to managing a community, and have to take responsibility as an administrator (to some extent) for what the users do on your platform. Anything that can make it easier for an admin to navigate the legal minefields unique to their region will be helpful.
@woganmay I agree with most of your points, but for the last bullet point of (2), I would argue that this is a very bad idea. Auto-blocking based on region (or any other criterion) does nothing to prevent illegal content, and is a big obstacle to free speech. We cannot trust the admins of the blocking server to communicate on this issue, so users may not notice the block at all (arguably a dark pattern, since users will assume Twitter-like global broadcasting).
As I am reading this discussion, I suddenly feel like this is P2P technology from 15 years ago all over again. Why has nobody learned anything from that history?
15 years ago, we were so hyped about building distributed P2P mesh networks on the Internet.
We implemented file sharing, chat, forums, blogs, web pages and everything else on top of those distributed P2P mesh networks.
The result: we faced exactly the same issues we face today.
Copyright infringement, child porn, and other illegal data (Nazi symbols in Germany, for example) spread all over the place.
Distributed caches burden us as the network grows, so the cost of joining the network in terms of computational power, storage, and bandwidth becomes too expensive for newcomers.
If we seriously tried to solve those problems, we would need thousands of full-time employees, money, politics and hardware comparable to Twitter, Facebook, Google or Microsoft. We'd just become one of them.
At that point, there will be other people who think Mastodon is too oppressive to its users, so they'll start developing alternatives which promise "a decentralized alternative to existing platforms, avoiding the risk of a single Mastodon community monopolizing your communication."
I agree with @Tryum
Hotlinking media from instance A is unfair when A is using a cheap server and instance B sends 5 million users to fetch media from A... so the solution could be simple encryption. For each piece of media, store a 4-byte nonce on instance A, then encrypt the media symmetrically on the client side using SHA256(nonce || content-identifier) as the key.
The user would fetch the encrypted blob from their instance's cache (or from instance A if it isn't cached), then fetch the 4-byte nonce from the hotlinked instance, and decrypt the content locally with AES. A single SHA256 hash and AES decryption would not be too slow, unless they were viewing the content on a potato.
It would at least be slightly better than having an unencrypted CP image on your computer.
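A minimal sketch of that key-derivation and decryption idea - purely hypothetical, not anything Mastodon implements - assuming AES-GCM with a 12-byte IV prepended to the stored blob:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(nonce: bytes, content_id: str) -> bytes:
    # Key = SHA256(nonce || content-identifier) as proposed above; 32 bytes -> AES-256.
    return hashlib.sha256(nonce + content_id.encode()).digest()

def decrypt_media(blob: bytes, nonce: bytes, content_id: str) -> bytes:
    # blob: the encrypted media fetched from the local cache (or from instance A);
    # nonce: the 4-byte value fetched separately from the origin instance.
    key = derive_key(nonce, content_id)
    # Assumed wire format: 12-byte random IV prepended to ciphertext + GCM tag.
    iv, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(iv, ciphertext, None)
```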
@dabura667 You really want to talk to a lawyer before implementing something like that. The law does not work like technology does. Encrypting content could plausibly be taken as deliberate action to hinder the enforcement of the law in a situation like this. You might be better off with everything in the clear and cooperating with law enforcement if and when they ask (presuming they're interested in the source of the material, not servers it may have incidentally crossed). This isn't a legal opinion, I'm just saying that might be the case and you should really talk to a lawyer to figure that out.
Totally agree with the encryption idea. Although judges could argue that I can easily access the keys and therefore decrypt the data. Are there countries out there where encryption is illegal?
Unfortunately, it's Easter here, and family duty calls. I will have another look on the issue tomorrow.
Are there countries out there where encryption is illegal?
@DanielGilbert Yes (for various gradations of "illegal").
That's quite a lot. o.O
Did some small research:
From what I've found so far, I don't need to block proactively, only upon request. So a way to block toots from foreign instances might be sufficient for now - plus a solution for the cache, I guess.
After some time to think I've realised that a good solution to this problem is more visibility of reporting functionality.
Reporting should work like this. If a user on my instance has something reported, then I as the admin can take responsibility for it. If that post is on a remote instance then it's reported to the instance admin.
I'm not sure if this is exactly what's happening now, but I assume it is.
But I think that if a user on my instance has reported another user on another instance then I also want to know so I can review it and potentially apply a domain block or a user block.
Building on this, the functionality could be expanded so that other types of blocks are available.
Such as:
I really think this problem could be handled better if there were better tools available to admins, or even another class of users who act as moderators, which would help large instances.
Why do I think this is a good solution? Because then users will have the power to control what they find offensive, and whole instances can become niche in what they allow.
@nightpool I don't have any links re: prosecuting over cache data, but (a) the police of any country don't tend to pay much attention to any conventions that they do not have to pay attention to and (b) EU conventions will soon not apply in the UK anyway?
Re: encryption: Not sure where people are going with this. UK law allows the police to prosecute anyone who refuses to give up the password, AND allows them to prosecute you anyway if you genuinely don't have it. (Maybe even if they only think your 160k block of random numbers is hiding something illegal, at least in theory.)
Even if you enabled genuine e2e encryption of private toots between users, that would not help the two users -- and the admin would have to trust that no user had ever forgotten to set a toot private?
https://github.com/tootsuite/mastodon/pull/1865 - it seems the latest version of Mastodon has addressed this problem. Is a domain-based media blocker the best solution? The pawoo.net team thinks this is a good idea.
Well, it's a bit rough right now, but at least it permits people to communicate :3
Being able to "prevent caching" while "not muted" might be cool too :o
I think this topic has steered away from the original point of this issue. In terms of media caching strategies, I don't really have any solid advice other than some degree of control over where we cache from.
This topic is now closer to enhancements to the domain block system. Naturally, I think this is a worthy and important discussion to have. Domain blocks need finer-grained control, so as I suggested before, a good place to start is to add a couple more types of blocks.
Cache blocking being the focus of this issue.
Could this be handled in a way where you don't have to censor lolicon artists? We have suffered enough for no reason at all, just for expressing ourselves in the form of drawings.
Maybe forbid certain IP ranges from viewing certain tags, for example #lolicon? If I'm not mistaken, while possession and distribution of lolicon can be illegal in places like Australia, the United Kingdom, Canada, New Zealand and France (which is not the majority of western countries, by the way), it is not illegal to view it online.
@westernotaku Please don't do IP- or region-based restrictions. They won't work with VPNs, and we have suffered enough for no reason at all, just by living in a country we didn't choose.
So, well, what do we do? Right now there is a "bandage" with "block caching but must mute".
But the wound is still wide open; we still can't communicate with instances that aren't under the same legal jurisdiction as ours...
I do think the problem is more about storage / bandwidth rather than legal issues...
For me, I would just like an archiving mechanism with which old content (e.g. older than a year) can be moved to some other storage - for example, a remote FTP server with abundant space, or an rclone-encrypted Google Drive Unlimited remote - so that these less-viewed files can be safely removed from the main server to make room for newly generated content (while still being viewable on demand). Before we reach an agreement on the legal issues, implementing such a mechanism will, to me, solve the more urgent problems.
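As a rough illustration only, a hypothetical cleanup script that pushes media files older than a year to an rclone remote; the path and remote name are made up, and this is not an existing Mastodon feature:

```python
import subprocess
import time
from pathlib import Path

MEDIA_ROOT = Path("/home/mastodon/live/public/system")  # assumed media directory
REMOTE = "gdrive-archive:mastodon-media"                 # any configured rclone remote
CUTOFF = time.time() - 365 * 24 * 3600                   # roughly one year ago

for f in MEDIA_ROOT.rglob("*"):
    if f.is_file() and f.stat().st_mtime < CUTOFF:
        # "rclone moveto" uploads the file and removes the local copy on success.
        dest = f"{REMOTE}/{f.relative_to(MEDIA_ROOT)}"
        subprocess.run(["rclone", "moveto", str(f), dest], check=True)
```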
@DanielGilbert
It would be illegal to have CP or CP-like images in the non-volatile cache (aka RAM) here in my jurisdiction.
Disputing this "aka RAM" part here. "Non-volatile" memory refer to storage devices that retain the information after power loss, and is just the opposite of what people have in these SDRAM slots of motherboards. Yes, there are non-volatile types of RAM devices, but people would normally call them flash storage.
(This isn't to say that caching such images in some tmpfs ramdisk is always safe -- you may eventually bump into some swap space and accidentally write it onto the disk, for example.)
@Artoria2e5
My fault. I meant "volatile".
https://www.heise.de/newsticker/meldung/Urteil-Kinderpornos-anklicken-ist-strafbar-931446.html
Translation of the relevant part:
"Already looking at child porn on the Internet is punishable. This follows from the existing legal situation and was now confirmed for the first time by an "Oberlandesgericht" (Higher Regional Court). Also the short-term download into the working memory, without a manual storage, brings users into the possession of the files, is stated in the reasoning of the OLG Hamburg from today's Monday."
Now, one might argue that a server cannot look at files - but I don't want to discuss that with any court here in Germany. ;)
Has the core team considered automatically deleting posts/images/videos after a certain time? E.g. 30 days, 2 weeks, 1 week, etc., or a setting for the instance owner to decide. I know that this may be a digression from the discussion here, and contrary to current Mastodon functionality.
Am raising this because I can imagine the load that instance owners are bearing. Even if an instance owner decides to stop accepting new users, the existing users' new connections with more and more users outside of that instance may already create new exponential storage / bandwidth load on that instance.
As a large majority of instances are basically self-funded, the growth in storage / bandwidth might force some instance owners to pull the plug. That would start a consolidation phase where only the instances with the deepest pockets survive, reducing the diversity of people/ideas/content that Mastodon is so good at fostering.
Also, the appearance of Snapchat, Instagram/Facebook stories etc, may suggest that somehow people are ok with the idea that their created digital content/data do not have to persist and exist permanently. And with the fast moving info-developments now, old posts might not be relevant to someone's followers anyway.
I'm a 2-week old user who believes in the mission of Mastodon.
PS: btw I will just use the chance to say thank you very much to Mastodon maintainers and contributors =) It is just amazing a project of this scale and rapid growth is supported through an open-source community of contributors.
We've implemented some new features since this issue was opened, to help deal with the problem. There are also a couple open issues for more technically specific approaches, so I believe this issue can be closed.
@Gargron Hi, I'm pawoo.net founder (pixiv inc.).
We understand that mature images uploaded by our users are potentially problematic from a legal point of view, since they could be hosted on servers in other countries.
However, we would like to protect our users' works as much as possible, unless they are illegal in Japan. We are caught in a dilemma.
As pawoo.net admins, we would like to obligate users to flag mature content as NSFW. And as for images on the server, we propose that Mastodon...
We are in compliance with the laws of our country and deal with our content accordingly.
We spare no technical effort to resolve this problem.