It appears that the distribution model of Mastodon is similar to WordPress blogs: host your own server, update it independently. The problem WordPress blogs have historically had is that the people setting them up don't take the time to patch them religiously, which has left WordPress infamous as an insecure platform.
When an exploit is discovered that dumps every message and all users' profile information on an instance, is it going to stay live until every deployed server is patched?
The Security page doesn't seem to have any advice related to keeping the server updated regularly, ideally without an inordinate amount of work from the administrator.
Maybe these concerns aren't realistic for some other reason, but I'd love to know how or why.
> The Security page doesn't seem to have any advice related to keeping the server updated regularly, ideally without an inordinate amount of work from the administrator.
It's under installation instructions instead: https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Docker-Guide.md#updating
Updating is usually just `git fetch && git checkout NEW_VERSION_HERE` and restarting Mastodon, sometimes with two or three extra commands described in that version's release notes.
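For a non-Docker install, the full upgrade usually amounts to something like the following (a rough sketch only; the exact steps and version tag come from that release's notes, and paths and service names will vary by setup):

```sh
# Fetch the new release tag and switch to it (substitute the actual version)
git fetch --tags
git checkout v2.1.2

# Pull in any new dependencies, run migrations, and rebuild assets
bundle install
yarn install
RAILS_ENV=production bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails assets:precompile

# Restart the Mastodon services (these unit names are an assumption about the setup)
sudo systemctl restart mastodon-web mastodon-sidekiq mastodon-streaming
```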
Something like 80% of the network is on versions released within the last two months (see https://instances.social/network). None of those versions were urgent in the sense of fixing critical exploits; they were simply continuous improvements. So I think if there were an urgent new version, the upgrade times might be even faster than that.
The joinmastodon.org page filters the instances it displays by a minimum version (at the time of writing, 2.1.2), because we want to ensure people sign up on instances that are maintained regularly.
Because anyone can run Mastodon, it's hard to make any general statements here, but I hope the above observations were helpful.
I think the directory works somewhat well for users new to Mastodon in general, but I don't think it will help with servers that attain a userbase and then stop applying updates due to bus factor or something similar. Those servers have already collected user data; unlisting them won't stop any sort of exploit. I also don't see it as likely that people will reach a popular server through any directory listing after a certain point, especially when considering things outside most people's control, like SEO ranking and such.
I think the root of the WordPress issue is that it requires user intervention to keep up with patching. Is there some sort of auto-patching mechanism that could be developed to prevent that WordPress problem?
I feel like another reason behind the WordPress problem that Mastodon likely doesn't experience is that WordPress is more "set it and forget it" while Mastodon server owners are both highly technical and actively involved in their communities.
That said, I think it'd be neat to have an auto-updating mode. I want to get into Mastodon development, so I'd be super willing to contribute to this feature if there's demand for it!
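Even short of a full auto-updating mode, a small script that checks for a new release and nags the admin would cover a lot of the "forgot to patch" cases. A minimal sketch, assuming a from-source install (the GitHub releases endpoint is real; the checkout path and notification command are assumptions):

```sh
#!/bin/sh
# Sketch: compare the locally checked-out version against the latest GitHub release
# and notify the admin if they differ. Paths and mail setup are assumptions.
LATEST=$(curl -s https://api.github.com/repos/tootsuite/mastodon/releases/latest | grep '"tag_name"' | cut -d '"' -f 4)
CURRENT=$(cd /home/mastodon/live && git describe --tags)

if [ "$LATEST" != "$CURRENT" ]; then
  echo "Mastodon $LATEST is available (currently running $CURRENT)" \
    | mail -s "Mastodon update available" admin@example.com
fi
```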
This is always a slightly annoying thing with open source projects. Releasing a patch for a discovered exploit by its nature reveals the (potentially unknown) exploit to others.
Should there be a major vulnerability, I imagine we might want to take the approach of giving out advance notice, saying "there is a critical vulnerability, and we will be releasing a patch at this date and time; please update as soon as possible after this date."
I understand this; however, updates should always be done manually. Anyone can host an instance. Whenever someone decides to host a userbase, they should be considerate and educated enough to know how to properly manage the software and the machine. A poorly educated admin will quickly put people's lives at risk. Please don't encourage easy-as-123 changes. It SHOULD be hard to set up, and shouldn't necessarily be easy to update either.
It's not only Mastodon that's at risk from vulnerabilities here; the other services and/or the operating system that runs the platform may be at risk too.
Not flaming the OP. Totally understand the concern. At minimum, for a healthy future and open ecosystem (i.e. not WordPress, not Facebook, not birdnet), we should encourage admins to at least be knowledgeable enough to understand version control software like git.
I'm not really attached to nor promoting specific solutions. I'm noting what I see are problems, but I can only rely on the people closer to the project to come up with answers.
I don't think technical barriers to entry are a realistic filtering mechanism. Someone could create an AMI or other wrappers that let these servers get created without much technical expertise; no amount of hoops is going to stop that. Additionally, with that kind of third-party convenience, you may also get a SourceForge problem.
There's definitely some stuff to be said about responsible disclosure and such, but metrics on which servers have been patched and which ones haven't are publicly available. The users' data attached to those servers are sitting ducks until the administrator decides to check their email for a CVE. At what percentage of applied patches would the maintainers be comfortable allowing the researcher to talk about the vulnerability in detail? None? 10%? 2 weeks?
Again, maybe these aren't real problems, but I haven't really seen an answer that refutes the core concern of admin laziness, only that "Mastodon admins aren't lazy".
> At what percentage of applied patches would the maintainers be comfortable allowing the researcher to talk about the vulnerability in detail? None? 10%? 2 weeks?
I'm not sure that type of talk would matter too much - to patch the vulnerability, there would be code changes in this repository, visible to all. In many cases, anyone looking at those changes, with an eye for spotting holes, would find them.
> I'm not sure that type of talk would matter too much - to patch the vulnerability, there would be code changes in this repository, visible to all. In many cases, anyone looking at those changes, with an eye for spotting holes, would find them.
I think that's a very fair perspective on the matter, although I would say that an attacker's work is done for them if an announcement and description of the vulnerability go out shortly after the patch becomes available, leaving little time for server admins to apply the necessary update.
At the risk of bikeshedding this discussion: with a little bit of tweaking, Docker essentially becomes an auto-updating mode.
A few of us have moved the `assets:precompile` step into the Docker build stage (to preserve production server resources) and moved Docker builds onto Docker Hub (automatically triggered by pushes to my fork), so deployment is speedy, cron-friendly, and refreshingly idempotent:

`docker-compose pull && docker-compose run --rm web rails db:migrate && docker-compose up -d`

(See Dockerfile and docker-compose.yml in https://github.com/vulpineclub/mastodon for what is running my instance... you'll want to compare against glitch-soc and not tootsuite.)

Get `db:migrate` moved into the runtime startup, and the front-end nginx moved into Docker (with automagic Let's Encrypt??!!), and installation becomes "install docker-compose, grab `docker-compose.yml` and `.env.production.sample`, tweak it, and there we go."
I _vastly_ prefer this model over WordPress-style self-modification updates, and I'd recommend it over reinventing Yet Another Automatic Update System (as long as bare and Vagrant deploys still work). If there's interest in this, let me know and I'll work it with a higher priority.
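For what it's worth, once the image is pre-built on Docker Hub, that one-liner drops straight into cron; something like this (the schedule, compose directory, and log path are assumptions, not what my instance necessarily uses):

```sh
# Sketch: nightly pull-and-redeploy from the directory containing docker-compose.yml
0 4 * * * cd /home/mastodon/mastodon && docker-compose pull && docker-compose run --rm web rails db:migrate && docker-compose up -d >> /var/log/mastodon-update.log 2>&1
```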