We should overhaul our binary release process.
There should be a single website that contains all binaries, including the soljson nightly binaries.
If possible, this website should be built via jekyll on github-pages. Since github-pages has a size limit, the soljson nightly binaries (or at least the older nightly binaries) should be stored via ipfs and the jekyll website should contain http redirects to an ipfs gateway.
The website should be backwards-compatible with solc-bin.ethereum.org/ (built from https://github.com/ethereum/solc-bin ).
The website should also be fully exposed via ipfs and on a subdomain of solidity.eth via ens.
It should contain all the binaries we usually put on a github release page in addition to the files in solc-bin.
The nightly binaries should be pushed directly from circleci (preferred) or travis or github actions. If possible, we should avoid storing (even an encrypted) access key on circleci or travis, so maybe github actions is the most viable solution there.
Maybe we could already prepare for the macos binaries being available for both intel and arm - so maybe we could find a generic scheme like distribution-buildType-processorArchitecture.
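A quick sketch of how such a generic scheme could look and be parsed. The field values here are illustrative assumptions, not a decided convention:

```python
# Hypothetical naming scheme sketch: distribution-buildType-processorArchitecture.
# All field values below are made-up examples, not decided names.
def binary_name(distribution: str, build_type: str, arch: str) -> str:
    return f"{distribution}-{build_type}-{arch}"

def parse_binary_name(name: str) -> dict:
    # Split into exactly three fields; the architecture may contain underscores.
    distribution, build_type, arch = name.split("-", 2)
    return {
        "distribution": distribution,
        "buildType": build_type,
        "processorArchitecture": arch,
    }

print(binary_name("macos", "static", "arm64"))  # → macos-static-arm64
print(parse_binary_name("linux-static-x86_64"))
```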
We should also re-add the old nightly builds that were removed here: https://github.com/ethereum/solc-bin/commit/e134bbaee4d1f0e87ffbcfa27fb072792eada547
Finally, we should try to add binaries for older versions (especially macos binaries).
I researched our hosting options apart from Github and IPFS.
Hosting nightlies:
Hosting binaries outside of github:
For reference, the requirements posted by @chriseth on Gitter:
(actually the "binary" process)
- at best travis should not be able to push into solc-bin, everything should be done in a pull manner instead that never overwrites certain files (something that actually just happened some hours ago)
- we should have a repository of release binaries as a directory that can be stored on ipfs and also delivered via http
- this repository should contain: soljson wasm and emscripten builds including nightlies. static binaries of various platforms (linux, macos, windows)
- at some later point, we might want to create a nice human-targeted html page on top of that, but that is not required for now
- solc-bin almost does this, but we are hitting a size limit
- potential solution: When solc-bin is rendered via jekyll to github-pages, we could "move" some files somewhere else. For example, assume that someone constantly pins the whole directory on ipfs, then we can have redirects to an ipfs-gateway.
- pinning to ipfs could again be done in a pull fashion: For example the computer in the office could always have a check out of the repository and serve that via ipfs
- in addition, we could have an ens name like binaries.solidity.eth point to this ipfs repository
- we have a weird setup with the "bytecode comparison" where travis and appveyor store certain test run outputs in a certain repository. On that repository, there is a job that compares them to see if we have any platform-dependencies. This could be changed into a pipeline-based job on circleci that just runs on each commit.
- for release builds, I'm fine with manually adding a commit to solc-bin
- for nightly builds, we could have a job running as a github job in the solc-bin repository that just somehow fetches the latest artifact from the circleci servers (and maybe performs some sanity checks)
This is the summary of today's discussion with @chriseth:
- … `solc-bin` that pulls nightlies from a known location (e.g. CircleCI) rather than let external services push to it.
- … `solc-bin` or a fresh one).
- … `solc-bin` is an option too.
- … `solc-bin`. Use whatever is more convenient.
- `list.json` in `solc-bin` already contains Swarm hashes. We could add IPFS addresses there too. The update script should add them.
- … update scripts. One will only add the latest nightly to the index. The other will recalculate the whole index (just as it happens now). (Comment by @chriseth: since the nightly script needs to re-generate `list.json` anyway, it is probably better to have a single script with a switch telling it whether it should update `solc-latest.js` or not.)
- … `solc-bin` that will accommodate new files.
- … `solc-bin`. They contain only file names though, so the domain and path parts are likely hard-coded in tools.
- … `bin/` and `wasm/`) so we could try to extend that convention in some way.
- … `bin/` should not be a big problem. But huge directories are a performance problem on most filesystems, so some structure would still be desirable.
- … `solc-bin` are pushed there from Travis. Travis also pushes the binaries to the release page. And it pushes to the bytecode comparison repo.
- … `solc-bin` must be the exact same binaries as the ones on the release page.
- … `solc-bin` exchange repository and nothing else. Artifacts should be added manually to the github release page and also to `solc-bin` (except for the nightlies).

Actually, I don't think CircleCI should have any part in the release process - all of our builds and tests are run in docker images anyway, so it doesn't matter on which platform they are run. The only advantage of CircleCI is that it's fast, but that's mainly an argument for PRs, not for releases (which should be safe, reliable and robust, but don't need to be particularly fast).
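For illustration, a hypothetical `list.json` entry with an IPFS address added next to the existing Swarm hash. The exact field layout is an assumption and `...` marks placeholders:

```json
{
  "path": "soljson-v0.6.9+commit.3e3065ac.js",
  "version": "0.6.9",
  "build": "commit.3e3065ac",
  "keccak256": "0x...",
  "urls": [
    "bzzr://...",
    "dweb:/ipfs/Qm..."
  ]
}
```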
We need to trust github anyways, there's no feasible way around that, but we can reduce whatever we have to trust on top of it.
So instead of dealing with permissions for or access to CircleCI, let alone creating AWS instances that introduce even more points of attack in the process, I'd just have github actions build releases and pre-releases, pushing them to a solc-bin-like repo (or alternatively, maybe for releases not nightlies, merely create PRs to it, having the actual release binary branch be branch-protected) and adding them as assets to the github release page as well.
The main issue about this is that it should be failure-robust, but that's no argument against having actions (etc) automate this. As long as at any failure of the actions, the failed parts of the process can just be repeated manually, automation makes this more robust, not less robust (due to having a dry run on the PR to release before merging and by preventing human error).
In the best case the first step of a release (after preparations on develop) would be creating a PR from develop to release, which would already run all tests and verifications once. Any failure there should be fixed back on develop. Then upon hitting merge to release, the builds and tests are run again, github actions build all binaries, create a release page draft with the required artifacts and create a PR (resp. one PR per platform) to solc-bin adding the binaries.
Also, if that's a concern: there should not be much duplication between the setup in github actions and CircleCI for building and testing due to this - we should organize the build and test runs independently of the platform they run on anyways, so we should have scripts in the repo for each step, meant just to be run inside our docker image - so we can just reuse the same scripts on github actions or CircleCI (or whereever we may want to).
So much for my opinion about getting the releases both to the github release page and to a repo like solc-bin - publishing further from there to IPFS, or something equivalent to the current gh-pages on top of that, however, is independent of this part.
@ekpyron I don't think we should have server-side automation for the release at all. It is just too prone to failure. My idea would be that someone with write permission to the repo runs a script locally. This script queries the circleci api for the binaries for a certain tag and downloads them. Then we can test them locally if needed. If everything is fine, we can upload them to the release page and create a commit that adds them to solc-bin. Using automation would save us maybe 30-120 seconds per release if everything goes well, but it costs us at least 30 minutes if something goes wrong.
The main server-side automation we need is for the nightly emscripten builds. This nightly build can possibly be created by a github action inside the solc-bin repository (not the solidity repository because it would not be able to easily push to the solc-bin repository) - this is written down at the beginning of the proposal.
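A rough sketch of what such a github action in the solc-bin repository could look like. This is a hypothetical workflow fragment; the schedule, script name and flag are assumptions:

```yaml
# Hypothetical .github/workflows/nightly.yml fragment in solc-bin
on:
  schedule:
    - cron: '0 4 * * *'   # once a day; the time is an arbitrary choice

jobs:
  update-nightly:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Fetch latest nightly from CircleCI and update the index
        run: ./update.py --latest-nightly-only   # hypothetical script and flag
      - name: Commit and push
        run: |
          git config user.name "nightly-bot"     # placeholder identity
          git add .
          git commit -m "Add nightly build"
          git push
```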
I really don't understand the argument. The current automation is indeed prone to failure, but that's for different reasons, because it's too many systems interacting with each other - in what way is having to run a script locally less error prone than running it automatically?
@ekpyron
let alone creating AWS instances
It's more about S3 (so just storage buckets) than machine instances. But yeah, the preference is to avoid having any intermediate storage if we can get files from CircleCI easily. And I'm pretty sure we can.
Actually, I don't think CircleCI should have any part in the release process
I do like the idea of automated releases but it's really orthogonal to this task. The focus of this task is on solc-bin and I'm not really going to touch the build process for releases. I'm going to modify the update script from solc-bin to pull them from wherever they're stored (so CircleCI right now; with or without the intermediate repo/S3 bucket). If we ever change the release process to what you proposed, solc-bin will only need a small adjustment. @chriseth Maybe it would actually make sense for the solc-bin stuff to be performed after a release so that we can just get the release binaries from Github's releases page?
I am going to move nightly builds from Travis to CircleCI but it's mostly a matter of reusing the jobs that are already there and making them run on daily schedule in addition to PR builds. So not much extra work even if we later decide to move them to Github actions. We can't leave them as is anyway - we either need to debug file truncation on Travis or just sidestep the problem by moving somewhere else.
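For reference, reusing the existing jobs on a daily schedule in CircleCI is mostly a matter of adding a scheduled workflow trigger. A sketch, with the job name assumed:

```yaml
# Hypothetical fragment of .circleci/config.yml; the job name is a placeholder.
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"        # daily, at midnight UTC
          filters:
            branches:
              only: develop
    jobs:
      - build_emscripten           # assumed to be an existing job
```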
@cameel Yep, building the binaries and hosting them are indeed somewhat orthogonal, so no reason for not preparing the hosting part right away - still in the end we will have to address both. And I'm not yet convinced at all by @chriseth's position of not automating it (especially, if it's automated properly, i.e. in such a way, that you can still fix things locally and run the very same script locally anyhow, if it fails, if ever necessary).
Let's have a call about this on Wednesday! For me, it's just about having full control and giving away as few permissions as possible.
@cameel if we pull the nightly binary from circleci, you can just search for the latest successful run on develop - no need to actually re-build anything.
if we pull the nightly binary from circleci, you can just search for the latest successful run on develop - no need to actually re-build anything.
Good point. We don't even have to schedule a new job. We can just pull in what the existing jobs build.
I have answers for our questions. Some unfortunately aren't good.
Unfortunately, github does not allow proper HTTP 3xx redirects from GH pages.
From Redirects on GitHub Pages:
For the security of our users, GitHub Pages does not support custom server configuration files such as .htaccess or .conf. However, using the Jekyll Redirect From plugin, you can automatically redirect visitors to the updated URL.
(this is an old article about Github Enterprise but, judging by old, dead links that now redirect to the main help article about Jekyll, it used to be stated on that page too).
Anyway, the only supported way to redirect is an HTML meta refresh tag generated by the jekyll-redirect-from plugin, and that does not cut it for us. It's basically an HTML page with the redirect location in it and I don't think it will work for anything except web browsers.
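For reference, this is roughly what using the plugin looks like - a page with `redirect_to` front matter (the target hash and path are placeholders), which Jekyll renders into an HTML page containing a `<meta http-equiv="refresh">` tag rather than a real 3xx response:

```yaml
---
permalink: /bin/soljson-nightly-example.js   # placeholder path
redirect_to: https://cloudflare-ipfs.com/ipfs/<hash>
---
```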
Looks like our only real option is to point the domain to a small nginx instance and configure it to do the redirects.
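A minimal sketch of what that nginx configuration could look like - the server name, paths and hash are placeholders:

```nginx
# Hypothetical nginx fragment: serve the repository checkout directly and
# 301-redirect archived nightlies to a pinned copy on an IPFS gateway.
server {
    listen 443 ssl;
    server_name solc-bin.ethereum.org;   # placeholder

    root /srv/solc-bin;                  # placeholder checkout location
    autoindex on;                        # plain directory listing

    # One redirect per archived nightly (could be generated from the index).
    location = /bin/soljson-nightly-example.js {
        return 301 https://cloudflare-ipfs.com/ipfs/<hash>?filename=soljson.js;
    }
}
```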
Symlinks on GH pages do work (see for example https://github.com/s4y/gh-pages-symlink-test). But it looks like they indeed create a copy of the file. I couldn't find it explicitly in Jekyll docs (they only say that in safe mode symlinks are ignored) but this old PR clearly shows that the files get copied in production mode or in safe mode (FileUtils.copy_entry() in Ruby preserves symlinks while FileUtils.cp() does not). I haven't tested it yet but I'm pretty sure it will count against the size limit.
From Content Type set by HTTP Gateway #152:
HTTP Gateway does content-type sniffing based on golang.org/src/net/http/sniff.go and file extension. js-ipfs uses similar setup.
The above is a proposal to add a manifest that would allow us to set arbitrary content types. It has not even been accepted yet. We cannot set the types ourselves until it gets implemented, but we can control them to some extent by using the right file extension.
@chriseth
it turns out that the ipfs gateway actually does use
`text/plain` for our `.js` files :(
but I heard there might be a workaround, let me try that
This does not work: https://cloudflare-ipfs.com/ipfs/Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn
Looks like this is because this object does not have a file name so the gateway falls back to detection based on content. Since the .js file is composed mostly of encoded binary data, it thinks it's just plain text. Command-line file utility says the same thing.
The ipfs object get command shows that there are no names in Links:
ipfs object get Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn | jq
{
"Links": [
{
"Name": "",
"Hash": "QmXTFox8jc3duxA4snM3zJokR6L1GJSCUE17v8UhAa5XFF",
"Size": 262158
},
{
"Name": "",
"Hash": "QmWcXmntJyRn8o6CHBYuPQkop6u6P1P4VLHpYkdLY6CAC3",
"Size": 262158
},
...
We just need to make sure that we create objects with file names (rather than just unnamed blocks). One possible problem is that I'm not sure if we can get an ID of such an object knowing only the name and the hash of the file.
I haven't seen any official information about file size or even bandwidth limits for the Cloudflare and ipfs.io gateways. I have seen an issue in go-ipfs about implementing a configuration option for limiting file size - so it looks like such a limit may not even be implemented in the main client at this point.
I see that when you have a link to an artifact you can freely download it without logging in.
As for getting the link, there's an API endpoint for getting links to artifacts of the latest build. The only complication is that it requires an API token. For the script you run manually, you'd have to get your personal token from CircleCI once and then always specify it as a parameter (or we could make the script fetch it from a local config file). For nightlies we'd have to add the token as a secret in repository settings (see Creating and storing encrypted secrets).
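A hedged sketch of what that could look like in a script: build the v1.1 "latest artifacts" URL for the develop branch and pull the download URLs out of the JSON response. The project path and token handling are assumptions; the endpoint shape follows the CircleCI API v1.1 docs:

```python
# Sketch: list artifact download URLs from the latest successful CircleCI
# build on a branch. Project path and token handling are assumptions.
import json
import urllib.request

API_BASE = "https://circleci.com/api/v1.1/project/github/ethereum/solidity"

def artifact_urls(artifacts: list) -> list:
    # The endpoint returns a JSON list of artifact objects with a "url" field.
    return [a["url"] for a in artifacts]

def fetch_latest_artifacts(token: str, branch: str = "develop") -> list:
    url = (f"{API_BASE}/latest/artifacts"
           f"?branch={branch}&filter=successful&circle-token={token}")
    with urllib.request.urlopen(url) as response:
        return artifact_urls(json.load(response))
```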
Ok, but that actually means that since jekyll cannot do http redirects, and we have to fall back to our own server anyway, we don't necessarily need ipfs after all, right? We could just have a server that has built-in hosting of plain directories. This would then also handle symlinks properly and the only "rendering" process is regularly doing a git checkout without the .git subdirectory.
Does the token allow to do anything else on circleci apart from fetching the artefacts?
Ok, but that actually means that since jekyll cannot do http redirects, and we have to fall back to our own server anyway, we don't necessarily need ipfs after all, right?
A server that only does redirects would still be much lighter on bandwidth and storage than one that also serves the files. So a redirect server + IPFS is still a viable alternative. One server that does both would probably be easier to configure and more reliable though (fewer moving parts).
That is true! I just checked the permissions again and it is really horrible, both on the circleci and the github side.
What about just hosting everything in the office ourselves after all - the main issue with that is bandwidth concerns, right? But all the content is long-lived and static, so we could just buy some CDN cache thing in front of it?
Regarding mime-types and IPFS:
While https://cloudflare-ipfs.com/ipfs/Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn serves as text/plain,
https://cloudflare-ipfs.com/ipfs/Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn?filename=soljson.js serves as text/javascript (EDIT: and I just checked: the latter correctly loads into remix)...
The fact that gh-pages use copies for symlinks actually explains why we hit the size limit just now - fixing the short commit hashes introduced a lot of symlinks (for being backwards compatible when renaming the releases)...
What about just hosting everything in the office ourselves after all - the main issue with that is bandwidth concerns, right? But all the content is long-lived and static, so we could just buy some CDN cache thing in front of it?
I think both options fit our requirements. Hosting on S3 will probably require less maintenance while the office computer gives us more control.
Regarding mime-types and IPFS:
Yeah, looks like that would solve the MIME type issue with IPFS.
@ekpyron Here's how I see the tasks here after all the discussions:
* Create a Github action that pulls `soljson.js` nightlies into `solc-bin` from CircleCI. Remove the nightly job from Travis.
* … `solc-bin` and update the index so that it can be manually reviewed, committed and pushed to `solc-bin`.
* Add a CI check on PRs in `solc-bin` to make sure that the committed binaries match the ones on the release page in the solidity repo.
* … `soljson.js` nightlies into the `gh-pages` branch of `solc-bin`; also add release binaries for other platforms.
* … `solc-bin` either to Amazon S3 or to the office computer with a CDN in front of it.
* … `solc-bin` to CDN/S3: either push (a script in CI) or pull (a script on AWS Lambda/office computer).
* … `solc-bin` to IPFS in addition to CDN/S3.

Maybe it's easier to discuss this in the call later, but for the record:
* Create a Github action that pulls `soljson.js` nightlies into `solc-bin` from CircleCI. Remove the nightly job from Travis.
I still think there should rather be a github action in the solidity repo that builds, tests and pushes releases to solc-bin (I read somewhere that that won't work, but I think that's wrong and it will work if we use some github auth tokens - and I don't see harm or danger in storing github credentials on github, since github has access anyways...).
My reasoning for this is that we have to trust github anyways, but there's no need to additionally trust CircleCI. If github is compromised, it can just serve the wrong code to CircleCI, so nothing we can do about it - but if github is fine and CircleCI is compromised, we have a problem, if we build releases there - and that can easily be avoided.
Also I'd argue that other than regular PR test runs, it doesn't matter if release test runs are slightly slower (which would be the only argument against running them in github actions I can think of).
Maybe this also relates to the other point:
Add a CI check on PRs in solc-bin to make sure that the committed binaries match the ones on the release page in solidity repo.
That depends on how the binaries on the release page are built. For emscripten builds this will be fine, but I'm not sure the other builds will be reproducible just yet, so I'm not sure they will be identical. We'd at least need to check this first - we may have reproducible binaries if built with the same script in the same docker image, but we also may not.
So in general I'm still not a fan of using CircleCI and artifacts on it for anything release-related or even for the nightlies for that matter. I'd be fine with doing it, if you all want to - I don't want to block this :-) - but I'm saying it's easily avoided and I'd argue for avoiding it :-).
EDIT: But yeah - in general I'm pretty fine with any working solution to 2. (resp. 3.), although I'd tend to prefer to avoid cloud stuff and to have things primarily on IPFS - or at least something that can be easily migrated towards that.
However, I'm not that satisfied with 1. yet :-).
I read somewhere, that that won't work, but I think that's wrong and it will work, if we use some github auth tokens
My problem with storing tokens was that I've seen threads where people mentioned storing them in a private repo as the only solution - which is not a good place for anything secret in my opinion. But since then I've noticed that github has a feature for adding encrypted secrets in repo settings so maybe that was just some old workaround that's no longer relevant...
Another issue with tokens is that they're something that could leak if not secured properly - any solution where secrets do not have to be explicitly passed around is an advantage in my book.
For emscripten builds this will be fine, but I'm not sure the other builds will be reproducible just yet, so I'm not sure they will be identical.
Right now @chriseth wants release binaries to be uploaded manually so it's easy to ensure that they're the same. Just upload the same files to both places :) This would be just a sanity check to ensure you did just that.
And if we automate publishing releases, I think it would just be a matter of having the same CI job upload both to solc-bin and to the release page.
Yeah, I meant encrypted tokens accessible by github actions only - storing tokens in a private repo would be weird, that's for sure :-).
Alright - I'd say the easiest way to have one CI job uploading to both solc-bin and to the release page is github actions :-).
And (primarily @chriseth): that CI job could just create a release draft and a PR in solc-bin (the release branch of solc-bin should probably even be protected and there should be a separate, non-protected nightly branch) - and it could be failure-resistant (try to continue past errors, e.g. for one platform only or in tests, and just report the failure in the draft/PR instead of aborting). That way I'd argue this will be more reliable than doing things manually (the fact that we're considering a CI job to check if we messed things up manually kind of confirms that...)
Here's a summary of what we discussed today. I'm not sure I got it all unfortunately. Also, if something is wrong, please correct me.
tl;dr: In this task we'll continue with S3, manual releases and getting binaries from CircleCI. We may automate releases later, as a separate task.
- … `release` and `nightly` branches in `solc-bin`, with `release` being protected.

CircleCI API:
I can get binaries from CircleCI with a few lines of curl and jq commands. It's not perfect but should be good enough for us.
API v1.1 will stop being supported at some point in the future:
CircleCI expects to eventually End-Of-Life (EOL) API v1.1 in favor of API v2 as more API v2 endpoints are announced as stable. Further guidance on when CircleCI API v1.1 will be discontinued will be communicated at a future date.
- … `solc-bin` so that it gets pulled in.

Our file list is 0.3 MB, by the way. The one of go-ethereum is 8 MB.
Oh, sorry. I must have misheard that.
Just for the record, I get:
$ time { for i in $(seq 1 10); do wget -q http://solc-bin.ethereum.org/bin/soljson-v0.6.9+commit.3e3065ac.js -O /dev/null ; done ; }
real 0m27,881s
user 0m0,045s
sys 0m0,205s
$ time { for i in $(seq 1 10); do wget -q https://cloudflare-ipfs.com/ipfs/Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn?filename=soljson.js -O /dev/null ; done ; }
real 0m28,938s
user 0m0,538s
sys 0m0,253s
That's a rather crude benchmark of course... but still it doesn't seem to make much of a difference. (Not that that's too surprising.)
EDIT: Fetching directly using an in-browser IPFS implementation instead of a gateway is rather disappointing, though - I didn't get below 2 minutes (for downloading once). Pre-compressing could probably reduce this to less than one fifth of that, but that's still 10 times slower than the gateway, unfortunately...
I have created a Github action for pulling in nightlies into solc-bin.
Turns out to be a bit more work than I expected and there are still a few problems to be solved. Here's a draft PR if anyone wants to take a look: https://github.com/ethereum/solc-bin/pull/29
There are some things that need further discussion. Maybe we need another call (not urgent though - none of this is a blocker for me at this point). Below I gathered the main issues from my discussion with @ekpyron on Gitter, with relevant quotes, for later reference.
We need a final decision on eventually replacing manual releases with an action that generates PRs. We agreed not to do it in this task but do we want it in the long term?
@ekpyron
My favorite solution for that would be to have an action in the solidity repo run on commits to the release branch and create a PR to solc-bin. But as far as I remember we decided to have no automation for releases for the time being and to create manual PRs to solc-bin for now. However, we should probably talk with @chriseth about it again... especially since the need to have a local checkout of solc-bin and run the update script to create such a PR would annoy me - but so far @chriseth always does all of this anyway, so...
Last time we decided to disable github pages from solc-bin once we switch to S3. But maybe we should freeze the gh-pages branch instead and keep it for some time so that tools relying on the github domain rather than solc-bin.ethereum.org keep working? Even solc-js uses it. If so, for how long?
@cameel
How about "freezing" the gh-pages branch so that it keeps working indefinitely and only adding new binaries in a new branch started off it?
I have already created a master branch just for that purpose (I mentioned it in the PR).
@ekpyron
Hm... we should discuss those details with @chriseth and maybe actually even with remix and truffle and whoever mostly pulls the binaries from there...
But yeah - if we move away from github pages... but we keep the github pages that exist frozen as they are now... then that satisfies all backwards compatibility we need...
@ekpyron
Even https://github.com/ethereum/solc-js/blob/master/downloadCurrentVersion.js fetches the lists from github... which we by the way should really change...
Yeah... we might have to keep pushing the binaries to gh-pages at least until 0.7...
After that we could book it as part of a breaking change in solc-js...
Do we want to keep the symlinks indefinitely or are we ok with changing the schema at some point?
@ekpyron
Yeah - a reason for a lot of the symlinks in the repo is fear of breaking hardcoded URLs... but those will probably point to github.io anyways... so less need for backwards compatibility if we migrate to a new URL anyhow...
Serving brotli-compressed files on IPFS could considerably speed up loading of these binaries in the browser directly from IPFS. This might encourage tool developers to use it. Is it worth our effort?
@ekpyron
I wonder whether we should also pin a brotli-compressed version to IPFS... I'd expect that way fetching it with an in-browser ad-hoc IPFS node could reduce to just a few seconds, at least once it's spread a bit... and maybe we could actually promote that and have people try doing that - in the end it's way cooler than any other means of hosting it :-) (although undoubtedly we will need other means in the midterm anyhow).
@ekpyron
Yeah - not sure if it's worth putting a huge amount of effort in it, because I'm not sure how many people would end up using it :-). But personally I'd like to support and promote IPFS - it's just way cooler than any other hosting options :-).
Is it ok to slightly change the naming scheme for new nightlies? I think that leading zeros would make dates look a bit more like dates:
v0.6.11-nightly.2020.7.3+commit.0ac039e4 vs v0.6.11-nightly.2020.07.03+commit.0ac039e4
As proof-of-concept I have brotli-compressed the 0.6.10 binary, pinned it to IPFS and temporarily put up https://test.ekpyron.org/ipfsTest.html for loading and decompressing it (using an in-browser IPFS node and in-browser brotli implementation).
The result for me is in the range of 5-10 seconds loading time. The first load was significantly slower (since the file had to distribute through IPFS). Not too bad, but could be better...
EDIT: ah interesting... apparently the browsers indeed do cache the wasm compilation - the "Executing" part was first around 3-4 seconds for me and is now instant.
I never had any problems with that local checkout
at some point we should shut it down forcefully, I would say - that's the only way to move people over. For now, not updating the list might actually be a solution - at least people will start wondering. In any case, we should already make people aware now and fix our urls.
I think we should keep the schema backwards-compatible as much as possible. We also have to think about how to add binaries for other platforms.
compression on IPFS: I would not spend too much time on it - the long term goal is that people run their own ipfs node and then you would just download the compiler once anyway
the zeros in the version are dropped because that is demanded by the semver standard. Actually I'm wondering if the naming scheme should not rather reflect the string returned by solc --version
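To illustrate: SemVer treats dot-separated pre-release fields that look like numbers as numeric identifiers, and numeric identifiers must not have leading zeros, so a compliant version string has to drop them. A sketch, with the format string inferred from the examples above:

```python
# Sketch: build a nightly version string from a date. Python date fields are
# plain ints, so formatting them drops leading zeros, as SemVer requires.
from datetime import date

def nightly_version(release: str, d: date, commit: str) -> str:
    return f"v{release}-nightly.{d.year}.{d.month}.{d.day}+commit.{commit}"

print(nightly_version("0.6.11", date(2020, 7, 3), "0ac039e4"))
# → v0.6.11-nightly.2020.7.3+commit.0ac039e4
```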
Note from gitter discussion this morning:
Once we tackle windows builds we should:
@chriseth Looks like having a mirroring script on AWS CodeBuild triggered by pushes to solc-bin would require giving AWS admin access to webhooks in solc-bin and, for some reason, also write access to the repo. Also, to all the private repos of anyone who adds the app. So that's a no-go.
The other choices are AWS Lambda and GH actions:
- … `solc-bin` repo.

The discussion here is pretty long so I'm going to make it even longer by summarizing what's left to do here :)
- … `solc-bin` (comparing them with the github release page and validating names) (#9930)
- … update script in `solc-bin`:
- … `wasm/` automatically. DONE: https://github.com/ethereum/solc-bin/pull/49.
- … `solc-bin.ethereum.org` domain (#9934).

I think we should just create separate issues for these or this one will go on for ages:
- … solidity repo /releases (#9931).
- … soljson nightlies

Any updates on this? Pretty anxious about the huge devex improvements this will unlock
@fzeoli There were a few things in the last weeks that, taken together, pulled me away from this task for quite a while (the internal hackathon, my vacation, then post-0.7.0 bug squashing), but that's over now and I'm getting back to it when I'm done with the bug I'm currently working on.
Is devex blocked by anything specific? The main part of this task (i.e. changing the way binaries are hosted and the release process) is done already. I've been posting updates about that part in the more specific #9226 - older binaries for old platforms are already available in solc-bin repo and solc-bin is automatically mirrored to S3 and available at solc-bin.ethereum.org. The two most important remaining things are the pre-0.6.9 MacOS builds and the automated sanity check for release PRs.
Anything below "Stuff for later" in the post above won't be covered by this issue (I'll create separate ones because this one is already too big). Of these things, IPFS is just a nice-to-have unless it turns out that there's actually some community interest in it, and a completely static build for Windows plus support for MSVC 2019 is already in the works by @christianparpart (#9594, #9476).
The most important thing is the pre-0.6.9 mac builds. This being missing is why tools don't use the native compiler, so once that's done we can push pretty much everyone to it
@fzeoli See https://github.com/ethereum/solidity/issues/9226#issuecomment-699933508. I finished preparing the MacOS builds and they're currently going through review.
I'm closing this since the core issue is solved (the release process has been changed and we have new hosting) and all the smaller related problems are either solved as well or have their own issues (see https://github.com/ethereum/solidity/issues/9258#issuecomment-663174907 above for details).