There is usually a lag time of a few days between when the PHP team cuts a release and when these images are updated. Is there any way to reduce that lag time? Perhaps a bot that watches for new tags in the PHP repo and then triggers a rebuild?
Or is there another required-to-be-manual step that I'm unaware of, which would prevent this from being achievable?
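For what it's worth, the watching part itself could be quite small. Here's a minimal sketch, assuming the bot keeps track of the last tag it built and fetches the newest upstream tag (e.g. from the GitHub tags API); fetching is stubbed out as a plain argument, and all names here are hypothetical:

```shell
#!/bin/sh
# Hypothetical tag-watcher sketch: compare the latest upstream tag against
# the last one we built, and decide whether to trigger a rebuild.
# In a real bot, latest_tag would come from the GitHub API
# (GET /repos/php/php-src/tags); here it is passed in directly.
needs_rebuild() {
    latest_tag="$1"   # newest tag reported upstream
    built_tag="$2"    # tag we last built images for
    [ "$latest_tag" != "$built_tag" ]
}

if needs_rebuild "php-7.2.2" "php-7.2.1"; then
    echo "trigger rebuild"
else
    echo "up to date"
fi
```

The hard part, as described below, isn't detecting the tag — it's the build time that follows.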
It's usually the time it takes to actually build PHP which causes most of the delay. We support 4 separate versions of PHP simultaneously, and for each of those anywhere from 7 to 10 different variants, so if there's a new release of all four versions simultaneously (as is often the case), then we have to build PHP from source roughly 31 times (and that's not even counting the now-multiarch builds the official images do).
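To make the combinatorics concrete, here's a rough sketch of why release day is expensive. The version and variant lists below are illustrative only, not the exact set the official images ship:

```shell
# Illustrative only: every supported version is built for every variant it
# ships, and each build is a full from-source compile of PHP.
versions="5.6 7.0 7.1 7.2"
variants="cli apache fpm alpine zts"   # real images have 7-10 per version
builds=0
for v in $versions; do
    for variant in $variants; do
        # roughly: docker build -t "php:$v-$variant" "$v/$variant"
        builds=$((builds + 1))
    done
done
echo "$builds from-source builds per release wave"
```

With the real variant counts (7–10 per version) the total lands around the "roughly 31" figure above, before multi-arch rebuilds multiply it further.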
We have an automated bot which performs the update commits, and it does a build test before pushing the commits. Then we make a PR over to https://github.com/docker-library/official-images (which is the official images source-of-truth), where another build test is performed. After that merges, everything is built one final time (per architecture this time!) on the official images build servers.
@tianon said:

> It's usually the time it takes to actually build PHP which causes most of the delay. We support 4 separate versions of PHP simultaneously, and for each of those anywhere from 7 to 10 different variants, so if there's a new release of all four versions simultaneously (as is often the case), then we have to build PHP from source roughly 31 times (and that's not even counting the now-multiarch builds the official images do).

Is this done manually? It _sounds_ like this could be done with something like Travis CI or CircleCI with pass/fail statuses. But again, this may be my own naïve understanding of the scope of the work.
> We have an automated bot which performs the update commits, and it does a build test before pushing the commits.

Makes sense.
> Then we make a PR over to https://github.com/docker-library/official-images (which is the official images source-of-truth), where another build test is performed.

Is _this_ a step that could be automated as part of a pipeline? I would expect it could be via the GitHub API.
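For reference, the PR-opening step does map onto GitHub's "create a pull request" endpoint (`POST /repos/{owner}/{repo}/pulls`). A hedged sketch — the branch name, token variable, and PR title below are illustrative, not the project's actual conventions:

```shell
# Build the JSON body for GitHub's "create a pull request" API.
# The field names (title/head/base) are the real API fields; the values
# here are made up for illustration.
payload() {
    printf '{"title":"Update php","head":"%s","base":"master"}' "$1"
}

# A real pipeline step would then do something like (not executed here):
# curl -X POST -H "Authorization: token $GITHUB_TOKEN" \
#      -d "$(payload update-php-7.2.2)" \
#      https://api.github.com/repos/docker-library/official-images/pulls
payload "update-php-7.2.2"
```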
> After that merges, everything is built one final time (per architecture this time!) on the official images build servers.

Makes sense again.
To be clear, I'm not complaining — and I hope I don't come off that way. I'm just attempting to understand the requirements of the pipeline to see if there's room for more optimization.
I know that 7.2.2 was tagged in GitHub two nights ago, and I like to test the latest patches sooner rather than later. A few days isn't the end of the world, but when I was digging in to this project, I didn't see any explanation of the process or thinking behind how these images are built that could answer my question. :)
Does this project need help with automation? I use these images for my production apps, so I could happily contribute some level of effort toward making them more efficient.
Let me run through the steps currently, and let you know which is automated. We wouldn't turn down help. :heart:
- `update.sh`: if there are changes, it makes a commit, builds and tests the images, and then pushes the commit
- `generate-stackbrew-library.sh`: we run it and put the output in the correct `library/` file, then we make a PR with the changes
- `docker build` context changes (output of `diff-pr.sh`) are tested for the `library/` files that are changed by the PR. It is not granular enough to know which specific image tags have changed.
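Tying those steps together, here's a dry-run sketch of the flow as I understand it. The two script names (`update.sh`, `generate-stackbrew-library.sh`) come from the description above; the `run` wrapper only prints each step, so the sequence can be read without the actual repos, and the commit message and build command are placeholders:

```shell
# Dry-run sketch of the release pipeline described above; run() prints
# instead of executing, so nothing here touches real repos.
run() { echo "would run: $*"; }

release_pipeline() {
    run ./update.sh                        # step 1: regenerate Dockerfiles
    run git commit -am 'Update to 7.2.2'   # commit if anything changed
    run docker build .                     # build + test before pushing
    run git push
    run ./generate-stackbrew-library.sh    # step 2: regenerate the library/ file
    # step 3 (manual today): open a PR to docker-library/official-images
    # with the new library/ contents; its CI then tests the changed
    # docker build contexts (output of diff-pr.sh)
}

release_pipeline
```

Automating the hand-off between step 2 and step 3 (the PR itself) looks like the most tractable piece from the outside.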