PHP: Less lag-time between release of PHP and rebuild of these images?

Created on 31 Jan 2018 · 3 comments · Source: docker-library/php

There is usually a lag-time of a few days between when the PHP team cuts the release, and when these images are updated. Is there any way to reduce that lag-time? Perhaps a bot that watches for new tags in the PHP repo, then triggers a rebuild?

Or is there another required-to-be-manual step that I'm unaware of, which would prevent this from being achievable?
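For illustration only, a minimal sketch of such a tag-watching bot, assuming a cached file of previously seen tags (the API call, file names, and extraction are placeholders, not anything the project actually runs):

```shell
#!/bin/bash
# Hypothetical tag watcher: compare the current tag list against the tags
# we have already seen, and print anything new (a real bot would trigger
# a rebuild instead of printing).

# In a real run, the current tags might come from something like:
#   curl -fsSL 'https://api.github.com/repos/php/php-src/tags?per_page=100' \
#     | grep -oE '"name": "[^"]+"' | cut -d'"' -f4 > current-tags.txt

new_tags() {
    # Print lines present in the current list ($2) but not the seen list ($1).
    comm -13 <(sort "$1") <(sort "$2")
}

# Example usage with local data (no network):
# printf 'php-7.2.0\nphp-7.2.1\n' > seen-tags.txt
# printf 'php-7.2.0\nphp-7.2.1\nphp-7.2.2\n' > current-tags.txt
# new_tags seen-tags.txt current-tags.txt   # prints php-7.2.2
```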

Request


All 3 comments

It's usually the time it takes to actually build PHP which causes most of
the delay. We support 4 separate versions of PHP simultaneously, and for
each of those anywhere from 7 to 10 different variants, so if there's a new
release of all four versions simultaneously (as is often the case), then we
have to build PHP from source roughly 31 times (and that's not even
counting the now-multiarch builds the official images do).

We have an automated bot which performs the update commits, and it does a
build test before pushing the commits. Then we make a PR over to
https://github.com/docker-library/official-images (which is the official
images source-of-truth), where another build test is performed. After that
merges, everything is built one final time (per architecture this time!) on
the official images build servers.

@tianon said:

It's usually the time it takes to actually build PHP which causes most of the delay. We support 4 separate versions of PHP simultaneously, and for each of those anywhere from 7 to 10 different variants, so if there's a new release of all four versions simultaneously (as is often the case), then we have to build PHP from source roughly 31 times (and that's not even counting the now-multiarch builds the official images do).

Is this done manually? It _sounds_ like this could be done with something like Travis-CI or Circle-CI with pass/fail statuses. But again, this may be my own naïve understanding of the scope of the work.

We have an automated bot which performs the update commits, and it does a build test before pushing the commits.

Makes sense.

Then we make a PR over to https://github.com/docker-library/official-images (which is the official images source-of-truth), where another build test is performed.

Is _this_ a step that could be automated as part of a pipeline? I would expect it could be via the GitHub API.
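(For what it's worth, opening that PR through the GitHub REST API can be scripted; here is a rough sketch, where the fork, branch names, and token handling are all assumptions:)

```shell
#!/bin/bash
# Hypothetical sketch of opening the official-images PR via the GitHub
# REST API. The head branch and token are placeholders.

pr_payload() {
    # Build the JSON body for POST /repos/docker-library/official-images/pulls
    printf '{"title":"%s","head":"%s","base":"%s"}' "$1" "$2" "$3"
}

open_pr() {
    curl -fsS -X POST \
        -H "Authorization: token $GITHUB_TOKEN" \
        -d "$(pr_payload 'Update php' 'myfork:php-update' 'master')" \
        'https://api.github.com/repos/docker-library/official-images/pulls'
}
```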

After that merges, everything is built one final time (per architecture this time!) on the official images build servers.

Makes sense again.


To be clear, I'm not complaining — and I hope I don't come off that way. I'm just attempting to understand the requirements of the pipeline to see if there's room for more optimization.

I know that 7.2.2 was tagged in GitHub two nights ago, and I like to test the latest patches sooner rather than later. A few days isn't the end of the world, but when I was digging into this project, I didn't see any explanation of the process or the thinking behind how these images are built that could answer my question. :)

Does this project need help with automation? I use these images for my production apps, so I could happily contribute some level of effort toward making them more efficient.

Let me run through the steps currently, and let you know which is automated. We wouldn't turn down help. :heart:

  • find and commit updates from upstream
  • make PR to docker-library/official-images

    • not automated

    • @tianon and I have all of the repos in docker-library (and some others) checked out locally; we run a script over them that invokes each repo's generate-stackbrew-library.sh and puts the output into the correct library/ file, then we make a PR with the changes

  • PR is given a diff of docker build context changes (output of diff-pr.sh)

    • not automated

    • currently run by @tianon or myself on each PR

  • PR is given a build test (output of test-pr.sh)

    • not automated

    • as far as I have seen, this would very much overwhelm free services like TravisCI; many builds take several hours or more

    • run by @tianon or myself, usually on my machine to maximize build cache

      • since my machine builds every official image, it can use the Docker build cache and our "bashbrew/cache" tags (see code), so it does not have to rebuild each image every time if there are no changes between images and the PR

      • it only builds images from library/ files that are changed by the PR; it is not granular enough to know which specific image tags have changed

  • after merge, images are built and pushed to Docker Hub on all supported architectures
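The stackbrew-library step above could be sketched roughly like this (the checkout layout and paths are assumptions, not the maintainers' actual script):

```shell
#!/bin/bash
# Run a repo's generate-stackbrew-library.sh and write its output into the
# matching library/ file in an official-images checkout. Paths below are
# hypothetical.

update_library() {
    local repo="$1" official="$2"
    local name
    name="$(basename "$repo")"   # e.g. ".../docker-library/php" -> "php"
    "$repo/generate-stackbrew-library.sh" > "$official/library/$name"
}

# Looping over every checked-out repo might look like:
# for repo in ~/docker-library/*/; do
#     update_library "${repo%/}" ~/docker-library/official-images
# done
```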