As part of some CI/CD steps it's necessary to run a one-off job from inside the cluster. An example from my pipeline would be running a Django database migration before upgrading the code in staging/production.
There are a few ways of achieving this that I've come across:

kubectl run

I've tried these in Kustomize, and getting k8s Jobs to play nice is a little challenging.
These Jobs need to consume Secrets/Configmaps that are created and hashed by Kustomize, so I think they need to be generated by Kustomize too.
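To make that concrete, here's a minimal sketch of the kind of kustomization I mean (all names are illustrative, not from a real project):

```yaml
# kustomization.yaml (sketch; names are illustrative)
configMapGenerator:
- name: app-config
  literals:
  - DJANGO_SETTINGS_MODULE=myapp.settings.staging
resources:
- migration-job.yaml
```

In migration-job.yaml the container would reference app-config via envFrom/configMapRef; kustomize rewrites that reference to the hashed name (app-config-&lt;hash&gt;), which is exactly why the Job manifest has to go through kustomize too.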
Since a Job name must be unique, unless I have some way of transforming the Job name into something unique, I can't re-run a job inside the same namespace (e.g. I want to apply new code in staging => re-run migration job). A possible workaround here would be to always try to do a kubectl delete job myjob before applying the new kustomize build output, but then I lose the old job history.
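For reference, the delete-first workaround would look roughly like this (the Job name "migrate" and the overlay path are assumptions):

```shell
#!/bin/sh
# Delete-first workaround sketch. --ignore-not-found makes the first run
# a no-op; note the old Job's logs and history are lost at this point.
kubectl delete job migrate --ignore-not-found
kustomize build overlays/staging | kubectl apply -f -
```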
In some cases (per-branch / per-commit / per-tag jobs) it might be enough to do name hashing based on the contents of the Job. I could easily imagine wanting to run a job on every apply, though, i.e. requiring fully randomized Job names.
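The content-hash variant could be sketched like this; the literal below stands in for reading the actual Job manifest, and sha256sum (coreutils) is assumed available:

```shell
#!/bin/sh
# Content-hash naming sketch: the suffix changes only when the Job manifest
# changes, so re-applying an unchanged Job reuses the same name.
# The literal stands in for something like: cat migration-job.yaml
MANIFEST="apiVersion: batch/v1"
HASH=$(printf '%s' "$MANIFEST" | sha256sum | cut -c1-8)
echo "migrate-${HASH}"
```

For the run-on-every-apply case, the hash input would have to include something unique per invocation (a timestamp or random value) instead.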
I don't see any issues/discussion on this subject; what's the current best practice?
@paultiplady I'm trying to understand your problem. You want to run some CI jobs, and you want every job to have a different name. You can try this: put the Secrets/ConfigMaps and your Job into the same kustomization. Then for every job you need to run, add a different namePrefix, maybe the commit hash. You can do this with kustomize edit set nameprefix <commit hash>, then kustomize build . | kubectl apply -f -. The jobs will then have different names.
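If I read that suggestion right, the flow would be something like the sketch below; "abc1234" stands in for $(git rev-parse --short HEAD), and the kustomization directory is an assumption:

```shell
#!/bin/sh
# Per-commit nameprefix sketch; "abc1234" stands in for the real
# short commit hash from: git rev-parse --short HEAD
COMMIT="abc1234"
PREFIX="${COMMIT}-"
echo "$PREFIX"
# Then, per run, inside the kustomization directory:
#   kustomize edit set nameprefix "$PREFIX"
#   kustomize build . | kubectl apply -f -
```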
Here is the context for myself and possibly others:
I have a CI/CD pipeline that requires database migrations to run before a deployment begins. The database migration needs to exit successfully (with a zero status) before the deployment can start its rollout.
I am currently using Kubernetes Jobs to run the migration beforehand, checking the result and then setting a deployment's image if it succeeds.
I understand coupling migrations to deployments is bad practice but in general, there are many times where I need to couple a kubernetes Job to a deployment rollout. It's not useful to do an initContainer because an initContainer is per pod. I need an initJob that can run before a deployment rollout.
Obviously this scope is quite large but it would be nice to use kustomize to generate a job per deployment similar to what is being done for configMaps.
@paultiplady @lswith There is a proposal to handle your database migration before rolling out a deployment: https://github.com/kubernetes/community/pull/1171. Until these hooks are available, continuing to use a Kubernetes Job is a good approach.
Now, how could Kustomize help with this? I don't think kustomize can handle the whole thing by itself, but coupling Kustomize with some scripts could definitely help. For example, you can try separating your configs into two kustomizations: one for the Job running the database migration, the other for the deployment itself. Any ConfigMaps and Secrets shared by the Job and the deployment go into a common base. Then the script needs to kubectl apply the Job kustomization and watch its status; once the Job succeeds, the script applies the other kustomization.
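As a sketch of such a driver script (directory names, the Job name, and the timeout are assumptions; kubectl wait does the status-watching):

```shell
#!/bin/sh
# Two-kustomization driver sketch; names and paths are assumptions.
# Shared ConfigMaps/Secrets live in a common base used by both kustomizations.
set -e
# Apply the migration Job and block until it completes (or times out).
kustomize build migration | kubectl apply -f -
kubectl wait --for=condition=complete --timeout=300s job/migrate
# Only then roll out the app itself.
kustomize build app | kubectl apply -f -
```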
@paultiplady @lswith Have you had a chance to look at and try the two-kustomization approach? Any suggestions or comments? If it looks good, we will document this approach.
I am currently using the approach of a bash script which handles running the job before generating the deployment. I've separated it into two kustomizations.
I would suggest trying to run these one-off jobs in bare pods without any controller. I haven't tried it, but it seems like it may be a solution.
You have even more issues if you need to run multiple jobs in succession; and if a migration step fails, Kubernetes will just keep trying to rerun it. It seems the solution to each issue is to create a Deployment with a replica count of 1 and, once it's done successfully, deploy. Is there an effective alternative?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@Liujingfang1: I think the simplest thing here would be first-class support for setting a jobSuffix, editable with kustomize edit set.
Right now, my release script (that is, the script that creates a release, not the deployment) does this:
cd $PROJECT_ROOT/containers/release/base
kustomize edit set image joy-client=$CLIENT_IMAGE:$VERSION_TAG
kustomize edit set image joy-server=$SERVER_IMAGE:$VERSION_TAG
The base has everything of mine, including the migration job. If we had a simple command for a kustomize jobSuffix or something of the like, then I could just also call, right below:
kustomize edit set jobSuffix migrations=$VERSION_TAG
Or something like that, thus updating my kustomization.yaml nicely and letting my deploy script stay as simple as:
kustomize build containers/release/overlays/production | kubectl apply -f -
No waiting, or anything else.
I'm surprised k8s / kustomize don't seem to have a good way to accomplish something like a one-off DB migration. It seems this could have been handled pretty well with a docker-compose run.
workaround: https://github.com/kubernetes-sigs/kustomize/issues/903#issuecomment-590593588
The workaround suggested in the comment above is to delete the old job and then re-run it. That works, but ideally you would simply create a new job with a name like job-name-[timestamp] or job-name-[iteration]. I.e., kustomize could help by providing simple templating support, so that a unique job id is generated for each invocation.
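Until something like that exists, kustomize edit set namesuffix can approximate the job-name-[timestamp] idea; the base name "job-name" and the suffix format are assumptions:

```shell
#!/bin/sh
# Per-invocation unique name sketch, using a timestamp suffix.
SUFFIX="-$(date +%Y%m%d%H%M%S)"
echo "job-name${SUFFIX}"
# Then, in the kustomization directory:
#   kustomize edit set namesuffix "$SUFFIX"
#   kustomize build . | kubectl apply -f -
```

Unlike the content-hash approach, this produces a new Job name on every apply, so old Jobs accumulate and need garbage collection eventually.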
Given that Kustomize markets itself as "template-free configuration", I suppose this would be considered out of scope.