I have database migrations which I'd like to run before deploying a new version of my app into a Kubernetes cluster, and I want these migrations to run automatically as part of my CI pipeline. Blocking while waiting on the result of a queued-up job used to require hand-rolled scripts, but that isn't necessary anymore thanks to the kubectl wait command.
Here's how I'm running db migrations in CI:
kubectl apply -f migration-job.yml
kubectl wait --for=condition=complete --timeout=60s job/migration
kubectl delete job/migration
If the migration fails or times out, one of the first two commands returns a non-zero exit code, which terminates the rest of the CI pipeline.
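A hedged sketch of how these three commands might be wrapped in a CI step; set -e is what makes a non-zero exit code from apply or wait abort the pipeline, and the log dump on failure is an optional extra, not part of the original commands:

#!/usr/bin/env sh
set -e  # abort on the first non-zero exit code

kubectl apply -f migration-job.yml

if ! kubectl wait --for=condition=complete --timeout=60s job/migration; then
  # Surface the migration output before failing the pipeline.
  kubectl logs job/migration
  kubectl delete job/migration
  exit 1
fi

kubectl delete job/migration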
migration-job.yml describes a Kubernetes Job resource configured with restartPolicy: Never and a reasonably low activeDeadlineSeconds.
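For illustration, a minimal sketch of what such a manifest could look like; the image name, command, and backoffLimit value are assumptions, not from the post:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration              # matches the job/migration name used above
spec:
  activeDeadlineSeconds: 60    # kill the job if it runs longer than this
  backoffLimit: 0              # hypothetical: fail fast instead of retrying
  template:
    spec:
      restartPolicy: Never     # don't restart the migration container on failure
      containers:
        - name: migrate
          image: myapp:latest          # hypothetical image
          command: ["bin/migrate"]     # hypothetical migration entrypoint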
You could also use the spec.ttlSecondsAfterFinished attribute instead of manually running kubectl delete, but at the time of writing that's still in alpha and, at least on Google Kubernetes Engine, not supported.
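On clusters where the feature is available, the cleanup step could instead be declared in the Job spec itself; a sketch (the 120-second value is an arbitrary example):

spec:
  ttlSecondsAfterFinished: 120   # delete the finished Job after ~2 minutes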