I am doing parallel steps as -

stages {
    stage('Parallel build LEVEL 1 - A,B,C ...') {
        steps {
            parallel (
                "Build
To make B execute after C:

parallel (
    "Build A": {
        node('Build_Server_Stack') {
            buildAndArchive(A) // my code
        }
    },
    "Build C then B": {
        node('Build_Server_Stack') {
            buildAndArchive(C)
            buildAndArchive(B)
        }
    }
)
...which isn't very interesting.
A more interesting case is when you have 4 jobs, A, B, C & D, with C depending on A only, and D depending on both A and B, like so:

A   B
| \ |
C   D
What makes this interesting is that you can't express this in a Jenkins pipeline directly. No matter how you arrange the jobs in different parallel blocks, you'll always force one job to wait on another unnecessarily. You might reasonably arrange your jobs as:
[A, B]
[C, D]
But then C will need to wait for B even if A completes quickly.
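In scripted-pipeline terms, this first arrangement would be two parallel blocks in sequence, something like the sketch below (reusing the question's buildAndArchive helper; node blocks omitted for brevity):

```groovy
// Arrangement [A, B] then [C, D]: the second parallel block
// cannot start until *both* A and B have finished.
parallel(
    "Build A": { buildAndArchive(A) },
    "Build B": { buildAndArchive(B) }
)
parallel(
    "Build C": { buildAndArchive(C) },
    "Build D": { buildAndArchive(D) }
)
```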
Alternatively:
[A]
[C, B+D]
But now D has to wait for A & B in series.
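As a sketch, again with the question's buildAndArchive helper and node blocks omitted, this second arrangement chains B and D inside one parallel branch:

```groovy
// Arrangement [A] then [C, B+D]: D only starts after B,
// which itself only starts after A has finished,
// even though B has no real dependency on A.
buildAndArchive(A)
parallel(
    "Build C": { buildAndArchive(C) },
    "Build B then D": {
        buildAndArchive(B)
        buildAndArchive(D)
    }
)
```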
Presumably most people will have enough information to be able to choose a configuration that's "good enough", but it's unfortunate that Jenkins doesn't appear to have a general solution for this. It's not like this is a new idea.
To work around this, I run all my parallel threads simultaneously, then make each of them wait for its dependencies in turn. Something like Java's CountDownLatch would be the perfect primitive for this waiting, but it doesn't play nicely with Jenkins. The waitUntil Jenkins step seems ideal, but as it's based on polling there's inevitably a delay between a job finishing and waitUntil noticing. lock, on the other hand, behaves like a mutex. By combining the two we get the behaviour we need: a job starts almost immediately after its dependencies complete.
// Map job name to the names of jobs it depends on.
jobDependencies = [
    "A": [],
    "B": [],
    "C": ["A"],
    "D": ["A", "B"]
]
lockTaken = [:]
threads = [:]
jobDependencies.each { name, dependencies ->
    threads[name] = {
        // Use a lock with a name unique to this build.
        lock("${name}-${BUILD_TAG}") {
            // Notify other threads that the lock has been taken and it's safe to wait on it.
            lockTaken[name] = true
            dependencies.each { dependency ->
                // Poll until the dependency takes its lock.
                waitUntil {
                    lockTaken[dependency]
                }
                // Wait for the dependency to finish and release its lock.
                lock("${dependency}-${BUILD_TAG}") {}
            }
            // Actually run the job.
            buildAndArchive(name)
        }
    }
}
parallel threads
This works well enough, although it feels like there must be a better solution out there... I'm hoping that by posting this answer someone will notice and either a) tell me I'm wrong and point out the right answer; or b) make a plugin to do it properly ;)