I'm using a Jenkins declarative pipeline with Docker Agents to build and test my software, including running integration tests using testcontainers. I can run my testcontainers tests on my development machine, but they fail when the build runs inside the Jenkins Docker Agents.
After some experimentation, I've discovered the cause of the problem. The crucial action is trying to create a Docker bridge network (using docker network create, or a testcontainers Network object) inside a Docker container that is itself running on a Docker bridge network. If you do this you will not get an error message from Docker, nor will the Docker daemon log contain any useful messages, but attempts to use the network will fail with "no route to host".
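For what it's worth, the failure mode can be sketched with plain docker commands; this is a hypothetical reproduction, and the network, container, and image names are all illustrative:

# Put an "agent" container on a bridge network, with the Docker socket
# mounted so it can drive the host's Docker daemon (as a Jenkins Docker
# agent does):
docker network create outer
docker run -d --name agent --network outer \
    -v /var/run/docker.sock:/var/run/docker.sock docker:cli sleep 600

# From inside it, create a second bridge network and start a container on
# it. Both commands succeed, and the daemon logs nothing unusual:
docker exec agent docker network create inner
docker exec agent docker run -d --name svc --network inner nginx:alpine

# But the agent cannot reach the new container, because the two bridge
# networks are isolated from each other by the daemon's iptables rules:
SVC_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' svc)
docker exec agent wget -T 3 -qO- "http://$SVC_IP/"   # fails to connect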
I fixed the problem by giving my outermost Docker containers (the Jenkins Agents) access to the host network, by having Jenkins pass a --network="host" option to its docker run command:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.jenkinsAgent'
            additionalBuildArgs ...
            args '-v /var/run/docker.sock:/var/run/docker.sock ... --network="host" -u jenkins:docker'
        }
    }
    stages {
        ...
That is OK because the Jenkins Agents do not need the level of isolation given by a bridge network.
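As an aside, the args string is passed through to the docker run command that Jenkins issues when it starts the agent container, so the effect is roughly the following (a sketch; the image is whatever Jenkins built from Dockerfile.jenkinsAgent, and cat is what the Docker Pipeline plugin runs to keep the agent alive):

docker run -d -t \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --network="host" \
    -u jenkins:docker \
    <image-built-from-Dockerfile.jenkinsAgent> cat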
In my case it was enough to add two arguments to the Docker agent options: a bind mount of the Docker socket (-v /var/run/docker.sock:/var/run/docker.sock), and the --group-add parameter with the ID of the docker group, so the container user is permitted to use that socket:

pipeline {
    agent any
    stages {
        stage('Gradle build') {
            agent {
                docker {
                    reuseNode true
                    image 'openjdk:11.0-jdk-slim'
                    args '-v /var/run/docker.sock:/var/run/docker.sock --group-add 992'
                }
            }
            steps {
                sh 'env | sort'
                sh './gradlew build --no-daemon --stacktrace'
            }
        }
    } // stages
} // pipeline
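Note that 992 is the ID of the docker group on the machine that runs the agent, not a universal constant; it has to match the group that owns the Docker socket on your host. Two quick ways to look it up there:

stat -c '%g' /var/run/docker.sock   # numeric GID of the socket's owning group
getent group docker | cut -d: -f3   # GID of the "docker" group by name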