I want to run the same shell command (very simple shell commands, like ls) on all the UNIX slaves that are connected to the master, using the master's script console.
import hudson.util.RemotingDiagnostics

print_ip = 'println InetAddress.localHost.hostAddress'
print_hostname = 'println InetAddress.localHost.canonicalHostName'
// here it is: the shell command, uname as an example
uname = 'def proc = "uname -a".execute(); proc.waitFor(); println proc.in.text'

for (slave in hudson.model.Hudson.instance.slaves) {
    println slave.name
    println RemotingDiagnostics.executeGroovy(print_ip, slave.getChannel())
    println RemotingDiagnostics.executeGroovy(print_hostname, slave.getChannel())
    println RemotingDiagnostics.executeGroovy(uname, slave.getChannel())
}
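Since the question is about UNIX slaves specifically, the same loop can skip non-UNIX or disconnected agents. A sketch, assuming Computer.isUnix() is available in your Jenkins version (it returns null while an agent is disconnected, hence the channel check):

import hudson.util.RemotingDiagnostics

// run a simple command (ls here) on UNIX agents only
cmd = 'def proc = "ls".execute(); proc.waitFor(); println proc.in.text'

for (slave in hudson.model.Hudson.instance.slaves) {
    def computer = slave.getComputer()
    // skip agents that are disconnected or not running a UNIX-like OS
    if (computer.getChannel() != null && computer.isUnix()) {
        println slave.name
        println RemotingDiagnostics.executeGroovy(cmd, computer.getChannel())
    }
}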
The pipeline looks something like this:
pipeline {
    agent any
    stages {
        stage('Checkout repo') {
            steps {
                checkout scm // checkout what I need
            }
        }
        stage('Generate Jobs') {
            steps {
                jobDsl targets: 'generate_projects.groovy'
            }
        }
        stage('Build Projects') {
            steps {
                build job: 'build-all',
                      propagate: true,
                      wait: true
            }
        }
    }
}
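As a side note, jobDsl here is the step contributed by the Job DSL plugin. Beyond targets it accepts optional parameters; for instance, removedJobAction controls what happens to previously generated jobs that disappear from the seed output (the values below are just one possible choice, adjust for your setup):

jobDsl targets: 'generate_projects.groovy',
       removedJobAction: 'IGNORE', // or 'DISABLE' / 'DELETE'
       sandbox: true               // run the DSL in the script security sandbox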
and then there is the file generate_projects.groovy, where the actual DSL generation happens:
for (agent in hudson.model.Hudson.instance.slaves) {
    if (!agent.getComputer().isOffline()) { // skip offline agents
        node = jenkins.model.Jenkins.instance.getNode(agent.name) // get the agent by name
        // ListPossibleNames is not a built-in: it is a Callable you define
        // yourself that returns the agent's IP addresses (see the sketch below)
        agentIPs = node.computer.getChannel().call(new ListPossibleNames())
        agentIP = agentIPs[0] // take the agent's first IP
        // Create a job that will run on that specific agent.
        // FOLDER is assumed to be defined elsewhere in the seed script.
        jobName = FOLDER + '/<Job_name>' + agent.name // each job needs a distinct name
        job(jobName) {
            label(agent.name)
            steps {
                shell(<shell script or commands that you want to run>)
            }
        }
    }
}
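Note that ListPossibleNames is not something Jenkins provides under that name: it's a Callable you define yourself in the seed script so it can execute on the agent's JVM and report its addresses. A minimal sketch of what it might look like, assuming a Jenkins version that has jenkins.security.MasterToSlaveCallable (the body below is my assumption, not the original author's code):

import jenkins.security.MasterToSlaveCallable

// Hypothetical helper: runs on the agent and returns its
// non-loopback IPv4 addresses
class ListPossibleNames extends MasterToSlaveCallable<List<String>, IOException> {
    List<String> call() throws IOException {
        def names = []
        NetworkInterface.getNetworkInterfaces().each { ni ->
            ni.getInetAddresses().each { addr ->
                if (addr instanceof Inet4Address && !addr.isLoopbackAddress()) {
                    names << addr.getHostAddress()
                }
            }
        }
        return names
    }
}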
Besides generating the jobs as above, you'll need to keep a list of the jobs that were generated and add each of them to the "build-all" pipeline job, which will look something like:
parallel(
    b0: { build '<Job_name>' + agent.name },
    b1: { build '<Job_name>' + agent.name },
    b2: { build '<Job_name>' + agent.name },
    .....
    failFast: false
)
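Instead of hardcoding b0, b1, b2 by hand, the seed script (generate_projects.groovy) can generate the "build-all" job too, building the branch map from the list of generated job names. A minimal sketch, assuming a generatedJobs list collected in the generation loop above (the variable is mine, not part of the original answer):

// Assumption: generatedJobs holds the full job names collected in the
// generation loop (e.g. FOLDER + '/<Job_name>' + agent.name)
def branches = generatedJobs.withIndex().collect { jobName, i ->
    "b${i}: { build '${jobName}' }"
}.join(',\n    ')

// Generate the "build-all" trigger job itself via Job DSL
pipelineJob(FOLDER + '/build-all') {
    definition {
        cps {
            script("parallel(\n    ${branches},\n    failFast: false\n)")
        }
    }
}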
So when you run the pipeline, a job is created for each agent, and all the newly created jobs run in parallel. I use this for updating setup scenarios.
Pretty old thread.
I managed the same situation in the following way. I have a pipeline job that goes through these stages:
- First, it checks for online agents (since they are physical machines, some may happen to be down), using something like "for (slave in hudson.model.Hudson.instance.slaves) ...".
- The next stage creates a job for each agent found, using the Job DSL plugin and list_of_agents.each. Besides the jobs for every online agent, it creates one more job that runs all of them in parallel.
Of course, the newly created jobs contain the commands I want to run on the agents. When I run the pipeline, the same script/commands run on every agent, and the output is returned to the master pipeline job.
In the end, I don't use * to search for the agents; instead, I read and parse their names. For example, if I want to run a job on every agent that has LINUX in its name, I do the following:
AGENT_NAME_LIST = []
for (aSlave in hudson.model.Hudson.instance.slaves) {
    /* take into account just agents with LINUX in the name */
    AGENT_NAME = aSlave.name
    if (AGENT_NAME.contains('LINUX')) {
        /* you can also check whether the agent is online, or other attributes */
        /* Add the agent name as a label of the agent (skip it if already
           present, so labels don't pile up across runs) */
        if (!aSlave.getLabelString().contains(AGENT_NAME)) {
            AGENT_LABELS = aSlave.getLabelString() + " " + AGENT_NAME
            aSlave.setLabelString(AGENT_LABELS)
        }
        /* Remember the name for the parallel trigger job shown below */
        AGENT_NAME_LIST.add(AGENT_NAME)
        /* For each agent found, create a job that will run on it */
        job('My_job_name_' + AGENT_NAME) {
            label(AGENT_NAME)
            steps {
                /* Do whatever you want here.
                   This job will run just on this specific agent (due to the label set) */
            }
        }
    } /* end if */
} /* end for */
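Note that setLabelString only updates the node object in memory; depending on your Jenkins version, the new label may not survive a restart unless the configuration is also saved, for example:

/* Persist the label change; whether this is needed depends on the
   Jenkins version, so treat it as a precaution rather than a requirement */
jenkins.model.Jenkins.instance.save()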
/* If you want to run all the jobs in parallel (every job on its specific agent),
   save all the agents found in a list (AGENT_NAME_LIST above) and create one more
   pipeline job containing something like:

   parallel(
       b0: { build 'My_job_name_' + AGENT_NAME_LIST[0] },
       b1: { build 'My_job_name_' + AGENT_NAME_LIST[1] },
       ....
       failFast: false
   )
*/