After updating our Jenkins master installation to the latest LTS version 2.46.3, one of its slaves (a Windows 7 machine, 32-bit) cannot connect to the master.
The er
Is the Jenkins master instance running behind a load balancer? I had the same issue when my instance was running behind an Application Load Balancer in AWS.
If so, the acknowledgement sequence can get modified because the load balancer speaks a different protocol than the agent expects. JNLP requires a plain TCP connection, on port 50000 by default.
If your setup is on AWS, you could try creating a private hosted zone in Route53 with an Alias record for your Jenkins instance's private IP address.
For example: jenkins.example.com -> your Jenkins instance's private IP
Then, in the Jenkins UI, go to Manage Jenkins -> Configure System -> Manage nodes and clouds -> Configure clouds and, under the advanced settings, set:
Tunnel connection through: jenkins.example.com:50000
This way your slave agents do not have to go through the load balancer to connect to the Jenkins master.
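If you would rather keep the tunnel setting on the agent side, the remoting jar also accepts a -tunnel option. A minimal sketch, assuming the jenkins.example.com record from above and your own node's JNLP URL and secret:
java -jar slave.jar -jnlpUrl http://jenkins.example.com:8080/computer/<computername>/slave-agent.jnlp -secret <your-secret> -tunnel jenkins.example.com:50000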
Check that the secret key of the node is intact. If it is not correct, download a fresh slave.jar and run the agent from the command line with the new jar file:
java -jar slave.jar -jnlpUrl http://<ipaddress>:8080/computer/<computername>/slave-agent.jnlp -secret 340d54sdrgtjj334kelkahsdjkf83f1c5120dc2fb74939fcdb7f05e1926049f8d7991
Also check that the Java version installed on the agent is greater than 7.
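A quick way to confirm the agent machine's Java version from a command prompt:
java -version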
The message:
Incorrect acknowledgement sequence ...
happened for me when I had incorrectly configured the Java property hudson.TcpSlaveAgentListener.port to the same port number as the HTTP port used by Jenkins. The TcpSlaveAgentListener javadoc indicates that this is a misconfiguration when it says:
Aside from the HTTP endpoint, Jenkins runs TcpSlaveAgentListener that listens on a TCP socket. Historically this was used for inbound connection from agents (hence the name), but over time it was extended and made generic, so that multiple protocols of different purposes can co-exist on the same socket. (emphasis added)
If the HTTP port was 8080 and hudson.TcpSlaveAgentListener.port was also 8080, then my JNLP agents failed to connect. As soon as I assigned another value to hudson.TcpSlaveAgentListener.port (like 50000) and restarted Jenkins, my JNLP agents were able to connect.
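For illustration, a minimal sketch of how the property might be set when Jenkins is launched directly from the war file (the ports are example values; adjust the command to however your installation actually starts Jenkins):
java -Dhudson.TcpSlaveAgentListener.port=50000 -jar jenkins.war --httpPort=8080
The inbound agent port can usually also be set in the UI under Manage Jenkins -> Configure Global Security.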
The stack trace on the failing JNLP agent was:
INFO: Trying protocol: JNLP4-connect
Mar 02, 2019 3:49:29 PM org.jenkinsci.remoting.protocol.impl.AckFilterLayer abort
WARNING: [JNLP4-connect connection to agent.example.com/172.16.16.113:8080] Incorrect acknowledgement sequence, expected 0x000341434b got 0x485454502f
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:614)
at hudson.remoting.Engine.run(Engine.java:474)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Connection closed before acknowledgement sent
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecvClosed(AckFilterLayer.java:280)
at org.jenkinsci.remoting.protocol.FilterLayer.abort(FilterLayer.java:164)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.abort(AckFilterLayer.java:130)
at org.jenkinsci.remoting.protocol.impl.AckFilterLayer.onRecv(AckFilterLayer.java:258)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:668)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$2200(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:283)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Unknown Source)
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to testing-a.markwaite.net:8080
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP4-plaintext not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP3-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP2-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Server reports protocol JNLP-connect not supported, skipping
Mar 02, 2019 3:49:29 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:682)
at hudson.remoting.Engine.innerRun(Engine.java:639)
at hudson.remoting.Engine.run(Engine.java:474)
We recently hit this issue with our AWS-based Jenkins using JNLP for remote integration testing. The remote slave would call back to the Jenkins master, and the connection failed with a similar error. The cause turned out to be a dynamically generated AWS ELB of type HTTP in front of the Jenkins master (the Kubernetes ELB provisioner presently doesn't support multi-protocol ELBs). We had to manually change the JNLP ingress listener of the ELB to protocol TCP, while the web interface ingress kept protocol HTTP on the 'instance port' and protocol HTTPS on the 'load balancer'.
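For reference, one way to make that listener change on a classic ELB with the AWS CLI; a sketch only, where my-jenkins-elb and port 50000 are placeholders, and a Kubernetes-managed ELB may revert manual edits on its next reconcile:
aws elb delete-load-balancer-listeners --load-balancer-name my-jenkins-elb --load-balancer-ports 50000
aws elb create-load-balancer-listeners --load-balancer-name my-jenkins-elb --listeners Protocol=TCP,LoadBalancerPort=50000,InstanceProtocol=TCP,InstancePort=50000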
This happened to us when a Windows Update or some other silent background update messed with the slave's environment variables. HTTPS_PROXY and HTTP_PROXY had to be re-added, and once that was done we were back in business.
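If you need to re-add them from a command prompt on the agent, something along these lines works; the proxy address is a placeholder, and new console sessions (or a service restart) are needed before the values take effect:
setx HTTP_PROXY http://proxy.example.com:3128
setx HTTPS_PROXY http://proxy.example.com:3128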