I am trying to remotely monitor a JVM running in Docker. The configuration looks like this:
machine 1: runs a JVM (in my case, running Kafka) in Docker on an Ubuntu machine; the IP of this machine is 10.0.1.201; the application running in Docker is at 172.17.0.85.
machine 2: runs JMX monitoring
When I run JMX monitoring from machine 2, it fails with a variation of the following error, regardless of which tool I use (the same error occurs with jconsole, jvisualvm, jmxtrans, and node-jmx/npm:jmx).
The stack trace on failure looks something like this for each of the JMX monitoring tools:
java.rmi.ConnectException: Connection refused to host: 172.17.0.85; nested exception is
java.net.ConnectException: Operation timed out
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
(followed by a large stack trace)
Now the interesting part: when I run the same tools (jconsole, jvisualvm, jmxtrans, and node-jmx/npm:jmx) on the machine that is running Docker (machine 1 above), JMX monitoring works properly.
I think this suggests that my JMX port is active and working properly, but that when I run JMX monitoring remotely (from machine 2), the JMX tool cannot reach the internal Docker IP (172.17.0.85).
Below are the relevant (I think) network configuration elements on machine 1, where JMX monitoring works (note the Docker bridge IP, 172.17.42.1):
docker0 Link encap:Ethernet HWaddr ...
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr:... Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6787941 errors:0 dropped:0 overruns:0 frame:0
TX packets:4875190 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1907319636 (1.9 GB) TX bytes:639691630 (639.6 MB)
wlan0 Link encap:Ethernet HWaddr ...
inet addr:10.0.1.201 Bcast:10.0.1.255 Mask:255.255.255.0
inet6 addr:... Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4054252 errors:0 dropped:66 overruns:0 frame:0
TX packets:2447230 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2421399498 (2.4 GB) TX bytes:1672522315 (1.6 GB)
And these are the relevant network configuration elements on the remote machine (machine 2), from which I am getting the JMX errors:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=1<PERFORMNUD>
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether ....
inet6 ....%en1 prefixlen 64 scopeid 0x5
inet 10.0.1.203 netmask 0xffffff00 broadcast 10.0.1.255
nd6 options=1<PERFORMNUD>
media: autoselect
status: active
For completeness, the following solution worked. The JVM must be run with specific parameters to enable remote Docker JMX monitoring, as follows:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=<PORT>
-Dcom.sun.management.jmxremote.rmi.port=<PORT>
-Djava.rmi.server.hostname=<IP>
where:
<IP> is the IP address of the host on which you executed 'docker run'
<PORT> is the port that Docker must publish and on which the JVM's JMX port is configured (for example, docker run --publish 7203:7203, where <PORT> is 7203). Both `port` and `rmi.port` can be the same.
Once this is done, you should be able to execute JMX monitoring (jmxtrans, node-jmx, jconsole, etc.) from either a local or remote machine.
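As a quick sanity check from the remote machine, here is a minimal JMX client sketch. It is not from the original answer; it assumes the host IP 10.0.1.201 and published port 7203 from the question, and it builds the same kind of RMI service URL that tools like jconsole construct from host:port:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxCheck {
    public static void main(String[] args) throws Exception {
        // Host IP and published port taken from the question; adjust as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://10.0.1.201:7203/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // If java.rmi.server.hostname points at the Docker host, this
            // connection should succeed from the remote machine.
            System.out.println("Connected, MBean count: " + conn.getMBeanCount());
        }
    }
}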
Thanks to @Chris-Heald for making this a really quick and simple fix!
For a dev environment, you can set java.rmi.server.hostname to the catch-all IP address 0.0.0.0.
Example:
-Djava.rmi.server.hostname=0.0.0.0 \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false
I found that trying to set up JMX over RMI is a pain, especially because of -Djava.rmi.server.hostname=<IP>,
which you have to specify at startup. We run our Docker images in Kubernetes, where everything is dynamic.
I ended up using JMXMP instead of RMI, since it only needs one open TCP port and no hostname.
My current project uses Spring, where this can be configured by adding the following bean:
<bean id="serverConnector"
class="org.springframework.jmx.support.ConnectorServerFactoryBean"/>
(Outside Spring you need to set up your own JMXConnectorServer to make this work; a minimal sketch follows.)
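For reference, a minimal sketch of such a JMXConnectorServer outside Spring might look like the following. This is an illustration, not the original answer's code; it assumes the jmxremote_optional jar listed below is on the classpath and uses the default port 9875:
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxmpServer {
    public static void main(String[] args) throws Exception {
        // Expose the platform MBeanServer over JMXMP.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // 0.0.0.0 is intended to bind all interfaces so the connector is
        // reachable from outside the container; 9875 is the default port.
        JMXServiceURL url = new JMXServiceURL("service:jmx:jmxmp://0.0.0.0:9875");
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();

        System.out.println("JMXMP connector listening on " + server.getAddress());
        Thread.currentThread().join(); // keep the process alive for this standalone demo
    }
}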
Along with this dependency (since JMXMP is an optional extension and not a part of the JDK):
<dependency>
    <groupId>org.glassfish.main.external</groupId>
    <artifactId>jmxremote_optional-repackaged</artifactId>
    <version>4.1.1</version>
</dependency>
And you need to add the same jar to your classpath when starting JVisualVM in order to connect over JMXMP:
jvisualvm -cp "$JAVA_HOME/lib/tools.jar:<your_path>/jmxremote_optional-repackaged-4.1.1.jar"
Then connect with the following connection string:
service:jmx:jmxmp://<url:port>
(Default port is 9875)
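If you prefer to connect programmatically rather than through JVisualVM, a small client sketch could look like the one below. The host is hypothetical, the port is the default 9875, and the same optional jar must be on the client classpath:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxmpClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical host; replace with your container's published address.
        JMXServiceURL url = new JMXServiceURL("service:jmx:jmxmp://10.0.1.201:9875");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            System.out.println("MBean count: " + conn.getMBeanCount());
        }
    }
}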
After digging around quite a lot, I found this configuration:
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=1098
-Dcom.sun.management.jmxremote.rmi.port=1098
-Djava.rmi.server.hostname=localhost
-Dcom.sun.management.jmxremote.local.only=false
The difference from the configurations above is that java.rmi.server.hostname is set to localhost instead of 0.0.0.0.
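With java.rmi.server.hostname=localhost, the RMI stub tells clients to connect back to localhost, so a truly remote client typically needs the JMX port forwarded to its own machine (for example over an SSH tunnel or kubectl port-forward). A minimal connection sketch under that assumption, using port 1098 from the configuration above, might be:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LocalhostJmxCheck {
    public static void main(String[] args) throws Exception {
        // Port 1098 matches the configuration above and is assumed to be
        // reachable on localhost (e.g. via an SSH tunnel to the Docker host).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1098/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            System.out.println("Heap used: " + memory.getHeapMemoryUsage().getUsed());
        }
    }
}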
Source: https://stackoverflow.com/questions/31257968/how-to-access-jmx-interface-in-docker-from-outside