While profiling a homegrown web application, I came across the following very strange (at least to me) observation.

Almost all of the time is spent in the socketRead0() method.
I am facing the same problem. My application has a very high QPS, and each request makes multiple Thrift calls, each of which uses this native API: socketRead0.
So I decided to run an experiment: I built a mock server whose API sleeps 30 s before returning, and a client that calls this API. My goal was to observe the thread status while the network I/O is happening. Based on my thread dump, the thread status is RUNNABLE.
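A minimal, self-contained sketch of that experiment (plain sockets instead of Thrift; the port 9090 and class name are just illustrative): the client thread parks inside the native socket read for ~30 s, yet Thread.getState() still reports RUNNABLE.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketReadStateDemo {

    public static void main(String[] args) throws Exception {
        // Mock server: accepts a connection, sleeps 30 s, then replies.
        ServerSocket server = new ServerSocket(9090);
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept()) {
                Thread.sleep(30_000);          // simulate a slow backend
                OutputStream out = s.getOutputStream();
                out.write("done".getBytes());
                out.flush();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "mock-server");
        serverThread.start();

        // Client: blocks inside the native socket read until the server replies.
        Thread clientThread = new Thread(() -> {
            try (Socket s = new Socket("localhost", 9090)) {
                InputStream in = s.getInputStream();
                byte[] buf = new byte[64];
                in.read(buf);                  // parks in socketRead0 for ~30 s
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "blocking-client");
        clientThread.start();

        // While the client is waiting on network I/O, the JVM still reports RUNNABLE.
        Thread.sleep(2_000);
        System.out.println(clientThread.getName() + " state: " + clientThread.getState());

        clientThread.join();
        serverThread.join();
        server.close();
    }
}
```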
This explains two things:

1. An application doing high-QPS blocking I/O will show a high CPU load value.
2. Your Java thread is still considered running by the JVM, since its state is RUNNABLE, and this contributes to high user-space CPU utilization.

Both of these make your CPU look busy.
During the experiment I noticed that system-space CPU utilization was low. I think this relates to the difference in thread scheduling between the JVM and the OS. We know HotSpot's threading model is 1:1, meaning one JVM thread maps to one OS thread. When blocking I/O happens, such as in socketRead0, the kernel thread is put into state S (interruptible sleep) and does not occupy the CPU, but the user-space thread is blocked (waiting). When this happens, I think we need to rethink the fundamental I/O model in our application.
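If rethinking the I/O model means moving away from blocking socket reads, java.nio is the usual starting point. Here is a minimal, hypothetical sketch (port 9090 is assumed, e.g. the mock server above; no Thrift involved) where a single thread multiplexes connections through a Selector instead of parking one thread per connection inside socketRead0.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NonBlockingReadSketch {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Non-blocking connect + read: this thread never enters socketRead0.
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("localhost", 9090));
        channel.register(selector, SelectionKey.OP_CONNECT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);

        while (selector.isOpen()) {
            // One select() call can watch many registered channels at once.
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isConnectable() && channel.finishConnect()) {
                    // Connection established: now wait for readable data.
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    int n = channel.read(buffer);   // returns immediately, even with no data
                    if (n > 0) {
                        System.out.println("read " + n + " bytes");
                    } else if (n < 0) {             // peer closed the connection
                        channel.close();
                        selector.close();
                    }
                }
            }
        }
    }
}
```

The design point is that the thread waits in the selector (epoll/kqueue under the hood) for any of its registered channels, rather than being pinned to a single blocking read, which is what frameworks like Netty build on.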
VisualVM shows load not as an absolute value but as a relative one, so this simply means that your application has no other CPU-consuming hot spot.
I believe you should configure VisualVM not to drill down that deep, and instead count this method call as part of a method that is in your code (or Spring's).
I have already experienced such behaviour, but it didn't look like it required any optimization. The web application simply has to read data from sockets (i.e. HTTP requests, the database, internal network services...) and there is no helping it.