High CPU, possibly due to context switching?

自闭症患者 2021-02-09 06:07

One of our servers is experiencing a very high CPU load from our application. We've looked at various stats and are having trouble finding the source of the problem.
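For reference, one way to put a number on the context-switching theory is to sample the kernel's cumulative context-switch counter. This is only a minimal sketch, assuming a Linux host where /proc/stat is available; the one-second sampling window and the class name are illustrative, not something from our actual setup.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Minimal sketch (assumes Linux): read the cumulative "ctxt" counter from
// /proc/stat twice, one second apart, and report context switches per second.
// Sustained values in the tens of thousands on a 2-CPU box would support the
// context-switching hypothesis.
public class ContextSwitchSampler {
    private static long readCtxt() throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get("/proc/stat"))) {
            return lines.filter(l -> l.startsWith("ctxt "))
                        .mapToLong(l -> Long.parseLong(l.trim().split("\\s+")[1]))
                        .findFirst()
                        .orElseThrow(() -> new IOException("no ctxt line in /proc/stat"));
        }
    }

    public static void main(String[] args) throws Exception {
        long before = readCtxt();
        Thread.sleep(1000);
        long after = readCtxt();
        System.out.println("context switches/sec ~= " + (after - before));
    }
}
```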


6 Answers
  •  被撕碎了的回忆
    2021-02-09 06:58

    So - can we rule out context switching or too-many-threads as the problem?

    I think your concerns over thrashing are warranted. A thread pool with 3000 threads (700+ concurrent operations) on a 2-CPU VMware instance certainly looks like a setup that could cause context-switching overload and performance problems. Limiting the number of threads should give you a performance boost, although determining the right number will be difficult and will probably take a lot of trial and error.
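    To make "limit the number of threads" concrete, here is a minimal, hypothetical sketch of a bounded pool. The pool size of 2x the core count, the queue capacity, and the CallerRunsPolicy back-pressure choice are assumptions to tune against your measurements, not numbers taken from your system.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical bounded pool: a few threads per core plus a bounded work queue,
// instead of 3000 threads. When the queue fills, CallerRunsPolicy makes the
// submitting thread do the work, which throttles producers rather than piling
// up more runnable threads.
public class BoundedPool {
    public static ThreadPoolExecutor create() {
        int cores = Runtime.getRuntime().availableProcessors();
        int poolSize = cores * 2;                   // starting guess; tune from measurements
        return new ThreadPoolExecutor(
                poolSize, poolSize,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1000),     // bounded queue absorbs bursts
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```

    The point is only that concurrency is capped, so overload turns into queuing and back-pressure instead of thousands of runnable threads fighting over 2 CPUs.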

    That said, we need some proof that this is actually the issue.

    I'm not sure of the best way to answer, but here are some ideas:

    • Watch the load average of the VM OS and the JVM. If you are seeing high load values (20+) then this is an indicator that there are too many things in the run queues.
    • Is there no way to simulate the load in a test environment so you can play with the thread pool numbers? If you run simulated load in a test environment with pool size of X and then run with X/2, you should be able to determine optimal values.
    • Can you compare high-load times of day with lower-load times of day? Can you graph throughput against latency during these times to see if there is a tipping point that indicates thrashing?
    • If you can simulate load then make sure you aren't just testing under the "drink from the fire hose" methodology. You need simulated load that you can dial up and down. Start at 10% and slowly increase the simulated load while watching throughput and latency (see the sketch after this list). You should be able to see the tipping points by watching for throughput flattening or otherwise deflecting.
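    Along the lines of the last two points, here is a rough, illustrative harness for a dial-up load test. The stand-in workload, the step sizes, the 30-second windows, and the use of getSystemLoadAverage() are assumptions; you would replace doOneRequest() with a real call into your application.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative ramp test: step the number of concurrent workers up and, at each
// step, record throughput, average latency, and the OS load average. A
// flattening throughput curve with rising latency marks the tipping point.
public class RampLoadTest {
    public static void main(String[] args) throws Exception {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        for (int workers = 10; workers <= 320; workers *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            AtomicLong completed = new AtomicLong();
            AtomicLong totalLatencyNanos = new AtomicLong();
            long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(30);

            for (int i = 0; i < workers; i++) {
                pool.submit(() -> {
                    while (System.nanoTime() < end) {
                        long start = System.nanoTime();
                        doOneRequest();                       // replace with a real call into the app
                        totalLatencyNanos.addAndGet(System.nanoTime() - start);
                        completed.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(60, TimeUnit.SECONDS);

            double throughput = completed.get() / 30.0;
            double avgLatencyMs = completed.get() == 0 ? 0
                    : totalLatencyNanos.get() / 1e6 / completed.get();
            System.out.printf("workers=%d throughput=%.1f/s avgLatency=%.1fms loadAvg=%.2f%n",
                    workers, throughput, avgLatencyMs, os.getSystemLoadAverage());
        }
    }

    // Stand-in CPU-bound workload; swap for the real operation under test.
    private static void doOneRequest() {
        long x = 0;
        for (int i = 0; i < 100_000; i++) x += i;
        if (x == 42) System.out.println();   // keep the loop from being optimized away
    }
}
```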
