I've read a few descriptions of the different times printed in G1GC logs, but couldn't really prove/understand them when I produced them locally. For example, the following log was produced
I am not sure if I should post this as an answer, because this is just my understanding of this log, but it seems it would be too big for a comment.
The total time of the `STW` event was `0.500ms` if you look at it with the eyes of `G1GC`, and was neither `0.500ms` nor `10ms` if you take `Shenandoah`, for example. When you use `G1GC`, the `STW` event is treated as `0.500ms`; using `Shenandoah`, it will be `0.500ms + delta`, where this `delta` is the cumulative time it took to bring all java threads to a `safepoint` (also called `TTSP` - time to safepoint) plus whatever clean-up was needed for that `safepoint`. Maybe a picture will make this easier:
|------|------------------------|---------|
| TTSP | G1 Evacuation Pause    | CleanUp |
|------|------------------------|---------|
`G1GC` treats only the `G1 Evacuation Pause` region as the `STW` event. `Shenandoah`, for example, treats the entire thing (all 3 regions) as the `STW` event. Who is right? I will leave that up to you to decide.
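If you want to see such pauses yourself, any allocation-heavy loop will do; here is a minimal sketch (the class name and allocation sizes are arbitrary, just enough churn to force young collections):

```java
// Sketch: generate lots of short-lived garbage so G1 has to run
// young (evacuation) pauses that show up in the gc log.
public class AllocationPressure {
    public static void main(String[] args) {
        long touched = 0;
        for (int i = 0; i < 1_000_000; i++) {
            byte[] garbage = new byte[1024]; // 1 KB that dies immediately
            touched += garbage.length;       // touch it so it is not optimized away
        }
        System.out.println("allocated roughly " + touched / (1024 * 1024) + " MB");
    }
}
```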
You can enable safepoint granularity for `G1GC` via `-Xlog:safepoint*`, for example.
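For instance, running the little program above with both gc and safepoint logging enabled:

```
java -Xmx128m -Xlog:gc,safepoint* AllocationPressure
```

will print, next to each `Pause Young` entry, safepoint entries that break the pause down into how long it took to reach the safepoint (the `TTSP` region from the picture) and how long the clean-up took; the exact wording of those lines varies between JDK versions, so I won't reproduce them here.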
The tools that you are using each have their own "opinion" on how to treat the times produced by the logs, I guess; but it is absolutely not `10 ms`. Why? As you have seen already (as you say in the comments), there are times when you will get something like this in the logs:
[9.090s][info][gc ] GC(25) Pause Young (Normal) (G1 Evacuation Pause) 77M->2M(128M) 0.500ms
[9.090s][info][gc,cpu ] GC(25) User=0.00s Sys=0.00s Real=0.00s
Notice the `Real=0.00s`. Does this mean there was no pause? Of course not; it just means the pause (`0.500ms`) is far below the `0.01s` granularity these counters are reported with, so it rounds down to zero.
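You can convince yourself that this is just a resolution issue with a throwaway snippet (the `%.2f` format is my assumption about how these counters are rendered, based on the two decimals visible in the log, not taken from the JDK source):

```java
// A 0.500ms pause expressed in seconds, printed at the log's
// apparent two-decimal resolution: it rounds down to 0.00s.
public class Rounding {
    public static void main(String[] args) {
        double pauseSeconds = 0.500 / 1000.0; // 0.0005s
        System.out.printf("Real=%.2fs%n", pauseSeconds); // prints Real=0.00s
    }
}
```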