Aggressive garbage collector strategy

Asked by 耶瑟儿~ on 2020-12-13 02:49

I am running an application that creates and forgets large numbers of objects. The number of long-lived objects does grow slowly, but it is very small compared to the short-lived ones.

4 Answers
  • 2020-12-13 03:11

    You can try reducing the new size. This will cause it to make more, smaller collections. However, it can also cause short-lived objects to be passed into tenured space. On the other hand, you can try increasing the NewSize, which means fewer objects will pass out of the young generation.

    My preference, however, is to create less garbage so that the GC behaves in a more consistent manner. Instead of creating objects freely, try re-using or recycling them. You have to be careful that this doesn't cause more trouble than it's worth, but in some applications you can reduce the amount of garbage created significantly. I suggest using a memory profiler, e.g. YourKit, to help you identify the biggest hitters.

    An extreme case is to create so little garbage that it doesn't collect all day (not even minor collections). That is possible for a server-side application, though it may not be for a GUI application.
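    To make the recycling idea concrete, here is a minimal sketch of an object pool. The class name and buffer size are illustrative, not from the question, and a production pool would need bounds and thread safety:

```java
import java.util.ArrayDeque;

// Minimal object-recycling sketch: borrow a buffer from the pool instead of
// allocating a fresh one, and return it when done so it never becomes garbage.
// Not thread-safe; a real pool would need locking and a size limit.
class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    byte[] borrow() {
        byte[] b = free.poll();
        return (b != null) ? b : new byte[bufferSize]; // allocate only on a pool miss
    }

    void release(byte[] b) {
        free.push(b); // recycle instead of discarding
    }
}
```

    With a pool like this in front of the hottest allocation site, per-request garbage can drop to near zero, which is what makes the "no collections all day" scenario plausible.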

  • 2020-12-13 03:14

    The first VM options I'd try are increasing NewSize and MaxNewSize and using one of the low-pause GC algorithms (try -XX:+UseConcMarkSweepGC, which is designed to "keep garbage collection pauses short").

    To confirm that the pauses you're seeing are due to GC, turn on verbose GC logging (-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps). More info about how to read these logs is available online.

    To understand the bottleneck, run the app in a profiler. Take a heap snapshot, let the app do its thing for a while, then take another. To see what's taking up all the space, compare the two snapshots and look for the object types whose counts grew the most. VisualVM can do this, but also consider MAT.

    Alternatively, consider using -XX:+HeapDumpOnOutOfMemoryError so that you get a snapshot of the real problem and don't have to reproduce it in another environment. The saved heap can be analyzed with the same tools (MAT, etc.).

    However, you may be getting an OutOfMemoryError either because you have a memory leak or because you're running with too small a maximum heap size. The verbose GC logging should help you distinguish the two.
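    Putting the logging and heap-dump flags together, a launch line might look like this (the heap size, jar name, and paths are placeholders; on Java 9+ the Print* flags were replaced by unified logging via -Xlog:gc*):

```shell
# Hypothetical launch line for a Java 8 JVM; -Xmx, paths, and jar are placeholders.
java -Xmx2g \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -Xloggc:gc.log \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps \
     -jar myapp.jar
```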

  • 2020-12-13 03:15

    The G1GC algorithm, which became a stable option in Java 7, does well here. You just have to specify the maximum pause time you are willing to live with in your application; the JVM will take care of everything else for you.

    Key parameters:

    -XX:+UseG1GC -XX:MaxGCPauseMillis=1000 
    

    There are a few more parameters worth configuring. If you are using a 4 GB heap, aim for the recommended ~2048 regions: 4 GB / 2048 = 2 MB per region.

    -XX:G1HeapRegionSize=2m  
    

    If you have an 8-core CPU, fine-tune two more parameters:

    -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 
    

    Apart from these, leave the other parameters at their default values, such as

    -XX:TargetSurvivorRatio etc.

    Have a look at the Oracle website for more details about G1GC.

    -XX:G1HeapRegionSize=n
    

    Sets the size of a G1 region. The value will be a power of two and can range from 1MB to 32MB. The goal is to have around 2048 regions based on the minimum Java heap size.

     -XX:MaxGCPauseMillis=200
    

    Sets a target value for desired maximum pause time. The default value is 200 milliseconds. The specified value does not adapt to your heap size.

    -XX:ParallelGCThreads=n
    

    Sets the number of stop-the-world (STW) worker threads. By default, n equals the number of logical processors, up to a maximum of 8.

    If there are more than eight logical processors, n defaults to approximately 5/8 of the logical processors. This works in most cases except for larger SPARC systems, where n can be approximately 5/16 of the logical processors.

    -XX:ConcGCThreads=n
    

    Sets the number of parallel marking threads. Set n to approximately 1/4 of the number of parallel garbage collection threads (ParallelGCThreads).
    Recommendations from Oracle:

    When you evaluate and fine-tune G1 GC, keep the following recommendations in mind:

    1. Young Generation Size: Avoid explicitly setting the young generation size with the -Xmn option or any other related option such as -XX:NewRatio. Fixing the size of the young generation overrides the target pause-time goal.

    2. Pause Time Goals: When you evaluate or tune any garbage collection, there is always a latency versus throughput trade-off. The G1 GC is an incremental garbage collector with uniform pauses, but also more overhead on the application threads. The throughput goal for the G1 GC is 90 percent application time and 10 percent garbage collection time.

    Recently I replaced CMS with the G1GC algorithm for a 4 GB heap with an almost even split between the young and old generations. I set MaxGCPauseMillis and the results are awesome.
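    Combining the flags discussed above, a complete G1 launch line for the 4 GB / 8-core setup described might look like this (heap sizes and jar name are placeholders, not from the answer):

```shell
# Illustrative G1 configuration for a 4 GB heap on an 8-core machine.
java -Xms4g -Xmx4g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=1000 \
     -XX:G1HeapRegionSize=2m \
     -XX:ParallelGCThreads=4 \
     -XX:ConcGCThreads=2 \
     -jar myapp.jar
```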

  • 2020-12-13 03:20

    You don't mention which build of the JVM you're running; this is crucial information. You also don't mention how long the app tends to run for (e.g. the length of a working day? a week? less?).

    A few other points:

    1. If you are continually leaking objects into tenured because you're allocating at a rate faster than your young gen can be swept, then your generations are incorrectly sized. You will need to do some proper analysis of your app's behaviour to size them correctly; you can use visualgc for this.
    2. the throughput collector is designed to accept a single, large pause as opposed to many smaller pauses; the benefit is that it is a compacting collector and it enables higher total throughput
    3. CMS exists to serve the other end of the spectrum, i.e. many more, much smaller pauses but lower total throughput. The downside is that it is not compacting, so fragmentation can be a problem. The fragmentation issue was improved in 6u26, so if you're not on that build it may be upgrade time. Note that the "bleeding into tenured" effect you have remarked on exacerbates the fragmentation issue and, given time, will lead to promotion failures (i.e. an unscheduled full GC and its associated STW pause). I have previously written an answer about this on this question
    4. If you're running a 64-bit JVM with >4 GB RAM and a recent enough build, make sure you set -XX:+UseCompressedOops; otherwise you're simply wasting space, as a 64-bit JVM occupies ~1.5x the space of a 32-bit JVM for the same workload without it (and if you're on 32-bit, upgrade to get access to more RAM)

    You may also want to read another answer I've written on this subject, which goes into sizing your survivor spaces and eden appropriately. Basically, what you want to achieve is:

    • eden big enough that it is not collected too often
    • survivor spaces sized to match the tenuring threshold
    • a tenuring threshold set to ensure, as much as possible, that only truly long lived objects make it into tenured

    So, say you had a 6 GB heap, you might do something like 5 GB eden + 16 MB survivor spaces + a tenuring threshold of 1.
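    Expressed as HotSpot flags, that layout might look roughly like the following. The values are back-of-the-envelope and assumed, not from the answer: a 5 GB eden plus two 16 MB survivors gives a young gen of about 5152 MB, and SurvivorRatio = eden / survivor = 5120 / 16 = 320.

```shell
# Sketch only: sizes derived from the 5G eden + 16M survivors example above.
# Young gen = eden (5120m) + two survivors (16m each) = 5152m.
# SurvivorRatio = eden / survivor = 5120 / 16 = 320.
java -Xms6g -Xmx6g \
     -Xmn5152m \
     -XX:SurvivorRatio=320 \
     -XX:MaxTenuringThreshold=1 \
     -jar myapp.jar
```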

    The basic process is:

    1. allocate into eden
    2. eden fills up
    3. live objects are copied into the "to" survivor space
    4. live objects in the "from" survivor space are either copied to the "to" space or promoted to tenured (depending on the tenuring threshold, the space available, and the number of times they've already been copied from one to the other)
    5. anything left in eden is swept away
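    The allocation profile this process is tuned for can be exercised with a toy pattern like the following (the class and numbers are made up for illustration): almost every object dies young, and only a tiny fraction stays live long enough to be promoted.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the allocation profile described above: a flood of
// short-lived temporaries (which should die in eden) plus a small
// long-lived set (the only objects that should ever reach tenured).
class AllocationChurn {
    static List<int[]> run(int iterations) {
        List<int[]> longLived = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            int[] temp = new int[256]; // short-lived: unreachable by the next iteration
            temp[0] = i;
            if (i % 1000 == 0) {
                longLived.add(temp);   // roughly 0.1% survives long-term
            }
        }
        return longLived;
    }
}
```

    Running something like this under -verbose:gc should show frequent, cheap minor collections and almost no promotion, which is the healthy profile the sizing advice aims for.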

    Therefore, given spaces appropriately sized for your application's allocation profile, it's perfectly possible to configure the system so that it handles the load nicely. A few caveats to this:

    1. you need some long running tests to do this properly (e.g. can take days to hit the CMS fragmentation problem)
    2. you need to do each test a few times to get good results
    3. you need to change 1 thing at a time in the GC config
    4. you need to be able to present a reasonably repeatable workload to the app otherwise it will be difficult to objectively compare results from different test runs
    5. this will get really hard to do reliably if the workload is unpredictable and has massive peaks/troughs

    Points 1-3 mean this can take ages to get right. On the other hand, you may be able to make it good enough very quickly; it depends how anal you are!

    Finally, echoing Peter Lawrey's point, you can save a lot of bother (albeit introducing some other bother) if you are really rigorous about object allocation.
