I use log4j2 2.10.0 and have the following code:
```java
import java.util.Random;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

SingleChronicleQueue q = SingleChronicleQueueBuilder.binary(args[0]).blockSize(536870912).build();
ExcerptAppender a = q.acquireAppender();
char[] chars = "abcdefghijklmnopqrstuvwxyz".toCharArray();
StringBuilder sb = new StringBuilder();
Random random = new Random();
for (int i = 0; i < 1000; i++) {
    char c = chars[random.nextInt(chars.length)];
    sb.append(c);
}
String t = sb.toString();
for (int i = 0; i < 1000000; i++) {
    m_logger.info(i + " " + t); // log4j2 async logger
    a.writeText(t);             // Chronicle Queue appender
}
```
Both the cq4 files and the log are written to the same directory.
The log output was streaming fine until this point:
12:40:00.853 - [main] INFO c.c.c.a.r.SandboxApp 601049
12:40:00.853 - [main] INFO c.c.c.a.r.SandboxApp 601050
12:40:00.853 - [main] INFO c.c.c.a.r.SandboxApp 601051
12:40:06.156 - [main] INFO c.c.c.a.r.SandboxApp 601052
Some I/O operation delayed it by about 6 seconds.
I don't know enough about disks, mounts, etc. The pause disappears if I comment out the writeText call, but I don't know whether this is a Chronicle problem or a log4j2 problem.
My log4j2 parameters are
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -DAsyncLogger.RingBufferSize=65536*65536 -DAsyncLogger.WaitStrategy=Sleep -Dlog4j2.AsyncQueueFullPolicy=Discard -Dlog4j2.DiscardThreshold=INFO
Here's what the profiler is showing: [profiler screenshot]
Thanks!!
Once you start to work with memory-mapped files, you leave the realm of Java and enter the world of operating systems (and hardware).
So, question 1: what operating system (and what version) are you using?
Question 2: how much physical memory do you have on your machine?
Question 3: what hard disk are you using? SSD or spinning disk?
In 2017, consumer-grade spinning disks exceeding 200 MB/s write speed are pretty fast. By comparison, the fastest SSD in this 2017 comparison writes 1900 MB/s.
You just tried to write about 2 GB to a memory-mapped file (roughly 1 GB of log lines plus 1 GB of queue excerpts). That data needs to be synced with the physical disk at some point, and that syncing could easily take 6 seconds...
The “joy” of memory-mapped files is that you have little control over when this syncing happens. People end up splitting data into smaller files to ensure these pauses don't grow too large. There is an element of black magic in tuning the performance of this kind of application, and a lot to learn in this field. Fun times ahead! Enjoy!
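To make the syncing point concrete, here is a sketch with plain JDK memory-mapped I/O (not Chronicle's internals; the class and method names are mine). `MappedByteBuffer.force()` is the one explicit flush you control; everything before it sits only in the OS page cache, and the OS may write it back whenever it chooses, which is where the unpredictable pauses come from.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSyncDemo {

    // Write payload into a memory-mapped region, then force the dirty pages
    // to the device before returning; returns the resulting file size.
    static long writeAndSync(Path file, byte[] payload) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, payload.length);
            map.put(payload); // lands in the page cache, not yet on disk
            map.force();      // explicit sync; without it, the OS flushes at a time of its choosing
        }
        return Files.size(file);
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mmap-demo", ".dat");
        long size = writeAndSync(file,
                "abcdefghijklmnopqrstuvwxyz".getBytes(StandardCharsets.US_ASCII));
        System.out.println(size); // 26
        Files.delete(file);
    }
}
```

Calling `force()` after every small write would be slow, but calling it periodically (or keeping individual files small) bounds how much dirty data can pile up before a flush.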
By the way, you cannot configure

-DAsyncLogger.RingBufferSize=65536*65536

Log4j2 will try to parse 65536*65536 as a number, which will fail, and it will fall back to the default ring buffer size instead. You need to specify the desired size as a plain integer: the number of slots in the ring buffer (a power of two), not an arithmetic expression.
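For example, if you wanted 65536*65536 slots you would have to spell out the product as a literal (whether a ring buffer that large is sensible is a separate question):

```
-DAsyncLogger.RingBufferSize=4294967296
```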
Source: https://stackoverflow.com/questions/47971267/chronicle-queue-log4j2-async-logger