jmh

Benchmark tool: an introduction to JMH

删除回忆录丶 submitted on 2019-12-04 23:27:35
JMH is a benchmarking tool developed by OpenJDK. It provides good support for writing and running benchmarks, including concurrent benchmarks, which makes it a versatile testing tool. JMH was released in 2013, and the latest version at the time of writing is 1.9.

Basic JMH usage: setting up the JMH environment

1. In a Maven project, add the required dependencies directly:

    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>${jmh.version}</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>${jmh.version}</version>
        <scope>provided</scope>
    </dependency>

2. Or create the project directly from the JMH archetype:

    mvn archetype:generate \
        -DinteractiveMode=false \
        -DarchetypeGroupId=org.openjdk.jmh \
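Once either setup is in place, a minimal benchmark class looks roughly like the sketch below. The class and method names are illustrative, not from the original article; the archetype generates a similar skeleton.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class HelloJmh {

    // Each @Benchmark method is measured in its own generated stub.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public String concat() {
        // Returning the result keeps the JIT from eliminating the work.
        return "hello" + System.nanoTime();
    }

    // Running from main() is convenient in an IDE; the archetype's shaded
    // benchmarks.jar is the usual way to run from the command line.
    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                .include(HelloJmh.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}
```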

Why is returning a Java object reference so much slower than returning a primitive

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-04 07:35:20
Question: We are working on a latency-sensitive application and have been microbenchmarking all kinds of methods (using jmh). After microbenchmarking a lookup method and being satisfied with the results, I implemented the final version, only to find that the final version was 3 times slower than what I had just benchmarked. The culprit was that the implemented method was returning an enum object instead of an int. Here is a simplified version of the benchmark code: @OutputTimeUnit(TimeUnit
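The excerpt cuts off at the start of the benchmark code. A hedged sketch of the kind of int-vs-enum comparison the question describes follows; all names here are illustrative and not the poster's code.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class ReturnTypeBench {

    public enum Result { LOW, MEDIUM, HIGH }

    private final Result[] table = Result.values();
    private int key = 1;

    // Returning a primitive: the JIT can keep the value in a register.
    @Benchmark
    public int lookupAsInt() {
        return key * 31;
    }

    // Returning an enum constant: the benchmark now returns a reference
    // loaded from the heap, which changes the generated code shape.
    @Benchmark
    public Result lookupAsEnum() {
        return table[key % 3];
    }
}
```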

What exactly is number of operations in JMH?

不羁的心 submitted on 2019-12-04 04:36:13
The JavaDoc of the annotation @OperationsPerInvocation in the Java Microbenchmark Harness (JMH) states:

    public abstract int value
    Returns: Number of operations per single Benchmark call. Default: 1

Being new to JMH, I am wondering what type of operation (bytecode operation, assembly operation, Java operation, etc.) is meant here. This question naturally refers to all places in JMH (documentation, output, comments, etc.) where the term 'operation' is used (e.g. the "operation/time" unit or "time unit/operation"). In JMH, "operation" is an abstract unit of work. See e.g. the sample result:
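An illustrative use of @OperationsPerInvocation (not taken from the question): when a benchmark method does N units of work per call, the annotation tells JMH to divide the measured time by N, so the reported score is per unit of work rather than per method invocation.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class OpsPerInvocationBench {

    static final int OPS = 1_000;

    // The method performs OPS additions per invocation; with the annotation,
    // JMH reports time per addition, not time per method call.
    @Benchmark
    @OperationsPerInvocation(OPS)
    public void thousandAdds(Blackhole bh) {
        long acc = 0;
        for (int i = 0; i < OPS; i++) {
            acc += i;
        }
        bh.consume(acc);
    }
}
```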

Consuming stack traces noticeably slower in Java 11 than Java 8

拜拜、爱过 submitted on 2019-12-03 18:57:42
Question: I was comparing the performance of JDK 8 and 11 using jmh 1.21 when I ran across some surprising numbers:

    Java version: 1.8.0_192, vendor: Oracle Corporation
    Benchmark                              Mode  Cnt      Score    Error  Units
    MyBenchmark.throwAndConsumeStacktrace  avgt   25  21525.584 ± 58.957  ns/op

    Java version: 9.0.4, vendor: Oracle Corporation
    Benchmark                              Mode  Cnt      Score     Error  Units
    MyBenchmark.throwAndConsumeStacktrace  avgt   25  28243.899 ± 498.173  ns/op

    Java version: 10.0.2, vendor: Oracle Corporation
    Benchmark Mode Cnt Score
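The question's benchmark body is not shown in the excerpt. A hedged sketch of what a "throw and consume stack trace" benchmark typically looks like (the class name and exception type are assumptions, not the poster's code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class StackTraceBench {

    // Throw an exception and render its stack trace to a String, which
    // forces the VM to materialize the lazily captured stack frames.
    @Benchmark
    public String throwAndConsumeStacktrace() {
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            StringWriter sw = new StringWriter();
            e.printStackTrace(new PrintWriter(sw));
            return sw.toString();
        }
    }
}
```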

What can explain the huge performance penalty of writing a reference to a heap location?

安稳与你 submitted on 2019-12-03 17:43:41
Question: While investigating the subtler consequences of generational garbage collectors on application performance, I have hit a quite staggering discrepancy in the performance of a very basic operation – a simple write to a heap location – with respect to whether the value written is primitive or a reference. The microbenchmark:

    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 1, time = 1)
    @Measurement(iterations = 3, time = 1)
    @State(Scope.Thread)
    @Threads
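The rest of the benchmark is cut off. A hedged sketch of the kind of comparison described (field and class names are illustrative): the reference write additionally pays for the GC write barrier (card marking), while the primitive write is a plain store.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
@State(Scope.Thread)
public class HeapWriteBench {

    static class Holder {
        long primitiveField;
        Object referenceField;
    }

    private final Holder holder = new Holder();
    private long nextLong = 1;
    private final Object nextRef = new Object();

    // Plain store of a primitive to a heap location: essentially one mov.
    @Benchmark
    public void writePrimitive() {
        holder.primitiveField = nextLong++;
    }

    // Store of a reference: the JIT also emits a GC write barrier
    // (e.g. a card-table mark), which is the extra cost under discussion.
    @Benchmark
    public void writeReference() {
        holder.referenceField = nextRef;
    }
}
```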

CAS vs synchronized performance

别来无恙 submitted on 2019-12-03 17:01:55
Question: I've had this question for quite a while now, trying to read lots of resources and understand what is going on - but I've still failed to get a good understanding of why things are the way they are. Simply put, I'm trying to test how a CAS would perform vs. synchronized in contended and uncontended environments. I've put up this JMH test:

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 5,
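The excerpt stops mid-annotation. A hedged sketch of a typical CAS-vs-synchronized counter comparison (names and thread count are illustrative, not the poster's test):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
@Threads(4) // several threads sharing one state object creates contention
public class CasVsSynchronizedBench {

    private final AtomicLong atomicCounter = new AtomicLong();
    private long plainCounter;

    // Lock-free increment: a CAS loop inside getAndIncrement().
    @Benchmark
    public long casIncrement() {
        return atomicCounter.getAndIncrement();
    }

    // Monitor-based increment: thin locks may inflate to a full monitor
    // under contention.
    @Benchmark
    public long synchronizedIncrement() {
        return incrementUnderLock();
    }

    private synchronized long incrementUnderLock() {
        return ++plainCounter;
    }
}
```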

Benchmarking Java HashMap Get (JMH vs Looping)

落花浮王杯 submitted on 2019-12-03 13:47:52
My ultimate goal is to create a comprehensive set of benchmarks for several Java primitive collection libraries, using the standard Java collections as a baseline. In the past I have used the looping method of writing these kinds of micro-benchmarks: I put the function I am benchmarking in a loop and iterate 1 million+ times so the JIT has a chance to warm up, take the total time of the loop, and then divide by the number of iterations to get an estimate of how long a single call to the function under test would take. After recently reading about the JMH project and
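For contrast with the hand-rolled loop, here is a hedged sketch of a JMH benchmark for HashMap.get (map sizes and key distribution are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class HashMapGetBench {

    @Param({"1000", "100000"})
    int size;

    private Map<Integer, Integer> map;
    private int key;

    @Setup(Level.Trial)
    public void setUp() {
        map = new HashMap<>();
        for (int i = 0; i < size; i++) {
            map.put(i, i);
        }
    }

    // JMH handles warmup, per-operation timing, and dead-code elimination
    // (via the returned value); no manual loop or System.nanoTime() needed.
    @Benchmark
    public Integer get() {
        key = (key + 1) % size;
        return map.get(key);
    }
}
```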

Is it possible to make java.lang.invoke.MethodHandle as fast as direct invocation?

夙愿已清 submitted on 2019-12-03 12:55:25
I'm comparing the performance of MethodHandle::invoke and a direct static method invocation. Here is the static method:

    public class IntSum {
        public static int sum(int a, int b) {
            return a + b;
        }
    }

And here is my benchmark:

    @State(Scope.Benchmark)
    public class MyBenchmark {
        public int first;
        public int second;
        public final MethodHandle mhh;

        @Benchmark
        @OutputTimeUnit(TimeUnit.NANOSECONDS)
        @BenchmarkMode(Mode.AverageTime)
        public int directMethodCall() {
            return IntSum.sum(first, second);
        }

        @Benchmark
        @OutputTimeUnit(TimeUnit.NANOSECONDS)
        @BenchmarkMode(Mode.AverageTime)
        public int finalMethodHandle()
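The excerpt stops mid-declaration. A hedged sketch of the usual approach, assuming the IntSum class from the excerpt is on the classpath: holding the handle in a static final field lets HotSpot treat it as a constant and inline the target through invokeExact, which typically closes most of the gap to the direct call.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MethodHandleBench {

    // static final is the key: the JIT can constant-fold the handle and
    // inline its target, unlike an instance field such as mhh above.
    static final MethodHandle SUM_MH;

    static {
        try {
            SUM_MH = MethodHandles.lookup().findStatic(
                    IntSum.class, "sum",
                    MethodType.methodType(int.class, int.class, int.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public int first = 1;
    public int second = 2;

    @Benchmark
    public int directMethodCall() {
        return IntSum.sum(first, second);
    }

    @Benchmark
    public int staticFinalMethodHandle() throws Throwable {
        // invokeExact requires the exact call-site type: (int, int) -> int.
        return (int) SUM_MH.invokeExact(first, second);
    }
}
```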

Large performance gap between CPU's div instruction and HotSpot's JIT code

南笙酒味 submitted on 2019-12-03 12:16:39
Since the beginning of CPUs it has been general knowledge that the integer division instruction is expensive. I went to see how bad it is today, on CPUs which have the luxury of billions of transistors. I found that the hardware idiv instruction still performs significantly worse for constant divisors than the code the JIT compiler is able to emit, which doesn't contain the idiv instruction. To bring this out in a dedicated microbenchmark I've written the following:

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @OperationsPerInvocation(MeasureDiv.ARRAY_SIZE)
    @Warmup
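The benchmark body is cut off. A hedged reconstruction of the comparison described (MeasureDiv and ARRAY_SIZE appear in the excerpt's annotations; everything else is assumed): dividing by a compile-time constant lets the JIT strength-reduce the division to a multiply-and-shift sequence, while a divisor only known at run time forces the actual idiv instruction.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@OperationsPerInvocation(MeasureDiv.ARRAY_SIZE)
@State(Scope.Thread)
public class MeasureDiv {

    public static final int ARRAY_SIZE = 1024;

    private final long[] data = new long[ARRAY_SIZE];
    private long variableDivisor = 7;

    @Setup(Level.Iteration)
    public void fill() {
        for (int i = 0; i < ARRAY_SIZE; i++) {
            data[i] = i + 1;
        }
    }

    // Constant divisor: the JIT replaces idiv with multiply/shift.
    @Benchmark
    public void constantDivisor(Blackhole bh) {
        for (long v : data) {
            bh.consume(v / 7);
        }
    }

    // Divisor loaded from a field: the JIT has to emit the real idiv.
    @Benchmark
    public void runtimeDivisor(Blackhole bh) {
        for (long v : data) {
            bh.consume(v / variableDivisor);
        }
    }
}
```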

Extremely slow parsing of time zone with the new java.time API

亡梦爱人 submitted on 2019-12-03 08:17:08
Question: I was just migrating a module from the old Java dates to the new java.time API and noticed a huge drop in performance. It boiled down to parsing of dates with a time zone (I parse millions of them at a time). Parsing a date string without a time zone (yyyy/MM/dd HH:mm:ss) is fast - about 2 times faster than with the old Java date, around 1.5M operations per second on my PC. However, when the pattern contains a time zone (yyyy/MM/dd HH:mm:ss z), the performance drops about 15 times with the
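The excerpt is truncated. A hedged sketch of the two cases being compared, using the patterns quoted above (the sample input strings, locale, and zone text are illustrative assumptions):

```java
import java.time.LocalDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class TimeZoneParseBench {

    private final DateTimeFormatter withoutZone =
            DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss", Locale.US);
    private final DateTimeFormatter withZone =
            DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss z", Locale.US);

    private final String plain = "2015/03/04 12:34:56";
    private final String zoned = "2015/03/04 12:34:56 GMT";

    // Fast path: no zone text to resolve.
    @Benchmark
    public LocalDateTime parseWithoutZone() {
        return LocalDateTime.parse(plain, withoutZone);
    }

    // Slow path: the 'z' pattern must match the zone name against the set
    // of known zone display names, which dominates the parse cost.
    @Benchmark
    public ZonedDateTime parseWithZone() {
        return ZonedDateTime.parse(zoned, withZone);
    }
}
```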