Why is the “new” keyword so much more efficient than assignment?

感情败类 2020-12-14 21:17

I've got two methods to read in a string and create Character objects:

static void newChar(String string) {
    int len = string.length();
    System.out.p         
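
The snippet above is truncated; based on the answers below, the comparison boils down to these two loops (a reconstruction for illustration only, so the method names and the omitted timing/printing code are assumptions, not the original post):

    // "new" variant: always allocates a fresh wrapper object
    static void newChar(String string) {
        int len = string.length();
        for (int i = 0; i < len; i++) {
            Character c = new Character(string.charAt(i));
        }
    }

    // "assignment" variant: autoboxing calls Character.valueOf, which returns
    // a cached instance for char values up to 127
    static void justChar(String string) {
        int len = string.length();
        for (int i = 0; i < len; i++) {
            Character c = string.charAt(i);
        }
    }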


        
2 Answers
  • 2020-12-14 21:32

    Well, I'm not sure whether Marko intentionally replicated the original mistake. TL;DR: the new instance is never used, so its allocation gets eliminated; adjusting the benchmark reverses the result. Don't trust faulty benchmarks; learn from them.

    Here's the JMH benchmark:

    // Imports added for completeness; these are the pre-1.0 JMH package names
    // (@GenerateMicroBenchmark and BlackHole later became @Benchmark and Blackhole).
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.logic.BlackHole;

    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 3, time = 1)
    @Measurement(iterations = 3, time = 1)
    @Fork(3)
    @State(Scope.Thread)
    public class Chars {
    
        // The source needs to be a @State field to avoid constant-folding
        // optimizations on it. Results need to be sunk into the BlackHole
        // to avoid dead-code elimination.
        private String string;
    
        @Setup
        public void setup() {
            string = "12345678901234567890";
            for (int i = 0; i < 10; i++) {
                string += string;
            }
        }
    
        @GenerateMicroBenchmark
        public void newChar_DCE(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                Character c = new Character(string.charAt(i));
            }
        }
    
        @GenerateMicroBenchmark
        public void justChar_DCE(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                Character c = Character.valueOf(string.charAt(i));
            }
        }
    
        @GenerateMicroBenchmark
        public void newChar(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                Character c = new Character(string.charAt(i));
                bh.consume(c);
            }
        }
    
        @GenerateMicroBenchmark
        public void justChar(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                Character c = Character.valueOf(string.charAt(i));
                bh.consume(c);
            }
        }
    
        @GenerateMicroBenchmark
        public void newChar_prim(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                char c = new Character(string.charAt(i));
                bh.consume(c);
            }
        }
    
        @GenerateMicroBenchmark
        public void justChar_prim(BlackHole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                char c = Character.valueOf(string.charAt(i));
                bh.consume(c);
            }
        }
    }
    

    ...and this is the result:

    Benchmark                   Mode   Samples         Mean   Mean error    Units
    o.s.Chars.justChar          avgt         9       93.051        0.365    us/op
    o.s.Chars.justChar_DCE      avgt         9       62.018        0.092    us/op
    o.s.Chars.justChar_prim     avgt         9       82.897        0.440    us/op
    o.s.Chars.newChar           avgt         9      117.962        4.679    us/op
    o.s.Chars.newChar_DCE       avgt         9       25.861        0.102    us/op
    o.s.Chars.newChar_prim      avgt         9       41.334        0.183    us/op
    

    DCE stands for "Dead Code Elimination", and that is what the original benchmark is suffering from. If we eliminate that effect (in JMH terms, by sinking the values into the Blackhole), the scores reverse. So, in retrospect, the new Character() in the original code appears to benefit greatly from DCE, while Character.valueOf() is not nearly as lucky. I'm not sure we should discuss why, because this has no bearing on real-world use cases, where the produced Characters are actually used.

    You can go further on two fronts from here:

    • Get the assembly for the benchmark methods to confirm the conjecture above. See PrintAssembly.
    • Run with more threads. The difference between returning a cached Character and instantiating a new one would diminish as we increase the number of threads and consequently hit the "allocation wall". Both follow-ups are sketched below.
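
    For example, with the current JMH API (1.x, where @GenerateMicroBenchmark became @Benchmark and BlackHole became Blackhole), both follow-ups could be wired up roughly like this. The thread count, fork settings and class name are illustrative, and PrintAssembly additionally requires the hsdis disassembler library:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 3, time = 1)
    @Measurement(iterations = 3, time = 1)
    @State(Scope.Benchmark)
    // Run the benchmark method from several threads at once to approach the
    // "allocation wall", and print the generated assembly in the forked JVM.
    @Threads(4)
    @Fork(value = 1, jvmArgsAppend = {
            "-XX:+UnlockDiagnosticVMOptions",
            "-XX:+PrintAssembly"
    })
    public class CharsContended {

        private String string;

        @Setup
        public void setup() {
            string = "12345678901234567890";
            for (int i = 0; i < 10; i++) {
                string += string;
            }
        }

        @Benchmark
        public void newChar(Blackhole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                bh.consume(new Character(string.charAt(i)));
            }
        }

        @Benchmark
        public void justChar(Blackhole bh) {
            int len = string.length();
            for (int i = 0; i < len; i++) {
                bh.consume(Character.valueOf(string.charAt(i)));
            }
        }
    }

    The standalone JMH runner exposes the same knobs from the command line (-t for the thread count, -jvmArgs for JVM flags).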

    UPD: Following up on Marko's question, it does seem the major impact comes from eliminating the allocation itself, whether via EA or DCE; see the *_prim tests.

    UPD2: Looked into the assembly. Re-running with -XX:-DoEscapeAnalysis confirms that the major effect comes from eliminating the allocation via escape analysis:

    Benchmark                   Mode   Samples         Mean   Mean error    Units
    o.s.Chars.justChar          avgt         9       94.318        4.525    us/op
    o.s.Chars.justChar_DCE      avgt         9       61.993        0.227    us/op
    o.s.Chars.justChar_prim     avgt         9       82.824        0.634    us/op
    o.s.Chars.newChar           avgt         9      118.862        1.096    us/op
    o.s.Chars.newChar_DCE       avgt         9       97.530        2.485    us/op
    o.s.Chars.newChar_prim      avgt         9      101.905        1.871    us/op
    

    This shows the original DCE conjecture is incorrect: EA is the major contributor. The DCE results are still faster because we do not pay the cost of unboxing, or of treating the returned value with any respect at all, which is exactly what makes the benchmark faulty in that regard.

  • 2020-12-14 21:36

    TL;DR section

    Good news

    Your measurement does expose a real effect.

    Bad news

    It does so mostly by chance because your benchmark has many technical flaws, and the effect it exposes is probably not the one you have in mind.

    The new Character() approach is faster if and only if HotSpot's Escape Analysis succeeds in proving that the resulting instance can safely be allocated on the stack instead of on the heap. Therefore the effect is not nearly as general as your question implies.

    Explanation of effect

    The reason new Character() is faster is locality of reference: the instance lives on the stack and all access to it goes through CPU cache hits. When you reuse a cached instance instead, you must

    1. access a remote static field;
    2. dereference it into a remote array;
    3. dereference an array entry into a remote Character instance;
    4. access the char contained in that instance.

    Each dereference is a potential CPU cache miss. Furthermore, it forces a part of the cache to be redirected towards those remote locations, causing more cache misses on the input string and/or the stack locations.
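
    For reference, this is roughly what the cached-instance path looks like inside OpenJDK's java.lang.Character (a simplified sketch, not the verbatim source): every valueOf call for a value up to 127 walks static field -> cache array -> cached instance.

    // Simplified sketch of the caching inside java.lang.Character (OpenJDK):
    private static class CharacterCache {
        static final Character[] cache = new Character[128];
        static {
            for (int i = 0; i < cache.length; i++) {
                cache[i] = new Character((char) i);
            }
        }
    }

    public static Character valueOf(char c) {
        if (c <= 127) { // ASCII range: always served from the cache
            return CharacterCache.cache[(int) c];
        }
        return new Character(c);
    }

    Each of those hops is one of the dereferences listed above; a freshly allocated (or scalar-replaced) instance avoids all of them.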

    DETAILS

    I have run this code with JMH:

    // Imports added for completeness (pre-1.0 JMH API).
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    public class Chars {
      static String string = "12345678901234567890";
      static {
        for (int i = 0; i < 10; i++) string += string;
      }
    
      @GenerateMicroBenchmark
      public void newChar() {
        int len = string.length();
        for (int i = 0; i < len; i++) new Character(string.charAt(i));
      }
    
      @GenerateMicroBenchmark
      public void justChar() {
        int len = string.length();
        for (int i = 0; i < len; i++) Character.valueOf(string.charAt(i));
      }
    }
    

    This keeps the essence of your code, but eliminates some systematic errors like warmup and compilation times. These are the results:

    Benchmark              Mode Thr    Cnt  Sec         Mean   Mean error    Units
    o.s.Chars.justChar     avgt   1      3    5       39.062        6.587  usec/op
    o.s.Chars.newChar      avgt   1      3    5       19.114        0.653  usec/op
    

    And this would be my best guess at what's going on:

    • in newChar you are creating a fresh instance of Character. HotSpot's Escape Analysis can prove that the instance never escapes, so it allows stack allocation or, in the special case of Character, can eliminate the allocation altogether because the data in it is provably never used (see the sketch after this list);

    • in justChar you involve lookup into the Character cache array, which has some cost.
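
    Roughly speaking, once Escape Analysis proves the wrapper never escapes, the JIT can reduce the newChar loop to something like the following (an illustration of the effect, not actual compiler output):

      // What the newChar loop effectively degenerates to after escape analysis
      // and scalar replacement: no Character allocation remains, only charAt
      // calls, and since c is never used the body is a further DCE candidate.
      static void newCharAfterEA(String string) {
        int len = string.length();
        for (int i = 0; i < len; i++) {
          char c = string.charAt(i);
        }
      }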

    UPDATE

    In response to Aleks's criticism, I added some more methods to the benchmark. The main effect remains stable, but we get even more fine-grained details about the lesser optimization effects.

      @GenerateMicroBenchmark
      public int newCharUsed() {
        int len = string.length(), sum = 0;
        for (int i = 0; i < len; i++) sum += new Character(string.charAt(i));
        return sum;
      }
    
      @GenerateMicroBenchmark
      public int justCharUsed() {
        int len = string.length(), sum = 0;
        for (int i = 0; i < len; i++) sum += Character.valueOf(string.charAt(i));
        return sum;
      }
    
      @GenerateMicroBenchmark
      public void newChar() {
        int len = string.length();
        for (int i = 0; i < len; i++) new Character(string.charAt(i));
      }
    
      @GenerateMicroBenchmark
      public void justChar() {
        int len = string.length();
        for (int i = 0; i < len; i++) Character.valueOf(string.charAt(i));
      }
    
      @GenerateMicroBenchmark
      public void newCharValue() {
        int len = string.length();
        for (int i = 0; i < len; i++) new Character(string.charAt(i)).charValue();
      }
    
      @GenerateMicroBenchmark
      public void justCharValue() {
        int len = string.length();
        for (int i = 0; i < len; i++) Character.valueOf(string.charAt(i)).charValue();
      }
    

    DESCRIPTION:

    • the base versions are justChar and newChar;
    • ...Value methods add the charValue call to the base version;
    • ...Used methods add both the charValue call (implicitly) and use the value to preclude any Dead Code Elimination.

    RESULTS:

    Benchmark                   Mode Thr    Cnt  Sec         Mean   Mean error    Units
    o.s.Chars.justChar          avgt   1      3    1      246.847        5.969  usec/op
    o.s.Chars.justCharUsed      avgt   1      3    1      370.031       26.057  usec/op
    o.s.Chars.justCharValue     avgt   1      3    1      296.342       60.705  usec/op
    o.s.Chars.newChar           avgt   1      3    1      123.302       10.596  usec/op
    o.s.Chars.newCharUsed       avgt   1      3    1      172.721        9.055  usec/op
    o.s.Chars.newCharValue      avgt   1      3    1      123.040        5.095  usec/op
    
    • there is evidence of some Dead Code Elimination (DCE) in both the justChar and newChar variants, but it is only partial;
    • with the newChar variant, adding charValue has no effect, so apparently it was DCE'd;
    • with justChar, charValue does have an effect, so it seems not to have been eliminated;
    • DCE has a minor overall effect, as witnessed by the stable difference between newCharUsed and justCharUsed.