Why is getting a member faster than calling hasOwnProperty?

Backend · open · 1 answer · 642 views
Asked by 你的背包 on 2021-02-02 09:33

I'm writing a JS object that needs to perform really basic key-value caching on string:function pairs. The class runs on the client and caches partially-compiled templates for

1 Answer
  • Answered 2021-02-02 10:12

    The secret behind x[k] performance on Chrome (V8) is in this chunk of assembly from ic-ia32.cc. In short: V8 maintains a global cache that maps a (map, name) pair to an index giving the location of the property. "Map" is V8's internal name for hidden classes; other JS engines call them differently (shapes in SpiderMonkey, structures in JavaScriptCore). This cache is populated only for own properties of fast-mode objects. Fast mode is an object representation that does not use a dictionary to store properties, but instead looks more like a C struct, with properties occupying fixed offsets.
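
    The access pattern being described can be sketched in plain JavaScript (a hypothetical template cache; the object shape and loop below are illustrative, not the asker's actual code):

    ```javascript
    // Hypothetical fast-mode cache object: all own properties, one hidden
    // class (map), no dictionary backing.
    const templates = {
      header: () => '<h1>…</h1>',
      footer: () => '<footer>…</footer>',
    };

    // Keyed loads like templates[name] can hit V8's global (map, name)
    // cache after the first pass, so later lookups stay in generated code.
    function countHits(names) {
      let hits = 0;
      for (const name of names) {
        if (templates[name] !== undefined) hits++;
      }
      return hits;
    }
    ```

    Because every key looked up shares the same receiver map, the cache entry populated on the first iteration serves all the rest.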

    As you can see, once the cache is populated the first time your loop executes, it will be hit on every subsequent repetition, meaning that the property lookup is always handled inside the generated code and never enters the runtime, because every property the benchmark looks up actually exists on the object. If you profile the code you will see the following line:

    256   31.8%   31.8%  KeyedLoadIC: A keyed load IC from the snapshot
    

    and dumping native code counters would show this (the actual number depends on how many iterations of the benchmark you run):

    | c:V8.KeyedLoadGenericLookupCache                               |    41999967 |
    

    which illustrates that the cache is indeed being hit.

    Now V8 does not actually use this cache for either x.hasOwnProperty(k) or k in x; in fact it uses no cache at all for them and always ends up calling into the runtime. For example, in the profile for the hasOwnProperty case you will see a lot of C++ methods:

    339   17.0%   17.0%  _ZN2v88internal8JSObject28LocalLookupRealNamedPropertyEPNS0_4NameEPNS0_12LookupResultE.constprop.635
    254   12.7%   12.7%  v8::internal::Runtime_HasLocalProperty(int, v8::internal::Object**, v8::internal::Isolate*)
    156    7.8%    7.8%  v8::internal::JSObject::HasRealNamedProperty(v8::internal::Handle<v8::internal::JSObject>, v8::internal::Handle<v8::internal::Name>)
    134    6.7%    6.7%  v8::internal::Runtime_IsJSProxy(int, v8::internal::Object**, v8::internal::Isolate*)
     71    3.6%    3.6%  int v8::internal::Search<(v8::internal::SearchMode)1, v8::internal::DescriptorArray>(v8::internal::DescriptorArray*, v8::internal::Name*, int)
    

    and the main problem here is not even that these are C++ methods rather than handwritten assembly (like the KeyedLoadIC stub), but that these methods perform the same lookup again and again without caching the outcome.
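
    The three lookups being compared can be put side by side (a sketch for illustration only; timings vary wildly by engine and version, so only the semantics are shown here):

    ```javascript
    const obj = { a: 1, b: 2, c: 3 };
    const keys = ['a', 'b', 'c'];

    // Keyed load: served by the KeyedLoadIC stub and its lookup cache
    // in the V8 version profiled above.
    const viaIndex = () => keys.filter((k) => obj[k] !== undefined).length;

    // hasOwnProperty: in that same V8 version, falls through to C++
    // runtime functions on every call.
    const viaHasOwn = () => keys.filter((k) => obj.hasOwnProperty(k)).length;

    // The in operator: also consults the prototype chain, and likewise
    // ends up in the runtime there.
    const viaIn = () => keys.filter((k) => k in obj).length;
    ```

    For own data properties that are never undefined, all three agree on the result; they differ only in which lookup path the engine takes.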

    Now the implementations can be wildly different between engines, so unfortunately I can't give a complete explanation of what happens on other engines, but my guess would be that any engine that shows faster x[k] performance employs a similar cache (or represents x as a dictionary, which would also allow fast probing in the generated code), and any engine that shows equivalent performance between the cases either does not use any caching or employs the same cache for all three operations (which would make perfect sense).
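
    As an aside on the dictionary representation: in V8, deleting a property is a well-known way to push an object out of fast mode into dictionary mode (this trigger is an assumption about engine heuristics; the transition is invisible from JavaScript itself):

    ```javascript
    // Both objects behave identically from JavaScript; only the engine's
    // internal representation may differ.
    const fast = { a: 1, b: 2 };

    const dict = { a: 1, b: 2, tmp: 0 };
    delete dict.tmp; // typically transitions `dict` to dictionary mode in V8

    // Lookups remain semantically the same either way:
    const same =
      fast.a === dict.a &&
      fast.hasOwnProperty('b') === dict.hasOwnProperty('b');
    ```

    Which representation an object ends up in is purely an engine-internal performance decision, which is why the same benchmark can behave differently across engines.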

    If V8 probed the same cache before going to the runtime for hasOwnProperty and the in operator, then your benchmark would have shown equivalent performance between the cases.
