Question
Benchmarking in other threads (cf. https://stackoverflow.com/a/397617/1408611) has shown that instanceof in Java 6 is actually quite fast. How is this achieved?
I know that for single inheritance the fastest approach is a nested interval encoding, where each class maintains a [low, high] interval and an instanceof check is simply an interval-inclusion test, i.e. two integer comparisons. But how is it done for interfaces (interval inclusion only works for single inheritance)? And how is class loading handled? Loading new subclasses means that a lot of intervals have to be adjusted.
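For concreteness, here is a minimal sketch of that interval-inclusion test, assuming a preorder numbering of a single-inheritance class tree. All class, field and method names below are invented for illustration; this is not the JVM's actual metadata layout.

// Hypothetical nested-interval numbering for a single-inheritance class tree:
// each class gets a [low, high] range from a preorder walk, chosen so that
// the ranges of all its subclasses nest inside it. "x instanceof T" then
// reduces to two integer comparisons against the preorder number of x's class.
public class IntervalSubtypeSketch {

    static final class TypeInterval {
        final int low;   // this class's own preorder number
        final int high;  // largest preorder number among its subclasses

        TypeInterval(int low, int high) { this.low = low; this.high = high; }

        // true iff a class numbered classLow is this class or one of its subclasses
        boolean includes(int classLow) { return low <= classLow && classLow <= high; }
    }

    public static void main(String[] args) {
        // Hand-assigned numbering for Object -> Number -> {Double, Integer}
        TypeInterval object  = new TypeInterval(0, 3);
        TypeInterval number  = new TypeInterval(1, 3);
        TypeInterval dbl     = new TypeInterval(2, 2);
        TypeInterval integer = new TypeInterval(3, 3);

        int aDouble = dbl.low;                          // class number of a Double instance
        System.out.println(object.includes(aDouble));   // true
        System.out.println(number.includes(aDouble));   // true
        System.out.println(integer.includes(aDouble));  // false
    }
}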
Answer 1:
AFAIK each class knows all the classes it extends and interfaces it implements. These could be stored in a hash set giving O(1) lookup time.
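A minimal sketch of that idea, purely illustrative (the type and field names are invented, and this is not how the JVM actually lays out its class metadata): each type carries a hash set of itself plus all of its supertypes, so the test is a single O(1) lookup.

import java.util.HashSet;
import java.util.Set;

// Sketch of "each class knows all its supertypes": every type keeps a hash set
// containing itself, its superclasses and its implemented interfaces, so an
// instanceof-style test is one set lookup. Illustrative only.
public class SupertypeSetSketch {

    static final class TypeRecord {
        final String name;
        final Set<TypeRecord> supertypes = new HashSet<>();

        TypeRecord(String name, TypeRecord... directSupers) {
            this.name = name;
            supertypes.add(this);                     // every type is an instance of itself
            for (TypeRecord s : directSupers)
                supertypes.addAll(s.supertypes);      // inherit the transitive closure
        }

        boolean isInstanceOf(TypeRecord target) {
            return supertypes.contains(target);       // O(1) on average
        }
    }

    public static void main(String[] args) {
        TypeRecord object       = new TypeRecord("Object");
        TypeRecord serializable = new TypeRecord("Serializable");
        TypeRecord number       = new TypeRecord("Number", object, serializable);
        TypeRecord dbl          = new TypeRecord("Double", number);
        TypeRecord string       = new TypeRecord("String", object, serializable);

        System.out.println(dbl.isInstanceOf(number));        // true
        System.out.println(dbl.isInstanceOf(serializable));  // true
        System.out.println(string.isInstanceOf(number));     // false
    }
}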
When code usually takes the same branch, the cost is nearly eliminated: the CPU can speculatively execute the code inside the branch before it has even determined whether the branch should be taken, making the cost next to nothing.
As the micro-benchmark was performed 4 years ago, I expect the latest CPUs and JVMs to be much faster.
import java.io.Serializable;
import java.util.Arrays;

public class InstanceOfPerfTest {

    public static void main(String... args) {
        Object[] doubles = new Object[100000];
        Arrays.fill(doubles, 0.0);
        // replace a couple of entries with null
        doubles[100] = null;
        doubles[1000] = null;
        // repeat the tests so the JIT has warmed up by the last iteration
        for (int i = 0; i < 6; i++) {
            testSameClass(doubles);
            testSuperClass(doubles);
            testInterface(doubles);
        }
    }

    private static int testSameClass(Object[] doubles) {
        long start = System.nanoTime();
        int count = 0;
        for (Object d : doubles) {
            if (d instanceof Double)
                count++;
        }
        long time = System.nanoTime() - start;
        System.out.printf("instanceof Double took an average of %.1f ns%n", 1.0 * time / doubles.length);
        return count;
    }

    private static int testSuperClass(Object[] doubles) {
        long start = System.nanoTime();
        int count = 0;
        for (Object d : doubles) {
            if (d instanceof Number)
                count++;
        }
        long time = System.nanoTime() - start;
        System.out.printf("instanceof Number took an average of %.1f ns%n", 1.0 * time / doubles.length);
        return count;
    }

    private static int testInterface(Object[] doubles) {
        long start = System.nanoTime();
        int count = 0;
        for (Object d : doubles) {
            if (d instanceof Serializable)
                count++;
        }
        long time = System.nanoTime() - start;
        System.out.printf("instanceof Serializable took an average of %.1f ns%n", 1.0 * time / doubles.length);
        return count;
    }
}
finally prints
instanceof Double took an average of 1.3 ns
instanceof Number took an average of 1.3 ns
instanceof Serializable took an average of 1.3 ns
If I instead fill every other slot of doubles with a String:
for (int i = 0; i < doubles.length; i += 2)
    doubles[i] = "";
I get
instanceof Double took an average of 1.3 ns
instanceof Number took an average of 1.6 ns
instanceof Serializable took an average of 2.2 ns
Note: If I change
if (d instanceof Double)
to
if (d != null && d.getClass() == Double.class)
the performance is the same.
Answer 2:
I don't know how this is handled, but you could find out by looking at the source code of the JIT compiler, or by dumping the JIT-compiled native code for some examples.
"And how is class loading handled? Loading new subclasses means that a lot of intervals have to be adjusted."
There are a few situations where the JIT compiler optimizes based on the assumption that the current set of loaded classes is all that there are. If new classes are loaded, I understand that the compiler marks the affected JIT-compiled code as needing to be recompiled.
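One hedged sketch of how you might watch that happen yourself (class names are made up; the exact behavior and output depend on the JVM): on HotSpot, run with -XX:+PrintCompilation and look for the hot method being compiled, marked "made not entrant" after the extra class is loaded, and then compiled again.

// Illustrative sketch: while only Base is loaded, the JIT may assume value()
// has a single implementation and devirtualise/inline the call in sum().
// Loading a subclass at runtime breaks that assumption, so the affected
// compiled code can be invalidated and recompiled. Try:
//   java -XX:+PrintCompilation DeoptSketch
public class DeoptSketch {

    static class Base { int value() { return 1; } }

    static class Sub extends Base { @Override int value() { return 2; } }

    static int sum(Base b, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += b.value();    // hot virtual call the JIT would like to inline
        return s;
    }

    public static void main(String[] args) throws Exception {
        Base only = new Base();
        long total = 0;
        for (int i = 0; i < 20_000; i++)
            total += sum(only, 100);       // warm up with a single receiver type

        // Sub is only loaded here, via reflection; this invalidates any code
        // compiled under the assumption that Base.value() has no overrides.
        Base other = (Base) Class.forName("DeoptSketch$Sub")
                                 .getDeclaredConstructor().newInstance();
        total += sum(other, 100);

        System.out.println(total);
    }
}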
Source: https://stackoverflow.com/questions/12386789/how-is-instanceof-implemented-in-modern-jvm-implementations