While looking at a micro-optimization question that I asked yesterday (here), I found something strange: an OR statement in Java is running slightly faster than the equivalent array lookup.
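For context, since the original snippet isn't quoted here, the kind of code being compared looks roughly like this (a hypothetical reconstruction; the actual values and table size come from the linked question):

```java
public class LookupVsOr {

    // Hypothetical 256-entry table, non-zero only for the values of interest.
    static final byte[] LOOKUP = new byte[256];
    static {
        LOOKUP['a'] = 1;
        LOOKUP['e'] = 1;
        LOOKUP['i'] = 1;
        LOOKUP['o'] = 1;
    }

    // Variant 1: four straight int compares joined with ||.
    static boolean matchesOr(int b) {
        return b == 'a' || b == 'e' || b == 'i' || b == 'o';
    }

    // Variant 2: a single array load, plus the implicit null and bounds checks.
    static boolean matchesLookup(int b) {
        return LOOKUP[b] != 0;
    }
}
```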
In the current example, I agree that bounds checking is probably what's getting you (why the JVM doesn't optimize this out is beyond me - the sample code can deterministically be shown not to index outside the array).
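If bounds checks really are the cost, one way to make the in-range property explicit is to mask the index so it can never fall outside the table, which is exactly what the JIT would need to prove in order to drop the check. Whether a given HotSpot version actually exploits this is not something I'd promise, so treat this as an illustrative sketch reusing the hypothetical LOOKUP table above, not a proven fix:

```java
// Masking with 0xFF guarantees the index is in [0, 255], i.e. always inside the
// 256-entry table. Note the semantic change: out-of-range values silently wrap
// into the table instead of throwing ArrayIndexOutOfBoundsException.
static boolean matchesLookupMasked(int b) {
    return LOOKUP[b & 0xFF] != 0;
}
```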
Another possibility (especially with bigger lookup tables) is cache latency... It depends on the size of the processor's registers and how the JVM chooses to use them - but if the byte array isn't kept entirely on the processor, you'll see a performance hit compared to a simple OR, as the array is pulled onto the CPU for each check.
It's an interesting piece of code, but 2% is a really small difference. I don't think you can conclude very much from that.
I would guess that the issue is range checking on the array, and possibly that the array lookup is implemented as a method call. That would certainly overshadow 4 straight int compares. Have you looked at the bytecode?
According to this article, accessing array elements is "2 or 3 times as expensive as accessing non-array elements". Your test shows that the difference may be even bigger.
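For what it's worth, a rough way to reproduce the comparison on your own machine could look like the following - a naive nanoTime loop, not a proper JMH benchmark, reusing the hypothetical LookupVsOr methods sketched above:

```java
public class NaiveBenchmark {

    public static void main(String[] args) {
        int[] inputs = new int[1 << 20];
        java.util.Random rnd = new java.util.Random(42);
        for (int i = 0; i < inputs.length; i++) {
            inputs[i] = rnd.nextInt(128); // stay inside the 256-entry table
        }

        // Warm up both variants so the JIT gets a chance to compile them.
        long sink = 0;
        for (int round = 0; round < 20; round++) {
            sink += run(inputs, true);
            sink += run(inputs, false);
        }

        long t0 = System.nanoTime();
        sink += run(inputs, true);   // OR chain
        long t1 = System.nanoTime();
        sink += run(inputs, false);  // array lookup
        long t2 = System.nanoTime();

        System.out.println("or chain: " + (t1 - t0) / 1_000_000.0 + " ms");
        System.out.println("lookup:   " + (t2 - t1) / 1_000_000.0 + " ms");
        System.out.println("(sink = " + sink + ")"); // keep the work observable
    }

    // Count matches with one of the two variants so the result feeds the sink.
    private static long run(int[] inputs, boolean useOr) {
        long count = 0;
        for (int b : inputs) {
            if (useOr ? LookupVsOr.matchesOr(b) : LookupVsOr.matchesLookup(b)) {
                count++;
            }
        }
        return count;
    }
}
```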
Loading some random piece of data is generally slower than a little non-branching code.
It all depends upon the processor architecture, of course. Your first if statement could be implemented as four instructions. The second may potentially need a null pointer check and a bounds check, as well as the load and compare. Also, more code means more compile time and more chance for the optimisation to be impeded in some manner.