I've been reading some articles on change detection, and all of them say that monomorphic functions are much faster than polymorphic ones. For example, here is a quote:
The answer lies in the fact that VMs can do heuristic detection of "hot functions", meaning code that is executed hundreds or even thousands of times. If a function's execution count exceeds a predetermined limit, the VM's optimizer might pick up that bit of code and attempt to compile an optimized version based on the arguments passed to the function. In this case, it presumes your function will always be called with the same type of arguments (not necessarily the same objects).
The reason for this is well-documented in this v8-specific guideline document where an integer vs. general number optimization is explained. Say you have:
function add(a, b) { return a + b; }
...and you're always calling this function with integers. In that case the function might be optimized by compiling a version that performs integer addition directly on the CPU, which is fast. If, after optimization, you feed it a non-integer value, the VM deoptimizes the function and falls back to the unoptimized version, since the optimized code cannot perform integer addition on non-integers and would return erroneous results.
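For illustration, here is a minimal sketch of how that plays out. The exact thresholds and the points at which optimization and deoptimization happen are internal to the engine and vary by version, so the comments describe typical behavior rather than a guarantee:

    function add(a, b) { return a + b; }

    // Warm-up: this call site only ever sees small integers, so the JIT
    // can speculate that a + b is integer addition and emit a fast path.
    for (let i = 0; i < 100000; i++) {
        add(i, i + 1);
    }

    // A call with different argument types invalidates that speculation.
    // The engine bails out (deoptimizes) and falls back to the generic,
    // slower code path that also handles doubles, strings, objects, etc.
    add(0.5, 'x');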
In languages that let you declare overloaded methods, you can get around this problem by simply compiling multiple versions of the same method name with different argument signatures, each of which is then optimized on its own. Differently typed arguments force you onto a different overload, so you always call a distinct optimized method and there is never any question about which compiled version is being used.
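JavaScript has no overloading, but you can approximate the effect by hand: give each argument type its own function, so that each one only ever sees a single type and stays monomorphic. The helper names below are purely illustrative:

    // One function per argument type; each stays monomorphic on its own.
    function addInts(a, b)    { return (a | 0) + (b | 0); }
    function addDoubles(a, b) { return a + b; }
    function addStrings(a, b) { return a + b; }

    addInts(1, 2);          // only ever called with integers
    addDoubles(0.5, 1.5);   // only ever called with doubles
    addStrings('a', 'b');   // only ever called with strings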
You might think that the VM could keep multiple optimized copies of a function and check the argument types at each call to decide which compiled version to use. In theory that would work, if type checking before every invocation were free or very cheap. In practice it usually isn't, and the right trade-off threshold has to be tuned against real-world code.
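Hand-rolling such a dispatch in plain JavaScript makes the cost visible. This is a rough sketch reusing the illustrative addDoubles/addStrings helpers above; real engines use much cheaper internal checks such as hidden-class or type-tag comparisons, but the principle is the same: the check runs before any addition happens.

    function addDispatch(a, b) {
        // This check executes on every call, even though its outcome is
        // almost always the same for a given call site.
        if (typeof a === 'number' && typeof b === 'number') {
            return addDoubles(a, b);
        }
        return addStrings(String(a), String(b));
    }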
Here's a more general explanation, covering V8's optimizing compiler in particular (from Google I/O 2012):
https://youtu.be/UJPdhx5zTaw?t=26m26s
In short: functions that are invoked with the same argument types over and over again get optimized by the JIT compiler and are therefore faster.
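You can observe the effect with a crude micro-benchmark like the one below. The bench helper is purely illustrative, and the numbers it prints vary widely by engine, version, and hardware, so treat it as a way to see the trend rather than as authoritative figures:

    function add(a, b) { return a + b; }

    function bench(label, makeArgs) {
        let sink = 0;
        const t0 = Date.now();
        for (let i = 0; i < 1e7; i++) {
            const args = makeArgs(i);
            const r = add(args[0], args[1]);
            // Keep the result live so the loop is not optimized away entirely.
            sink += typeof r === 'number' ? r : r.length;
        }
        console.log(label + ':', Date.now() - t0, 'ms', '(sink=' + sink + ')');
    }

    // Monomorphic: the add() call site only ever sees numbers.
    bench('monomorphic', function (i) { return [i, i + 1]; });

    // Polymorphic: the same call site alternates between numbers and strings.
    bench('polymorphic', function (i) { return i % 2 ? [i, i + 1] : [String(i), 'x']; });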