So I was trying to measure the time two different algorithm implementations took to accomplish a given task, and here is the result:
```
i    alg1    alg2
4    0.00
```
For basic timing, you can use Guava's Stopwatch class (or just grab its source code if you don't want to pull in the whole Guava library). For a more complete benchmarking solution, look at Caliper by the same team.
Both of these are based on System.nanoTime(), which you should prefer over System.currentTimeMillis() for measuring elapsed time. The basic reason is that System.currentTimeMillis() is a "clock" (which tries to return wall time), whereas System.nanoTime() is a "timer" (which tries to return the time since some arbitrary point).
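To make the "timer" idea concrete, here is a minimal sketch of an elapsed-time helper built on System.nanoTime(); the class and method names are my own for illustration, not from Guava or any other library:

```java
// Minimal elapsed-time helper based on System.nanoTime().
// Class and method names are illustrative, not from any library.
public class ElapsedTimer {

    // Runs the task and returns the elapsed time in nanoseconds.
    // System.nanoTime() is a timer: its values are only meaningful
    // as differences, and are unaffected by wall-clock adjustments.
    public static long timeNanos(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i;
            }
        });
        System.out.println("Elapsed: " + elapsed + " ns");
    }
}
```

Note that a single nanoTime() value on its own tells you nothing; only the difference between two calls in the same JVM is meaningful.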
You want a clock when you're trying to figure out when a single event happened, so you can line it up with your watch or the clock on your wall (or the clock in some other computer). But it's not appropriate for measuring the elapsed time between two events on the same system, since the computer will occasionally adjust its notion of how its own internal clock corresponds to wall-time. For instance, if you do
```java
long timeA = System.currentTimeMillis();
doStuff();
long timeB = System.currentTimeMillis();
System.out.println("Elapsed time: " + (timeB - timeA));
```
it's possible to get a negative result if NTP adjusts the clock backwards while doStuff() is executing. System.nanoTime(), being a timer rather than a clock, should ignore that adjustment and thus avoid this problem.
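Applied to the original question's setup, a nanoTime()-based comparison of the two implementations might look like the sketch below; alg1 and alg2 are placeholder workloads standing in for your real code:

```java
import java.util.concurrent.TimeUnit;

public class AlgorithmTiming {

    // Placeholder workloads standing in for the two implementations
    // from the question; substitute your real algorithms here.
    static void alg1() { for (int i = 0; i < 100_000; i++) { } }
    static void alg2() { for (int i = 0; i < 200_000; i++) { } }

    // Measures the task with System.nanoTime(), so the result cannot
    // go negative the way a currentTimeMillis() delta can.
    static long elapsedMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        // Convert the nanosecond delta to milliseconds for printing.
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) {
        System.out.println("alg1: " + elapsedMillis(AlgorithmTiming::alg1) + " ms");
        System.out.println("alg2: " + elapsedMillis(AlgorithmTiming::alg2) + " ms");
    }
}
```

For anything beyond a rough comparison, a harness like Caliper (or JMH) is still preferable, since it handles JIT warmup and repetition for you.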
(Note that all of the above is conceptual; unfortunately things can get messy at the implementation level. But this doesn't change the recommendation: System.nanoTime() is supposed to be the best timer you can get on your platform, and System.currentTimeMillis() is supposed to be the best clock you can get.)