I am learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm's running time proportionally.
But what exactly is O(log n)?
What it means precisely is "as n tends towards infinity, the time tends towards a*log(n), where a is a constant scaling factor".

Or actually, it doesn't quite mean that; more likely it means something like "the time divided by a*log(n) tends towards 1".
"Tends towards" has the usual mathematical meaning from 'analysis': for example, that "if you pick any arbitrarily small non-zero constant k
, then I can find a corresponding value X
such that ((time/(a*log(n))) - 1)
is less than k
for all values of n
greater than X
."
In lay terms, it means that the equation for the time may have some other components: e.g. it may have some constant startup time; but these other components pale into insignificance for large values of n, and a*log(n) is the dominating term for large n.
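For instance, here's a tiny sketch (the cost function and its constants, 100 and 5, are invented purely for illustration) showing a fixed startup cost being swamped by the a*log(n) part:

    import math

    # Invented cost model for illustration: a fixed startup cost of 100 plus a
    # logarithmic part with scaling factor a = 5. These numbers are made up.
    def cost(n):
        return 100 + 5 * math.log(n)

    # cost(n) / (a * log n) tends towards 1 as n grows (slowly, because log n
    # itself grows slowly): the startup cost fades into insignificance.
    for n in (10, 10**5, 10**20, 10**100, 10**300):
        print(f"n = 10**{len(str(n)) - 1}: ratio = {cost(n) / (5 * math.log(n)):.3f}")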
Note that if the equation were, for example ...
time(n) = a + b*log(n) + c*n + d*n*n
... then this would be O(n²) because, no matter what the values of the constants a, b, c, and non-zero d, the d*n*n term would always dominate the others for any sufficiently large value of n.
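To put rough numbers on that, here's a sketch with arbitrary made-up constants (a = 1000, b = 50, c = 10, d = 0.001) that prints the fraction of the total time coming from the d*n*n term:

    import math

    # Arbitrary made-up constants; all that matters is that d is non-zero.
    a, b, c, d = 1000.0, 50.0, 10.0, 0.001

    def time_n(n):
        return a + b * math.log(n) + c * n + d * n * n

    # The d*n*n term's share of the total tends towards 1 as n grows,
    # which is why the whole expression is O(n^2).
    for n in (10, 10**3, 10**5, 10**7):
        print(f"n = {n}: share from d*n*n = {d * n * n / time_n(n):.4f}")

Even though d is tiny and a is comparatively huge, the quadratic term still takes over eventually.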
That's what big O notation means: it means "what is the order of the dominant term for any sufficiently large n".