What does O(log n) mean exactly?

执念已碎 2020-11-22 01:19

I am learning about Big O Notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally. But what does O(log n) mean exactly?

30 Answers
  • 2020-11-22 01:57

    If you are looking for an intuition-based answer, I would like to offer two interpretations.

    1. Imagine a very high hill with a very broad base. There are two ways to reach the top: one is a dedicated pathway spiralling around the hill up to the summit; the other is a staircase of small terrace-like steps carved into the hillside. If the first way takes linear time O(n), the second one is O(log n).

    2. Imagine an algorithm which accepts an integer n as input and completes in time proportional to n; then it is O(n) or Theta(n). But if it runs in time proportional to the number of digits, or the number of bits in the binary representation of n, then the algorithm runs in O(log n) or Theta(log n) time (see the sketch below).
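
    A minimal Python sketch of the second interpretation (the function names are made up purely for illustration): the first loop does one constant-time step per value up to n, the second does one step per bit of n.

        def work_per_value(n):
            # O(n): one constant-time step for every value from 1 to n
            steps = 0
            for _ in range(n):
                steps += 1
            return steps

        def work_per_bit(n):
            # O(log n): one constant-time step per bit in the binary representation of n
            steps = 0
            while n > 0:
                n //= 2          # drop the lowest bit
                steps += 1
            return steps

        print(work_per_value(1024))  # 1024 steps
        print(work_per_bit(1024))    # 11 steps (1024 has 11 bits in binary)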

  • 2020-11-22 01:58

    Many good answers have already been posted to this question, but I believe we really are missing an important one - namely, the illustrated answer.

    What does it mean to say that the height of a complete binary tree is O(log n)?

    Picture a complete binary tree: each level contains double the number of nodes of the level above (hence binary).

    Binary search is an example with complexity O(log n). Say the nodes in the bottom level of such a tree, with 16 leaves, represent items in some sorted collection. Binary search is a divide-and-conquer algorithm, and walking down from the root shows that we need (at most) 4 comparisons to find the record we are searching for in this 16-item dataset.

    Assume we had instead a dataset with 32 elements. Adding one more level to the tree shows that we now need 5 comparisons to find what we are searching for: the tree has grown only one level deeper even though we doubled the amount of data. As a result, the complexity of the algorithm can be described as logarithmic.
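
    To make those numbers concrete, here is a minimal iterative binary search sketch in Python that also counts the halving steps (the step counter is added purely for illustration):

        def binary_search(sorted_items, target):
            """Return (index or -1, number of halving steps taken)."""
            lo, hi = 0, len(sorted_items) - 1
            steps = 0
            while lo <= hi:
                steps += 1
                mid = (lo + hi) // 2
                if sorted_items[mid] == target:
                    return mid, steps
                elif sorted_items[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1, steps

        print(binary_search(list(range(16)), 0))   # (0, 4): 16 items, 4 steps
        print(binary_search(list(range(32)), 0))   # (0, 5): doubling the data adds one step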

    Plotting log(n) on a plain piece of paper results in a graph where the rise of the curve decelerates as n increases.

  • 2020-11-22 01:58

    But what exactly is O(log n)?

    What it means precisely is "as n tends towards infinity, the time tends towards a*log(n) where a is a constant scaling factor".

    Or actually, it doesn't quite mean that; more likely it means something like "time divided by a*log(n) tends towards 1".

    "Tends towards" has the usual mathematical meaning from 'analysis': for example, that "if you pick any arbitrarily small non-zero constant k, then I can find a corresponding value X such that ((time/(a*log(n))) - 1) is less than k for all values of n greater than X."


    In lay terms, it means that the equation for time may have some other components: e.g. it may have some constant startup time; but these other components pale into insignificance for large values of n, and a*log(n) is the dominating term for large n.

    Note that if the equation were, for example ...

    time(n) = a + b*log(n) + c*n + d*n*n

    ... then this would be O(n squared) because, no matter what the values of the constants a, b, c, and non-zero d, the d*n*n term would always dominate over the others for any sufficiently large value of n.

    That's what big O notation means: it means "what is the order of the dominant term for any sufficiently large n".
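
    To see that domination numerically, here is a small Python check with constants chosen arbitrarily for illustration:

        import math

        # constants chosen arbitrarily for illustration
        a, b, c, d = 1000.0, 500.0, 10.0, 0.001

        def time_fn(n):
            return a + b * math.log(n) + c * n + d * n * n

        for n in (10, 1_000, 1_000_000):
            share = d * n * n / time_fn(n)
            print(n, round(share, 4))   # fraction of the total contributed by the n*n term

    The printed fraction climbs towards 1 as n grows: the d*n*n term swallows everything else.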

  • 2020-11-22 01:59

    O(log N) basically means time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1,000 elements, and so on.

    It is O(log n) when we do divide-and-conquer algorithms, e.g. binary search. Another example is quicksort, where each time we divide the array into two parts, and each time it takes O(N) time to find the pivot element. Hence it is N * O(log N), i.e. O(N log N).

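    A minimal quicksort sketch in Python (the pivot choice is deliberately simplistic); the list-comprehension partition is the O(N) work performed at each of the roughly log N recursion levels in the average case:

        def quicksort(items):
            if len(items) <= 1:
                return items
            pivot = items[-1]                                   # simplistic pivot choice
            smaller = [x for x in items[:-1] if x <= pivot]     # O(N) partitioning work
            larger = [x for x in items[:-1] if x > pivot]       # at each recursion level
            return quicksort(smaller) + [pivot] + quicksort(larger)

        print(quicksort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]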
  • 2020-11-22 02:00

    You can think of O(log N) intuitively by saying the time is proportional to the number of digits in N.

    If an operation performs constant time work on each digit or bit of an input, the whole operation will take time proportional to the number of digits or bits in the input, not the magnitude of the input; thus, O(log N) rather than O(N).

    If an operation makes a series of constant-time decisions, each of which halves (or reduces by a factor of 3, 4, 5, ...) the size of the input to be considered, the whole will take time proportional to log base 2 (base 3, base 4, base 5, ...) of the size N of the input, rather than being O(N).

    And so on.
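
    For example, counting the decimal digits of N by repeated division makes roughly log10(N) constant-time decisions rather than N of them (a minimal sketch):

        def count_digits(n):
            # each iteration strips one decimal digit, so the loop
            # runs roughly log10(n) times, not n times
            digits = 0
            while n > 0:
                n //= 10
                digits += 1
            return digits

        print(count_digits(1_000_000))   # 7 iterations for a 7-digit number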

  • 2020-11-22 02:01

    Every time we write an algorithm or code we try to analyze its asymptotic complexity. It is different from its time complexity.

    Asymptotic complexity describes the behaviour of an algorithm's execution time as the input grows, while the time complexity is the actual execution time. Some people use these terms interchangeably.

    Because time complexity depends on various parameters, viz.:
    1. Physical system
    2. Programming language
    3. Coding style
    4. And much more ...

    The actual execution time is not a good measure for analysis.


    Instead we take the input size as the parameter, because whatever the code is, the input is the same. So the execution time is a function of input size.

    Following is an example of a linear-time algorithm:


    Linear Search
    Given n input elements, to search for an element in the array you need at most n comparisons. In other words, no matter what programming language you use, what coding style you prefer, or what system you run it on, in the worst case it requires only n comparisons. The execution time is linearly proportional to the input size.
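
    A minimal linear search sketch in Python; in the worst case (the target is last or absent) the loop performs n comparisons, regardless of language or machine:

        def linear_search(items, target):
            for index, value in enumerate(items):   # up to n comparisons
                if value == target:
                    return index
            return -1                               # worst case: all n items compared

        print(linear_search([4, 8, 15, 16, 23, 42], 23))   # 4
        print(linear_search([4, 8, 15, 16, 23, 42], 99))   # -1, after 6 comparisons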

    And it's not just search: whatever the work may be (increment, compare, or any other operation), it is a function of the input size.

    So when you say an algorithm is O(log n), it means its execution time grows as the logarithm of the input size n.

    As the input size increases, the work done (here, the execution time) increases (hence the proportionality).

          n      Work
          2     1 unit of work
          4     2 units of work
          8     3 units of work
    

    See how the work done increases as the input size increases, and how that relationship is independent of any machine. If you try to find out the actual value of a unit of work, it does depend on the parameters specified above; it will change from system to system.
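
    The table above is just the base-2 logarithm: doubling n adds one unit of work. A quick check in Python (what one "unit of work" costs in real time depends on the machine, as noted above):

        import math

        for n in (2, 4, 8, 16, 1024):
            print(n, int(math.log2(n)), "units of work")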
