What does O(log n) mean exactly?

执念已碎 2020-11-22 01:19

I am learning about Big O Notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally. But what exactly does O(log n) mean?

30 Answers
  • 2020-11-22 01:45

    What's log_b(n)?

    It is the number of times you can cut a log of length n repeatedly into b equal parts before reaching a section of size 1.

  • 2020-11-22 01:45

    Actually, if you have a list of n elements, and create a binary tree from that list (like in the divide and conquer algorithm), you will keep dividing by 2 until you reach lists of size 1 (the leaves).

    At the first step, you divide by 2. You then have 2 lists (2^1); you divide each by 2, so you have 4 lists (2^2); you divide again and have 8 lists (2^3), and so on, until your lists have size 1.

    That gives you the equation:

    n/(2^steps)=1 <=> n=2^steps <=> lg(n)=steps

    (you take the lg of each side, lg being the log base 2)

  • 2020-11-22 01:46

    But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)?

    I would rephrase this as 'the height of a complete binary tree is log n'. Figuring out the height of a complete binary tree would take O(log n) time, if you were traversing down it step by step.

    I cannot understand how to identify a function with a logarithmic time.

    The logarithm is essentially the inverse of exponentiation. So, if each 'step' of your function eliminates a constant fraction of the elements from the original item set, you have a logarithmic-time algorithm.

    For the tree example, you can easily see that each level you step down halves the number of remaining candidate elements, so the number of elements eliminated grows exponentially as you continue traversing. The popular example of looking through a name-sorted phone book is essentially equivalent to traversing down a binary search tree (the middle page is the root element, and at each step you can deduce whether to go left or right).

  • 2020-11-22 01:47

    Both of these cases take O(log n) time, because the loop variable is multiplied (case 1) or divided (case 2) by 2 on every iteration, so the loop body runs about log2(n) times:

    case 1:

        void f(int n) {
          int i;
          for (i = 1; i < n; i = i * 2)
            printf("%d ", i);
        }

    case 2:

        void f(int n) {
          int i;
          for (i = n; i >= 1; i = i / 2)
            printf("%d ", i);
        }
    
  • 2020-11-22 01:49

    O(log n) refers to a function (or algorithm, or step in an algorithm) working in an amount of time proportional to the logarithm (usually base 2, but not always, and in any event the base is insignificant in big-O notation*) of the size of the input.

    The logarithmic function is the inverse of the exponential function. Put another way, if your input grows exponentially (rather than linearly, as you would normally consider it), your function grows linearly.

    O(log n) running times are very common in any sort of divide-and-conquer application, because you are (ideally) cutting the work in half every time. If in each of the division or conquer steps, you are doing constant time work (or work that is not constant-time, but with time growing more slowly than O(log n)), then your entire function is O(log n). It's fairly common to have each step require linear time on the input instead; this will amount to a total time complexity of O(n log n).

    The running time complexity of binary search is an example of O(log n). This is because in binary search, you always ignore half of your input at each step, by dividing the array in half and only focusing on one half. Each step is constant-time, because in binary search you only need to compare one element with your key in order to figure out what to do next, regardless of how big the array you are considering is at that point. So you do approximately log(n)/log(2) steps.

    The running time complexity of merge sort is an example of O(n log n). This is because you divide the array in half with each step, resulting in a total of approximately log(n)/log(2) levels. However, at each level you need to perform merge operations on all n elements (whether it's one merge operation on two sublists of n/2 elements, or two merge operations on four sublists of n/4 elements, is irrelevant, because either way it amounts to merging n elements at that level). Thus, the total complexity is O(n log n).

    *Remember that in big-O notation, by definition, constants don't matter. Also, by the change-of-base rule for logarithms, the only difference between logarithms of different bases is a constant factor.

  • 2020-11-22 01:50

    The explanation below is using the case of a fully balanced binary tree to help you understand how we get logarithmic time complexity.

    A binary tree is a case where a problem of size n is divided into sub-problems of size n/2 until we reach a problem of size 1.

    That's how you get O(log n): it is the amount of work that needs to be done on such a tree, walking from the root down to a leaf, to reach a solution.

    A common algorithm with O(log n) time complexity is Binary Search, whose recurrence relation is T(n) = T(n/2) + O(1), i.e. at every subsequent level of the tree you halve the problem and do a constant amount of additional work.
