Determining Big O Notation

后悔当初 2020-12-17 22:50

I need help understanding/doing Big O Notation. I understand the purpose of it, I just don't know how to "determine the complexity given a piece of code".

5 Answers
  • 2020-12-17 23:18

    a

    n=6;
    cout<<n<<endl;
    

    Constant time, O(1). This means as n increases from 1 to infinity, the amount of time needed to execute this statement does not increase. Each time you increment n, the amount of time needed does not increase.

    b

    n=16;
    for (i=0; i<n; i++)
        cout<<i<<endl;
    

    Linear time, O(n). This means that as n increases from 1 to infinity, the amount of time needed to execute this statement increases linearly. Each time you increment n, the additional time needed over the previous value stays constant.

    c

    i=6;
    n=23;
    while (i<n) {
        cout<<i-6<<endl;
        i++;
    }
    

    Linear time, O(n), the same as example b above.

    d

    int a[ ] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19};
    n=10;
    for (i=0; i<n; i++)
        a[i]=a[i]*2;
    for (i=9; i>=0; i--)
        cout<<a[i]<<endl;
    

    Linear time, O(n). As n increases from 1 to infinity, the amount of time needed to execute these statements increases linearly. The line is twice as steep as in example c, but Big O notation does not concern itself with how steep the line is, only with how the time requirement grows. The two loops together still require an amount of time that grows linearly with n.

    e

    sum=0;
    n=6;
    k=pow(2,n);
    for (i=0;i<k;i++)
        sum=sum+k;
    

    Here is a table of how many times sum=sum+k is executed for a given value of n:

    n     number_of_times_in_loop
    1     2^1 = 2
    2     2^2 = 4
    3     2^3 = 8
    4     2^4 = 16
    5     2^5 = 32
    6     2^6 = 64
    

    As n goes from 1 to infinity, notice how the number of times we are in the loop exponentially increases. 2->4->8->16->32->64. What would happen if I plugged in n of 150? The number of times we would be in the loop becomes astronomical.

    This is exponential time, O(2^n): it denotes an algorithm whose running time doubles with each additional element in the input data set. Plug in a large n at your own peril; you will be waiting hours or years for the calculation to complete even for a modest number of input items.
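
    To get a feel for that doubling in actual code, here is a minimal sketch using the classic naive recursive Fibonacci function. This is my own illustration, not part of the original question:

    // Hypothetical sketch: naive recursive Fibonacci. Each call spawns two
    // more calls, so the number of calls roughly doubles every time n grows
    // by 1 -- exponential growth, O(2^n).
    long long fib(int n) {
        if (n < 2) return n;              // base cases: fib(0)=0, fib(1)=1
        return fib(n - 1) + fib(n - 2);   // two recursive calls per invocation
    }
    // fib(30) already makes a few million calls; fib(60) would make trillions.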

    Why do we care?

    As computer scientists, we are interested in properly understanding Big O notation because we want to be able to say things like this with authority and conviction:

    "Jim's algorithm for calculating the distance between planets takes exponential time. If we want to do 20 objects it takes too much time, his code is crap because I can make one in linear time."

    And better yet, if they don't like what they hear, you can prove it with Math.

  • 2020-12-17 23:19

    To understand the full mathematical definition I recommend Wikipedia. For simple purposes, big O is an upper bound on an algorithm: given a routine, how many times does it iterate before finishing for an input of length n? We call this upper bound O(n), or "big O of n".
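
    For reference, here is that upper bound written out formally. This is the standard textbook definition, not something specific to the examples in this question:

    % f is O(g) when some constant multiple of g eventually dominates f
    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 : 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0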

    In code, accessing an element of a plain array in C++ is O(1). It is one operation regardless of how large the array is.

    A linear iteration through an array in a for loop is O(n)

    Nested for loops are O(n^2), or O(n^k) if you have k levels of nesting (sketched below).
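
    For instance, a made-up nested-loop snippet (not from the question) where the innermost statement runs n * n times:

    int n = 16, count = 0;
    for (int i = 0; i < n; i++)         // outer loop: n iterations
        for (int j = 0; j < n; j++)     // inner loop: n iterations per outer one
            count++;                    // executed n * n = n^2 times -> O(n^2)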

    Recursion with divide and conquer (heaps, binary trees, etc.) is O(lg n) or O(n lg n), depending on the operation (see the binary-search sketch below).
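
    As a sketch of the divide-and-conquer case, here is a plain iterative binary search over a sorted array (my example, not from the question); the search range halves every iteration, so the loop runs about lg(n) times:

    int binary_search(const int a[], int n, int target) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {                       // range shrinks by half each pass
            int mid = lo + (hi - lo) / 2;        // midpoint without overflow
            if (a[mid] == target) return mid;    // found it
            if (a[mid] < target)  lo = mid + 1;  // discard the lower half
            else                  hi = mid - 1;  // discard the upper half
        }
        return -1;                               // not found, after ~lg(n) steps
    }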

  • 2020-12-17 23:27

    These examples are fairly simple. First you have to determine the main (simple) operation in the code and express the number of invocations of this operation as a function of the input size.

    To be less abstract:

    a.)

    This code always runs in constant time. That time depends on the computer, I/O latency, etc., but it is almost entirely independent of the value of n.

    b.)

    This time a piece of code inside a loop is executed several times. If n is twice as big, what can you say about the number of iterations?

    c.)

    Again, some code inside a loop, but this time the number of iterations is less than n. If n is sufficiently big, though, do you see the similarity to b.)?

    d.)

    This code is interesting. The operation inside the first loop is more sophisticated, but again it takes a more-or-less constant amount of time. So how many times is it executed in relation to n? Once again, compare with b.)

    The second loop is there only to trick you. For small n it might actually take more time than the first one. However, Big O notation is always about the behaviour for large values of n.

    e.)

    The last piece of code is actually pretty simple. The number of times the simple operation inside the loop runs is 2^n. Add 1 to n and you get twice as many operations.

  • 2020-12-17 23:30

    Big O indicates the order of the complexity of your algorithm.

    Basic things:

    • The complexity is measured with respect to the input size
    • You choose a unit operation (usually an assignment or a comparison)
    • You count how many times this operation is performed
    • Constant terms and constant factors are usually ignored, so if the number of operations is 3*n^3 + 12 it is simplified to n^3, also written O(n^3) (see the counting sketch right after this list)
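
    Here is the counting idea applied to a tiny made-up snippet, taking assignment as the unit operation:

    int n = 100;                    // some input size
    int sum = 0;                    // 1 assignment
    for (int i = 0; i < n; i++)     // 1 assignment for i = 0, plus n for i++
        sum = sum + i;              // n assignments
    // Total: 2n + 3 assignments; dropping constants and factors leaves O(n).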

    a.) Just runs once, with no loop; the complexity is trivially O(1).

    b.) The loop body is called n times: O(n).

    c.) Here we choose to analyze in terms of n (it is usually the variable the loop counter runs up to). The number of iterations is n - 6, so this is O(n).

    d.) Here n = 10 is the size of the array, and the second loop simply counts down from index 9 (n - 1) to 0. The first loop performs n operations and the second performs another n, so we have n + n operations, technically O(2*n), which simplifies to O(n). Both are called linear time; what differs is the slope of the line, which Big O does not care about.

    e.) The loop goes from 0 up to pow(2, n), i.e. it runs 2^n times, which is summarized as O(2^n).

  • 2020-12-17 23:40

    Assuming you don't count the cout as part of your Big-O measurement:

    a) O(1): you can perform the integer assignment in constant time.
    b) O(n): the loop takes n operations.
    c) O(n - c) = O(n): constants disappear in Big-O.
    d) O(2*n) = O(n): two linear-time loops in sequence are still linear time.
    e) The loop runs pow(2, n) = 2^n times, so the number of operations is O(2^n); if n were held constant, the loop would take O(k) operations, where k = 2^6 = 64, i.e. linear in k but constant with respect to n.
