I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is n^2 / 2, and that is still O(n^2).
Yes, this algorithm is O(n^2).
To reason about complexity, it helps to keep a table of the common complexity classes in mind:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n^2), and so on, but not the reverse. Now, your algorithm executes n*n/2 statements, and:

O(n) < O(n log n) < O(n²/2) = O(n²)

So the smallest class in the table that contains your algorithm's complexity is O(n²), because O(n) and O(n log n) are too small.

For example, with n = 100: n = 100 < n·log₂ n ≈ 664 < n²/2 = 5000 < n² = 10000.
Your algorithm is equivalent to

while ((i += 2) < n * n)
    ...

which is O(n^2 / 2), and that is the same as O(n^2), because big-O complexity does not care about constant factors.
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
Big-O gives you information about upper bounds.

I believe you are really trying to ask why the complexity is Θ(n^2) or Ω(n^2), but if you're just trying to understand what big-O is, you need to understand first and foremost that it gives upper bounds on functions.
You will have exactly n*n/2 loop iterations (or (n*n - 1)/2 if n is odd). In big-O notation, O((n*n - 1)/2) = O(n*n/2) = O(n*n), because constant factors "don't count".
Big-O complexity ignores coefficients. For example, O(n), O(2n), and O(1000n) all describe the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.

In your situation, you're essentially incrementing your loop counter by 2 each time through the loop (since j-- has the same effect on the distance between i and j as i++ does). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.