If the time complexity of my program is, say, O(n^2), how do I express the running time in seconds for a large value of n?
You need to know roughly how long one of your base tasks takes in order to estimate the running time of different algorithms.
As an example, let's imagine your base task is
void func() { sleep(1); }
Now you know that an O(1) algorithm will make just one call to func(), which will take 1 s.
Looking at other examples:
O(1)   -> 1 * 1s
O(N)   -> N * 1s
O(N^2) -> N^2 * 1s
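Under the same assumption (a base task costing one second per call), the table above can be sketched as a one-liner. The function name and the one-second default are hypothetical, just mirroring the example:

```python
def estimated_seconds(n, complexity, base_task_seconds=1.0):
    # Number of base-task calls implied by the complexity class,
    # multiplied by the (assumed) cost of one call.
    calls = {"O(1)": 1, "O(N)": n, "O(N^2)": n ** 2}[complexity]
    return calls * base_task_seconds

print(estimated_seconds(10, "O(N^2)"))  # 100 calls * 1 s each -> 100.0
```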
Without having a rough estimation of your task's execution time, it is impossible to give a precise answer.
There is no way to calculate or estimate the running time of a piece of code based on its Big-O rating alone.
Big-O tells you how a method scales in terms of the number of operations to perform. It says nothing about how long one operation takes to execute. Additionally, CPUs may be good or bad at executing some of those operations in parallel, which makes estimation even harder.
The only way to figure out whether you have a performance bottleneck is to measure: run the code with realistic inputs and time it.
If you also know the Big-O rating of that code, you can use it to predict how much worse the bottleneck gets when you, for example, double the number of items to process: for O(N^2), doubling N quadruples the work.
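One way to check that scaling empirically is to time the code at n and at 2n and compare the two. A minimal sketch, with a hypothetical quadratic `workload()` standing in for the real code:

```python
import time

def workload(n):
    # Hypothetical O(n^2) stand-in for the code under test.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

def best_time(n, repeats=5):
    # Minimum over several runs suppresses scheduler noise better than the mean.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload(n)
        times.append(time.perf_counter() - start)
    return min(times)

ratio = best_time(600) / best_time(300)
print(f"doubling n multiplied the runtime by about {ratio:.1f}")
```

A ratio near 4 suggests quadratic scaling; a ratio near 2 suggests linear.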
Big O is already pretty rough. Still, for n = 10^6 an O(n^2) algorithm means executing something approximately 10^12 times, or a trillion iterations.
For example, an i7-4770K manages about 127,273 million instructions per second at 3.9 GHz. This is a pretty meaningless metric in general, but since we are estimating very roughly, it will have to do.
At one instruction per iteration, it would take around 8 seconds to complete.
In reality, you likely need several instructions per iteration, but you also probably have fewer iterations (such as n/2). If you were to give us example code, I could make a better guess.
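The back-of-the-envelope arithmetic above can be written out directly; the 127,273 MIPS figure is the answer's own (very rough) assumption:

```python
ips = 127_273e6        # i7-4770K, rough instructions-per-second figure
iterations = 10 ** 12  # a trillion iterations, e.g. n = 10^6 under O(n^2)
instructions_per_iteration = 1

seconds = iterations * instructions_per_iteration / ips
print(f"roughly {seconds:.1f} s")  # ≈ 7.9 s, i.e. "around 8 seconds"
```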
The easiest way to get the running time is to benchmark your algorithm with the given n (run it several times and use the mean). If that takes longer than the time you have allocated for estimating the runtime, you need to approximate instead.
You can approximate the runtime of an algorithm with O(n^x) (polynomial) complexity using the formula:

c_x * n^x + c_(x-1) * n^(x-1) + ... + c_1 * n + c_0

where the coefficients c_x ... c_0 may be any values. They depend on the specifics of the algorithm, your CPU, the scheduler state, and a lot of other things.
You can estimate these coefficients by running your code with values of n small enough not to exceed the time you have allocated for the estimation, and recording the timings. Fitting a polynomial regression model to those timings gives you the coefficients, which you can then plug into the formula above to approximate the runtime for any value of n. The accuracy of the estimate depends on how many measurements you gathered, on how much larger the n you are extrapolating to is than the values of n you measured, and on the order of the complexity (higher than quadratic may not be very useful).
The polynomial regression method itself is beyond the scope of this question; I recommend reading a statistics textbook for it.
Of course, these measurements and estimates only apply to your implementation of the algorithm running on your hardware; they are not comparable to measurements of other implementations or to measurements taken on other hardware.