I am learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm's running time proportionally.
Every time we write an algorithm or code, we try to analyze its asymptotic complexity. It is different from the actual execution time.
Asymptotic complexity describes how the execution time of an algorithm behaves as the input grows, while time complexity (as used here) is the actual execution time. Some people use these terms interchangeably.
Because time complexity depends on various parameters, viz.
1. Physical system
2. Programming language
3. Coding style
4. And much more ...
the actual execution time is not a good measure for analysis.
Instead, we take the input size as the parameter: whatever machine, language, or style runs the code, the input stays the same. So the execution time becomes a function of the input size.
Following is an example of a linear-time algorithm:
Linear Search
Given n input elements, searching for an element in the array needs at most n comparisons. In other words, no matter what programming language you use, what coding style you prefer, or what system you execute it on, in the worst case it requires only n comparisons. The execution time is linearly proportional to the input size.
And it's not just search: whatever the work may be (increment, compare, or any other operation), it is a function of the input size.
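Here is a minimal sketch of that idea (assuming a plain Python list; the function name and the comparison counter are ours for illustration). The worst-case work grows one-for-one with n:

```python
def linear_search(arr, target):
    """Scan front to back; the worst case examines all n elements."""
    comparisons = 0
    for index, value in enumerate(arr):
        comparisons += 1               # one unit of work per element examined
        if value == target:
            return index, comparisons  # found: position and work done
    return -1, comparisons             # not found: n comparisons, the worst case

# The comparison count depends only on the input size, not on the machine.
print(linear_search([3, 1, 4, 1, 5], 5))  # (4, 5): 5 comparisons for 5 elements
```

Doubling the list doubles the worst-case comparison count, which is exactly what O(n) captures.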
So when you say an algorithm is O(log n), it means the execution time grows in proportion to the logarithm of the input size n.
As the input size increases, the work done (here, the execution time) increases. (Hence the proportionality.)
n    Work
2    1 unit of work
4    2 units of work
8    3 units of work
See, as the input size increases, the work done increases, and this relationship is independent of any machine. If you try to pin down the actual duration of one unit of work, you will find it does depend on the parameters specified above; it will change from system to system.
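A short sketch of that table (a halving loop like the one at the heart of binary search; the function name is ours for illustration) shows the unit-of-work count growing as log2 of n:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1 (~ log2 n)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (2, 4, 8, 1024):
    print(n, halving_steps(n))  # prints: 2 1, 4 2, 8 3, 1024 10
```

Going from n = 8 to n = 1024 multiplies the input by 128 yet adds only 7 more units of work; that slow growth is what O(log n) describes.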