I work as a programmer, but have no computer science background, so recently I've been following along with the excellent MIT OpenCourseWare intro to Computer Science and Programming.
The really important thing to consider is how the time scales as a function of the number of elements. "Constant time" means that the time remains the same no matter how many elements are involved (a layman's explanation).
If the program runs forever, then it doesn't complete in a known amount of time, because it doesn't complete. We are applying the concept of "constant time" to the run of the entire program, not to each individual step.
"constant time" means "time not depending on the amount of input".
Constant time here means not dependent on the number of inputs (not on the input itself), and if you aren't allowed for loops or goto, the only way to make the time depend on the number of inputs is through conditionals and recursion. Although you could argue that recursion isn't even necessary, with some debatable solutions, e.g. (in C):
if(ReadInput()) DoSomeThing();
if(ReadInput()) DoSomeThing();
if(ReadInput()) DoSomeThing();
if(ReadInput()) DoSomeThing();
In "constant time" generally means that the time it will take to compute the result is independent of the size of the input.
For example, calculating the length of a list/vector in most managed languages is done in constant time, no matter how large the list is. The size is stored as a separate field and is updated as the list grows and shrinks, so retrieving the count is as simple as reading that field.
Calculating the size of a doubly linked list, by contrast, is very often not constant time. The list can be mutated at both ends, so there is often no single place to store a count; determining the length requires traversing the list and counting how many elements are in it. As the list grows, so does the time it takes to compute the answer.
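To make that concrete, here's a minimal Python sketch (the class names are illustrative, not from any particular library). The first list maintains a size field, so asking for the length is a single field read; the second has no such field and must visit every node:

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next_node = next_node

class CountedList:
    # Maintains a size field, so length() is O(1).
    def __init__(self):
        self.head = None
        self.size = 0

    def push(self, value):
        self.head = Node(value, self.head)
        self.size += 1        # updated on every mutation

    def length(self):
        return self.size      # constant time: one field read

class PlainList:
    # No size field, so length() must traverse: O(n).
    def __init__(self):
        self.head = None

    def push(self, value):
        self.head = Node(value, self.head)

    def length(self):
        count, node = 0, self.head
        while node is not None:   # visits every element
            count += 1
            node = node.next_node
        return count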
"Constant time" means that the operation will execute in an amount of time (or memory space - that's another thing often measured) independent of the input size. Usually you pick a variable (let's use n
) to indicate the input size.
O(1)
- constant time - running time does not depend on n
O(n)
- linear time - running time is linearly proportional to n
O(n^2)
- quadratic time - running time is proportional to the square of n
These are just a few examples; the possibilities are endless. See the Wikipedia article on computational complexity.
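To give a flavour of quadratic time, here's a hypothetical sketch (not from the course): counting duplicate pairs compares every pair of elements, so the inner loop runs up to n times for each of the n outer iterations.

def count_duplicate_pairs(items):
    # O(n^2): every pair of elements is compared once.
    n = len(items)
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                pairs += 1
    return pairs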
Here are a few specific ways that a program composed of only the operations you mention could take various amounts of time:
n = ...  # some value

doSomething()
doSomething()
doSomething()
Note how it is three somethings in length, independent of what n is: O(1).
n = ...  # some value

def f(n):
    if n == 0: return
    doSomething()
    f(n - 1)

f(n)
Now we run a something for each of the values n down to 1 (linear time, O(n)).
And we can have a bit of fun -
n = ...  # some value

def f(n):
    if n == 0: return
    doSomething()
    f(n - 1)
    f(n - 1)

f(n)
What's the running time here? (i.e. how many somethings do we execute?) :)
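If you want to check your answer empirically, here's a sketch in Python (with a counter standing in for the abstract doSomething):

calls = 0

def do_something():
    global calls
    calls += 1       # stand-in for the abstract doSomething

def f(n):
    if n == 0:
        return
    do_something()
    f(n - 1)
    f(n - 1)

for n in range(1, 8):
    calls = 0
    f(n)
    print(n, calls)  # watch how the count grows with n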
Constant time effectively means you can give a constant upper bound to how long the program will take to run, one which isn't affected by any of the input parameters.
Compare that with, say, linear time (for some input n - which will often actually be the size of the input data rather than a direct value), which means that the upper bound of the time taken can be expressed as mn + k for some values of m and k.
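As a made-up concrete instance, the function below does a constant amount of setup work (the k) plus a fixed handful of operations per element (the m), so its running time is bounded by mn + k:

def sum_and_extremes(values):
    # Linear time: fixed work per element plus constant setup.
    if not values:                              # constant-time guard
        return 0, None, None
    total, low, high = 0, values[0], values[0]  # constant setup: the k
    for v in values:                            # loop body runs n times
        total += v                              # a fixed handful of
        low = min(low, v)                       # operations per element:
        high = max(high, v)                     # the m
    return total, low, high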
Note that it doesn't mean a program will take the same amount of time for any input data just because it runs in constant time. For example, consider this method:
int foo(int n)
{
    if (n == 0)
    {
        return 0;
    }
    int j = n + 1;
    int k = j * 2;
    return k;
}
That's doing more work in the case where n is non-zero than in the case where it's zero. However, it's still constant time - at most, it's going to do one comparison, one addition, and one multiplication.
Now compare that with a recursive function:
public int foo(int n)
{
    if (n <= 1)
    {
        return 1;
    }
    return n * foo(n - 1);
}
This will recurse n times - so it's linear in n. You can get much worse than linear, however. Consider this method for computing a Fibonacci number:
public int fib(int n)
{
    if (n == 0)
    {
        return 0;
    }
    if (n == 1)
    {
        return 1;
    }
    return fib(n - 2) + fib(n - 1);
}
That doesn't look much worse than the previous version - but this is now exponential (the upper bound is most easily expressed as O(2^n)). It's still only using simple comparisons, addition, and function calls, though.
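One way to see the blow-up for yourself is to count the calls. Here's a sketch (a Python translation of the method above, instrumented with a call count):

def fib_calls(n):
    # Returns (fib(n), number of calls made), mirroring the Java version.
    if n == 0:
        return 0, 1
    if n == 1:
        return 1, 1
    a, calls_a = fib_calls(n - 2)
    b, calls_b = fib_calls(n - 1)
    return a + b, calls_a + calls_b + 1

for n in range(0, 21, 5):
    value, calls = fib_calls(n)
    print(n, value, calls)   # the call count grows exponentially with n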