What's the complexity of a recursive program to find the factorial of a number n? My hunch is that it might be O(n).
The time-complexity of recursive factorial would be:
factorial(n) {
    if (n == 0)
        return 1
    else
        return n * factorial(n - 1)
}
So the recurrence for one recursive call is:

T(n) = T(n-1) + 3   (3 accounts for the three constant-time operations in each call: checking the value of n, the subtraction, and the multiplication)
     = T(n-2) + 6   (second recursive call)
     = T(n-3) + 9   (third recursive call)
     ...
     = T(n-k) + 3k

until k = n. Then,

     = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n
To represent this in Big-O notation: T(n) is directly proportional to n, so the time complexity of recursive factorial is O(n). Each recursive call also adds a frame to the call stack, and the recursion goes n levels deep, so the space complexity is O(n) as well.
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1)
operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1)
, and you would compute the complexity according to the number of entries in each matrix.
However, when you want to figure out the complexity of an algorithm to compute N!, you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.
If you want to multiply an n-bit number by an m-bit number, the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
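The O(mn) bound comes from the nested digit-by-digit loops of the "by hand" method. Here is a hedged sketch of that schoolbook algorithm, operating on numbers represented as lists of base-10 digits (least significant first); the representation and the function name are illustrative choices, not from the original answer:

```python
# Schoolbook multiplication: every digit of a meets every digit of b,
# so an m-digit by n-digit product costs O(m*n) digit operations.
def naive_mul(a, b):
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):          # m outer iterations
        carry = 0
        for j, db in enumerate(b):      # n inner iterations -> O(m*n) total
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:   # strip leading zeros
        result.pop()
    return result

# 12 * 34 = 408; digits are stored least-significant-first
print(naive_mul([2, 1], [4, 3]))   # [8, 0, 4]
```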
If you want to analyze the complexity of the easy algorithm for computing N!:

factorial(N)
    f = 1
    for i = 2 to N
        f = f * i
    return f
then at the k-th step of the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k), and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2), assuming you use the naive multiplication algorithm. The total time taken by the algorithm is then the sum of the time taken at each step:

sum_{k=1}^{N} k (log k)^2 <= (log N)^2 * sum_{k=1}^{N} k = O(N^2 (log N)^2)
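The claim that (k-1)! needs O(k log k) bits follows from Stirling's approximation (log2(k!) = k log2 k - O(k)), and it is easy to check empirically with Python's built-in big integers; this snippet is only an illustration of that growth, not part of the analysis above:

```python
import math

# The bit length of k! grows like Theta(k log k), which is why each
# multiplication in the factorial loop gets more expensive as k grows.
for k in (10, 100, 1000):
    print(k, math.factorial(k).bit_length())
```

For k = 100, for instance, 100! already takes 525 bits, far beyond any fixed machine word.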
You could improve this performance by using a faster multiplication algorithm, like Schönhage–Strassen, which takes time O(n log n log log n) for two n-bit numbers.
The other way to improve performance is to use a better algorithm to compute N!
. The fastest one that I know of first computes the prime factorization of N!
and then multiplies all the prime factors.
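One way to compute N! from its prime factorization is via Legendre's formula: the exponent of a prime p in N! is sum over i >= 1 of floor(N / p^i). The sketch below combines a sieve with that formula; it is a minimal illustration of the idea, not the fastest known variant mentioned above:

```python
# Compute n! from its prime factorization.
# Legendre's formula: exponent of prime p in n! is sum_{i>=1} floor(n / p^i).
def prime_factorization_factorial(n):
    is_prime = [True] * (n + 1)   # Sieve of Eratosthenes over 2..n
    result = 1
    for p in range(2, n + 1):
        if not is_prime[p]:
            continue
        for multiple in range(p * p, n + 1, p):
            is_prime[multiple] = False
        # Legendre's formula for the exponent of p in n!
        exponent, power = 0, p
        while power <= n:
            exponent += n // power
            power *= p
        result *= p ** exponent
    return result

print(prime_factorization_factorial(10))   # 3628800
```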
Assuming you're talking about the most naive factorial algorithm ever:
factorial(n):
    if (n = 0) then return 1
    otherwise return n * factorial(n - 1)
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once every time it decrements the value n, and it decrements n until it reaches 0, meaning the function is called recursively n times. This is assuming, of course, that both decrementing and multiplication are constant-time operations.
Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
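As a concrete example of such an inadvisable variant, here is a hypothetical sketch (not from the answer above) that replaces each multiplication with repeated addition; since computing n * m by addition takes n steps, the O(n) recursion becomes roughly O(n^2) additions overall:

```python
# Factorial with multiplication done via repeated addition:
# each level performs n additions instead of one multiply,
# so the total work is about 1 + 2 + ... + n = O(n^2) additions.
def slow_factorial(n):
    if n == 0:
        return 1
    smaller = slow_factorial(n - 1)
    total = 0
    for _ in range(n):      # n additions replace n * smaller
        total += smaller
    return total

print(slow_factorial(5))    # 120
```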
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware: as x tends to infinity, the time needed for multiplication grows (e.g., if you use Karatsuba multiplication, it's O(x^1.585)).
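Karatsuba's trick is to split each operand in half and get the cross terms with one recursive multiplication instead of two, giving the O(x^log2(3)) ≈ O(x^1.585) bound. A hedged sketch on Python integers (splitting by decimal digits for readability; the function name is illustrative):

```python
# Karatsuba multiplication: 3 recursive multiplies on half-size operands
# instead of 4, via (a+b)(c+d) - ac - bd = ad + bc.
def karatsuba(x, y):
    if x < 10 or y < 10:          # base case: single-digit operand
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    a, b = divmod(x, base)        # x = a*base + b
    c, d = divmod(y, base)        # y = c*base + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd   # the one extra multiply
    return ac * base * base + ad_plus_bc * base + bd

print(karatsuba(1234, 5678))   # 7006652
```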
You can theoretically do better for sufficiently huge numbers with Schönhage–Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.
If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.