Time complexity of the program using recurrence equation

Submitted by 蓝咒 on 2020-01-03 07:01:07

Question


I want to find the time complexity of this program using recurrence equations. That is:

int g(int x);   /* forward declaration: f and g are mutually recursive */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

I wrote its recurrence equation and tried to solve it, but it keeps getting more and more complex:

T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + 2c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + 3c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + 4c
     ...
     (after the k-th expansion)
     = T(n-k) + g(n-k+1) + ... + g(n-1) + g(n) + kc

Suppose the argument reaches 1 at the k-th step; then n - k = 1, so k = n - 1. Now I end up with:

T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)

I'm not able to solve it further. In any case, if we count the number of function calls in this program, it is easy to see that the time complexity is exponential, but I want to prove it using the recurrence. How can that be done?

The explanation in Answer 1 looks correct; it is similar to the work I did.

The most difficult task with this code is to write its recurrence equation. I have drawn another diagram and identified some patterns; I think this diagram can help us work out the possible recurrence equation.

And I came up with this equation; I am not sure whether it is right. Please help.

T(n) = 2*T(n-1) + c*log n
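To back up the call-counting observation above, here is a small instrumented sketch (my own measurement harness, not part of the original program; the cap n <= 25 just keeps the int results and the runtime small) that counts every call of f and g:

#include <stdio.h>

static long long calls;           /* incremented on every call of f or g */

static int g(int x);              /* f and g are mutually recursive */

static int f(int x)
{
    calls++;
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

static int g(int x)
{
    calls++;
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

int main(void)
{
    long long prev = 1;
    for (int n = 1; n <= 25; n++) {
        calls = 0;
        (void)f(n);
        printf("n = %2d  calls = %10lld  calls(n)/calls(n-1) = %.4f\n",
               n, calls, (double)calls / (double)prev);
        prev = calls;
    }
    return 0;
}

The last column creeps down toward 2, which already suggests roughly 2^n growth.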

Answer 1:


Ok, I think I have been able to prove that f(x) = Theta(2^x) (note that the time complexity is the same). This also proves that g(x) = Theta(2^x) as f(x) > g(x) > f(x-1).

First, as everyone noted, it is easy to prove that f(x) = Omega(2^x).

Now we have the relation f(x) = f(x-1) + g(x) = 2f(x-1) + g(x/2) <= 2f(x-1) + f(x/2) (since g(x) < f(x) for all x)

We will show that, for sufficiently large x, there is some constant K > 0 such that

f(x) <= K*H(x), where H(x) = (2 + 1/x)^x

This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x -> infinity (Wolfram Alpha link of the limit).
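That limit is easy to check numerically. A minimal sketch (my addition; it just rewrites H(x)/2^x as (1 + 1/(2x))^x and evaluates it in log space so nothing overflows):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* H(x)/2^x = ((2 + 1/x)/2)^x = (1 + 1/(2x))^x; evaluate in log
       space so neither pow(2 + 1/x, x) nor 2^x has to be formed. */
    for (double x = 10.0; x <= 1.0e7; x *= 10.0)
        printf("x = %10.0f   H(x)/2^x = %.8f\n",
               x, exp(x * log1p(1.0 / (2.0 * x))));
    printf("sqrt(e)        = %.8f\n", sqrt(exp(1.0)));
    return 0;
}

The printed values climb toward sqrt(e) ≈ 1.64872127, matching the claim.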

Now (warning: heavier math, perhaps cs.stackexchange or math.stackexchange is better suited)

according to Wolfram Alpha (click the link and see the series expansion near x = infinity),

H(x) = exp(x ln(2) + 1/2 + O(1/x))

And again, according to Wolfram Alpha (click the link (different from above) and see the series expansion for x = infinity), we have that

H(x) - 2H(x-1) = [1/2x + O(1/x^2)]exp(x ln(2) + 1/2 + O(1/x))

and so

[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity

Thus, for sufficiently large x (say x > L) we have the inequality

H(x) >= 2H(x-1) + H(x/2)

Now there is some K (dependent only on L; for instance K = f(2L)) such that

f(x) <= K*H(x) for all x <= 2L

Now we proceed by (strong) induction (you can revert to natural numbers if you want to)

f(x+1) <= 2f(x) + f((x+1)/2)

By induction, the right side is

<= 2*K*H(x) + K*H((x+1)/2)

And we proved earlier that

2*H(x) + H((x+1)/2) <= H(x+1)

Thus f(x+1) <= K * H(x+1)




Answer 2:


Using memoisation, both functions can easily be computed in O(n) time. But the program as given takes at least Omega(2^n) time, and thus is a very inefficient way of computing f(n) and g(n).
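For instance, a memoised sketch (my addition, using the base cases from the code in the question; the cap N = 60 is arbitrary but keeps the roughly 2^n-sized values inside a long long):

#include <stdio.h>

#define N 60                      /* arbitrary cap for this sketch */

static long long memo_f[N + 1];   /* 0 means "not computed yet"    */
static long long memo_g[N + 1];   /* (all true values are >= 1)    */

static long long g_memo(int x);   /* f and g are mutually recursive */

static long long f_memo(int x)
{
    if (x < 1) return 1;
    if (memo_f[x] == 0)
        memo_f[x] = f_memo(x - 1) + g_memo(x);
    return memo_f[x];
}

static long long g_memo(int x)
{
    if (x < 2) return 1;
    if (memo_g[x] == 0)
        memo_g[x] = f_memo(x - 1) + g_memo(x / 2);
    return memo_g[x];
}

int main(void)
{
    printf("f(40) = %lld\n", f_memo(40));  /* instant, vs ~2^40 calls naively */
    return 0;
}

The naive program recomputes the same subproblems exponentially often; the two arrays make every subproblem cost O(1) after its first visit.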

To prove that the program takes at most O((2+epsilon)^n) time for any epsilon > 0:

Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):

F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1

Then one can prove:

  • F and G are monotonic
  • F > G
  • Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
  • clearly, H > F
  • for all n, H(n) > 2 * H(n-1)
  • hence H(n/2) / H(n-1) -> 0 as n -> infinity
  • hence H(n) < (2 + epsilon) * H(n-1) for any epsilon > 0 and sufficiently large n (see the numeric check after this list)
  • hence H is in O((2 + epsilon)^n) for any epsilon > 0
  • (Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below)
  • so this is the best I can prove.... Because G < F < H, they are also in O((2 + epsilon)^n) for any epsilon > 0
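A numeric check of the last few bullets (my addition; double precision suffices since H(300) ≈ 2^300 is far below the double range):

#include <stdio.h>

int main(void)
{
    /* H(1) = 2; H(n) = 2*H(n-1) + H(n/2) + 1, tabulated iteratively. */
    static double H[301];
    H[1] = 2.0;
    for (int n = 2; n <= 300; n++)
        H[n] = 2.0 * H[n - 1] + H[n / 2] + 1.0;
    for (int n = 50; n <= 300; n += 50)
        printf("n = %3d   H(n)/H(n-1) = %.12f\n", n, H[n] / H[n - 1]);
    return 0;
}

The ratio decays toward 2, which is exactly the "shrinking epsilon" behaviour exploited in the postscript below.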

Postscript (after seeing Mr Knoothe's solution): because IMHO a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):

For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work for f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), using a fixed epsilon like above is not optimal. Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).




Answer 3:


Let f(0)=0 and g(0)=0

From the function we have,

f(x) = f(x - 1) + g(x) 
g(x) = f(x - 1) + g(x/2)

Substituting g(x) into f(x), we get

f(x) = f(x-1) + f(x-1) + g(x/2)

∴ f(x) = 2f(x-1) + g(x/2)

Expanding this, we get

f(x) = 2f(x-1) + f(x/2 - 1) + f(x/4 - 1) + ... + f(1)

Let s(x) be a function defined as follows:

s(x) = 2s(x-1), with s(1) = 1

Now clearly f(x) = Ω(s(x)), since dropping the nonnegative terms f(x/2 - 1), ..., f(1) above leaves f(x) >= 2f(x-1).

Unrolling the recurrence gives s(x) = 2^(x-1), so s(x) = Θ(2^x).

Therefore f(x) = Ω(2^x).




Answer 4:


I think it is easy to see that f(n) > 2^n, because f(n) > h(n), where h(n) = 2h(n-1) with h(0) = 1, whose solution is h(n) = 2^n.

Now I claim that for every n there is an ε such that f(n) < (2+ε)^n. To see this, let's do it by induction; but to make it more approachable, at first I'll use ε = 1 to show f(n) <= 3^n, and then I'll extend it.

We will use strong induction: suppose that for every m < n we have f(m) < 3^m. Then we have:

f(n) = 2f(n-1) + f(n/2 - 1) + f(n/4 - 1) + ... + f(0)

but for this part:

A = f(n/2 - 1) + f(n/4 - 1) + ... + f(0)

we have:

f(n/2) = 2f(n/2 - 1) + f(n/4 - 1) + ... + f(0) ==>

A <= f(n/2)   [1]

So we can rewrite f(n):

f(n) = 2f(n-1) + A <= 2f(n-1) + f(n/2)

Now let's get back to our claim: by the induction hypothesis,

f(n) <= 2*3^(n-1) + 3^(n/2)
     <= 2*3^(n-1) + 3^(n-1)
      = 3^n   [2]

By [2], the proof of f(n) ∈ O(3^n) is complete.

But if you want to extend this to the form (2+ε)^n, just run the same induction through inequality [1]; it goes through whenever

ε > 1/(2+ε)^(n/2 - 1)  →  f(n) < (2+ε)^n.   [3]

Also, by [3] you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact there is a constant ε such that for all n > n0, f(n) ∈ O((2+ε)^n). [4]

Now we can use Wolfram Alpha like @Knoothe, by setting ε = 1/n; then we have:

f(n) < (2 + 1/n)^n, which results in f(n) < e*2^n, and combined with our simple lower bound from the start: f(n) ∈ Θ(2^n). [5]
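That conclusion is easy to probe numerically (my addition): if f(n) = Θ(2^n), the value ratio f(n)/2^n must stay between two positive constants. A small tabulation of the recurrences, with the base cases from the question's code:

#include <stdio.h>

int main(void)
{
    enum { N = 60 };               /* 2^60-sized values still fit in long long */
    long long f[N + 1], g[N + 1];
    f[0] = 1; g[0] = 1; g[1] = 1;  /* base cases from the question's code */
    for (int n = 1; n <= N; n++) {
        if (n >= 2)
            g[n] = f[n - 1] + g[n / 2];
        f[n] = f[n - 1] + g[n];
    }
    for (int n = 10; n <= N; n += 10)
        printf("n = %2d   f(n)/2^n = %.6f\n",
               n, (double)f[n] / (double)(1ULL << n));
    return 0;
}

The ratio settles at a constant instead of drifting to 0 or infinity, consistent with [5].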

P.S.: I didn't calculate epsilon exactly; you can do it simply with pen and paper. I think this epsilon is not exact, but it is easy to find; if it turns out to be hard, tell me and I'll write it out.



Source: https://stackoverflow.com/questions/15595843/time-complexity-of-the-program-using-recurrence-equation
