asymptotic-complexity

Difference between Big-O and Little-O Notation

Question: What is the difference between Big-O notation, O(n), and little-o notation, o(n)?

Answer 1: f ∈ O(g) says, essentially: for at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a. Note that O(g) is the set of all functions for which this condition holds.

f ∈ o(g) says, essentially: for every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a. Once again, note that o(g) is the set of all functions for which this condition holds.
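A minimal numeric sketch of the distinction (the sample functions, constants, and finite sampling range are illustrative assumptions, not part of the answer): f(x) = 2x is in O(x) because a single constant such as k = 3 works, but it is not in o(x) because the choice k = 1 already fails.

    def holds_eventually(f, g, k, a=1000, upto=10**6):
        # Finite proxy for "0 <= f(x) <= k*g(x) for all x > a".
        return all(0 <= f(x) <= k * g(x) for x in range(a + 1, upto, 997))

    f = lambda x: 2 * x   # f(x) = 2x
    g = lambda x: x       # g(x) = x

    print(holds_eventually(f, g, k=3))   # True: one k suffices, so f is in O(g)
    print(holds_eventually(f, g, k=1))   # False: 2x <= x fails, so f is not in o(g)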

Is O(n) greater than O(2^log n)?

Question: I read, in the complexity-hierarchy diagram of a data structures book, that n is greater than 2^(log n), but I cannot understand how and why. Using simple powers of 2 as n, I get values equal to n. The base is not mentioned in the book, but I am assuming base 2 (as the context is data-structure complexity).

a) Is O(n) > O(pow(2, log n))?
b) Is O(pow(2, log n)) better than O(n)?

Answer 1: Notice that 2^(log_b n) = 2^(log_2 n / log_2 b) = n^(1 / log_2 b). If log_2 b >= 1 (that is, b >= 2), then this entire expression is at most n, because the exponent 1 / log_2 b is at most 1. In particular, for b = 2 the exponent is exactly 1, so 2^(log_2 n) = n and the two bounds coincide, which matches your observation on powers of 2.
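A quick check of that identity (the sample bases and values of n are my choices):

    import math

    for b in (2, 4, 10):
        for n in (16, 1024, 10**6):
            lhs = 2 ** math.log(n, b)        # 2^(log_b n)
            rhs = n ** (1 / math.log2(b))    # n^(1 / log_2 b)
            print(b, n, round(lhs, 2), round(rhs, 2))
    # For b = 2 both equal n; for b > 2 they are strictly smaller than n.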

Adding a log in asymptotic analysis

Question: I have a problem I'm trying to work through and would very much appreciate some assistance! What's the time complexity of the following?

    for (int j = 1 to n) {
        k = j;
        while (k < n) {
            sum += a[k] * b[k];
            k += log n;
        }
    }

The outer for loop runs n times; I'm not sure how to deal with k += log n in the inner loop. My thought is that it's O(n^2): adding log(n) to k isn't quite giving an additional n loops each time, but I think it is less than O(n log n) would be. Obviously, that's just a guess, and any help in figuring it out would be appreciated!
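An empirical sketch (assuming an integer step and log base 2, neither of which the question pins down): the inner loop runs about (n - j) / log n times for each j, so the total is roughly n^2 / (2 log n), i.e. O(n^2 / log n), which sits between O(n log n) and O(n^2).

    import math

    def count_iters(n):
        step = max(1, int(math.log2(n)))   # "k += log n", rounded to an integer
        total = 0
        for j in range(1, n + 1):
            k = j
            while k < n:
                total += 1
                k += step
        return total

    for n in (2**10, 2**13):
        predicted = n * n / (2 * math.log2(n))
        print(n, count_iters(n), round(predicted))   # counts track n^2 / (2 log n)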

What does O(O(f(n))) mean?

Question: I understand Big-Oh notation, but how do I interpret what O(O(f(n))) means? Does it mean the growth rate of the growth rate?

Answer 1: x = O(n) basically means x <= kn for some constant k. Thus x = O(O(n)) means x <= p O(n) for some constant p, which means x <= pqn for some constant q. Let k = pq. Then x = O(O(n)) = O(n). In other words, O(O(f(n))) = O(f(n)). I am curious, where did you see such notation being used?

Answer 2: From a Big-Oh point of view: g(n) = O(f(n))
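Concrete witnesses for the constants p, q, k in that argument (the sample functions are mine, purely for illustration):

    g = lambda n: 3 * n + 5      # g is O(n): g(n) <= 4n for n >= 5, so q = 4
    h = lambda n: 4 * g(n) + 7   # h(n) <= 5*g(n) for n >= 1, so p = 5 and h is "O(O(n))"
    k = 5 * 4                    # k = pq from the answer's argument
    print(all(h(n) <= k * n for n in range(10, 10**5)))   # True: h is O(n) directly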

Would this algorithm run in O(n)?

Question (note: this is problem 4.3 from Cracking the Coding Interview, 5th Edition): Given a sorted (increasing order) array, write an algorithm to create a binary search tree with minimal height. Here is my algorithm, written in Java, for this problem:

    public static IntTreeNode createBST(int[] array) {
        return createBST(array, 0, array.length-1);
    }

    private static IntTreeNode createBST(int[] array, int left, int right) {
        if (right >= left) {
            int middle = array[(left + right)/2];
            IntTreeNode root =
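A sketch of the same divide-in-half construction in Python (this is my rendering, not the asker's Java; the Node class is illustrative). Each array element is visited exactly once, giving the recurrence T(n) = 2T(n/2) + O(1), which solves to O(n).

    class Node:
        def __init__(self, value):
            self.value, self.left, self.right = value, None, None

    def create_bst(array, left=0, right=None):
        if right is None:
            right = len(array) - 1
        if right < left:
            return None
        middle = (left + right) // 2        # root the subtree on the middle element
        root = Node(array[middle])
        root.left = create_bst(array, left, middle - 1)
        root.right = create_bst(array, middle + 1, right)
        return root

    tree = create_bst([1, 2, 3, 4, 5, 6, 7])
    print(tree.value)   # 4: the middle element, so the tree has minimal height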

Python - convert list into dictionary in order to reduce complexity

Question: Let's say I have a big list:

    word_list = [elt.strip() for elt in open("bible_words.txt", "r").readlines()]
    # complexity O(n) --> proportional to the list length n

I have learned that the hash function used to build dictionaries makes lookup much faster, like so:

    word_dict = dict((elt, 1) for elt in word_list)
    # lookup complexity O(1) --> constant

Using word_list, is there a recommended, more efficient way to reduce the complexity of my code?

Answer 1: The code from the question does
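A quick timing sketch of why the hash-based container wins on membership tests (the synthetic word list and counts are my stand-ins for the asker's file):

    import timeit

    words = [f"word{i}" for i in range(100_000)]
    word_set = set(words)   # a set suffices when only membership matters

    t_list = timeit.timeit(lambda: "word99999" in words, number=100)     # O(n) scan
    t_set = timeit.timeit(lambda: "word99999" in word_set, number=100)   # O(1) hash lookup
    print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")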

Compare Big O Notation

Question: Sorting an n-element array takes 10^-8 * n^2 seconds with algorithm X, 10^-6 * n * log2(n) seconds with algorithm Y, and 10^-5 seconds with algorithm Z. How do I compare them? For example, at what number of elements does Y start to work faster than X?

Answer 1: When comparing Big-Oh notations, you ignore all constants: N^2 has a higher growth rate than N*log(N), which still grows more quickly than O(1) [constant]. The power of N determines the growth rate. Example: O(n^3 + 2n + 10)
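A numeric sketch of the comparison (the cost models come straight from the question; the sampled sizes are mine). Setting the X and Y costs equal gives 10^-8 * n^2 = 10^-6 * n * log2(n), i.e. n = 100 log2(n), which a quick search puts at n ≈ 997:

    import math

    X = lambda n: 1e-8 * n * n
    Y = lambda n: 1e-6 * n * math.log2(n)
    Z = lambda n: 1e-5            # constant, independent of n

    for n in (10, 100, 996, 997, 10_000):
        costs = {"X": X(n), "Y": Y(n), "Z": Z(n)}
        print(n, min(costs, key=costs.get), costs)
    # X beats Y up to n = 996 and Y beats X from n = 997 on;
    # the constant-time Z is fastest once n is large enough.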

Analyzing an exponential recursive function

Question: I am trying to calculate the complexity of the following exponential recursive function. The isMember() and isNotComputed() functions reduce the number of recursive calls. The output of this code is the set of arrays A[] and B[], which are printed at the start of each recursive call. I would appreciate any input on developing a recurrence relation for this problem, which would lead to an analysis of the program. Without the functions isMember() and isNotComputed(), this code has the complexity
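The question's actual code is not reproduced here, but the pattern it describes — an exponential recursion cut down by an "already computed?" guard — generally behaves like the following entirely hypothetical sketch: the guard turns one call per path into one call per distinct state.

    computed = set()

    def explore(state):
        if state in computed:      # plays the role of isNotComputed()
            return
        computed.add(state)
        for nxt in successors(state):
            explore(nxt)

    def successors(state):
        # placeholder: two branches per state, bounded depth
        return [state + 1, state + 2] if state < 10 else []

    explore(0)
    print(len(computed))   # 12 distinct states instead of ~2^10 raw calls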

How to show that the solution of the recurrence T(n) = T(n/2) + T(n/4), T(1) = 0, T(2) = 1, is T(n) = Θ(n^(lg φ)), where φ is the golden ratio?

Question: I tried the recursion-tree method, since the master method is not applicable to this recurrence, but it seems that it is not the right method either. Any help would be appreciated!

Answer 1: Either I have an error somewhere in my derivation or there is an error in your statement. You do this by unrolling the recursion:

    T(n) = T(n/2) + T(n/4)
         = 2T(n/4) + T(n/8)
         = 3T(n/8) + 2T(n/16)
         = 5T(n/16) + 3T(n/32)
         ...
         = F(i+1) T(n/2^(i-1)) + F(i) T(n/2^i)

where F(i) is a Fibonacci number. Using
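A numeric sanity check of the claimed bound (the integer-halving convention for odd n is my assumption; the statement only fixes T(1) and T(2)): if T(n) = Θ(n^(lg φ)), the ratio T(n) / n^(lg φ) should stabilize as n grows.

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 0
        if n == 2:
            return 1
        return T(n // 2) + T(n // 4)

    phi = (1 + math.sqrt(5)) / 2
    for k in (10, 20, 30, 40):
        n = 2 ** k
        print(k, T(n) / n ** math.log2(phi))   # settles near 1/(sqrt(5)*phi) ≈ 0.276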

Question about big O and big Omega

Question: I think this is probably a beginner question about big-O notation. Say, for example, I have an algorithm that breaks apart an entire list recursively (O(n)) and then puts it back together (O(n)). I assume that this means the efficiency is O(n) + O(n). Does this simplify to 2O(n), O(2n), or O(n)? From what I know about this notation, it would be O(2n), and using the rules of asymptotic notation you can drop the 2, giving an efficiency of O(n). If we were trying to find a lower bound, though
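A one-line justification in the style of the definitions from the first entry above: if f(n) <= k1*n for all n > a1 and g(n) <= k2*n for all n > a2, then f(n) + g(n) <= (k1 + k2)*n for all n > max(a1, a2), so O(n) + O(n) = O(n) with witness constant k = k1 + k2. The same argument, with the inequalities flipped, shows that a sum of two Ω(n) terms is still Ω(n).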