Question:
We're learning efficiency analysis in our intro to computer science class and I'm having trouble solving this problem.
Suppose I have a method:
public static void foo(int[][] arr, int num1, int num2) {
    for (int i = 0; i < arr.length; i++) {
        arr[0][i] = num1 * i;
    }
    for (int j = 0; j < arr.length; j++) {
        arr[j][0] = num2 * j;
    }
}
My first question: if a method contains 3 for-loops that are NOT nested, what would the growth rate be?
Also, for this particular made-up method, would the input size be the area of the array, since each for loop runs from i = 0 up to the length of the array?
And finally, given this method:
public static void fee(int[][] arr, double num1, double num2) {
    num1 = num1 * Math.random();
    while (num1 == 0) {
        num1 = num1 * Math.random();
    }
    for (int i = 0; i < num1; i++) {
        // do something with arr
    }
    num2 = num2 * Math.random();
    while (num2 == 0) {
        num2 = num2 * Math.random();
    }
    for (int j = 0; j < num2; j++) {
        // do something with arr
    }
}
How would I go about finding its Big-O complexity?
Thanks. I've read multiple resources on finding Big-O, but I am still confused.
Answer 1:
In the first example, where you have 3 non-nested for loops (or any kind of loops, really), the complexity is simply O(x + y + z), where x, y, and z are the number of iterations of each loop (assuming constant-time work inside each). The sum only matters if we don't know which of the numbers will be the biggest: if we know, for example, that x > y and x > z, we can simply say the algorithm is O(x). In other words, the running time is always linear in x + y + z, but unless we know which of x, y, and z is the dominant term, we can't collapse it to a single variable. Put simply: nested = multiply, not nested = add.
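For intuition, here is a minimal sketch of the two cases (the class, method names, and iteration counts x, y, z are made up for illustration):

// Hypothetical sketch contrasting the two cases: loops that run one after
// another add their costs; nested loops multiply them.
public class LoopCosts {

    // Three non-nested loops: about x + y + z iterations in total, so O(x + y + z).
    static void sequential(int x, int y, int z) {
        for (int i = 0; i < x; i++) { /* constant-time work */ }
        for (int i = 0; i < y; i++) { /* constant-time work */ }
        for (int i = 0; i < z; i++) { /* constant-time work */ }
    }

    // Two nested loops: the inner body runs x * y times, so O(x * y).
    static void nested(int x, int y) {
        for (int i = 0; i < x; i++) {
            for (int j = 0; j < y; j++) { /* constant-time work */ }
        }
    }
}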
In the second example, random numbers are involved. As irrelephant pointed out in a comment, the code as written is, informally, O(infinity): if num1 (or num2) is 0 when the while loop is reached, the loop multiplies 0 by a random number forever; otherwise the loop body never runs at all. I will therefore assume you meant while (num2 != 0). Big-O is a worst-case bound, but with random numbers involved it is easiest to calculate the average (expected) running time, and the nice thing about time complexity is that if your estimate is slightly worse than the true cost, no-one cares.
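For concreteness, a minimal sketch of the second block under that assumption (the class and method names are mine; the only change to the question's code is the != 0 test):

// Assumed corrected version of the question's second block: the while
// condition tests != 0 instead of == 0.
public class FeeFixed {
    static void shrinkToZero(double num2) {
        num2 = num2 * Math.random();
        while (num2 != 0) {              // keeps shrinking until num2 underflows to 0
            num2 = num2 * Math.random();
        }
        // At this point num2 == 0, so a following
        // for (int j = 0; j < num2; j++) loop would run zero times.
    }
}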
TLDR: average time complexity is around 1000 + log_2 n, where n is num2.
NOTE: although we don't usually include constant factors in our time complexity calculations, when they get large enough, such as 1000, they can become the most significant factor, especially as it is likely that n << 2^1000.
Long explanation: Math.random() returns a value uniformly distributed in [0, 1), so we will take 0.5 as the average multiplier (the true expected shrinkage per step is actually slightly worse). Each halving either takes one bit off the mantissa or subtracts 1 from the exponent. The bit length of the mantissa is tiny compared with the range of the exponent, so consider only the exponent. A double's exponent field has 2^11 values, but half of them are positive and we only care about the negative ones, so it takes roughly 2^10 ~= 1000 halvings to underflow to zero. Getting the value down to about 1 in the first place takes roughly log_2 n halvings, so the answer is about 1000 + log_2 n.
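To sanity-check that estimate, here is a small sketch (the starting value is made up) that uses the average multiplier 0.5 in place of Math.random() and counts how many halvings a double survives before it underflows to 0; the constant comes out a little above 1000 because doubles also pass through the subnormal range:

// Count halvings of a double until it underflows to 0. With the average
// multiplier 0.5 this models the expected behaviour of the while loop above;
// the count is roughly (size of the negative exponent range) + log_2(n).
public class HalvingCount {
    public static void main(String[] args) {
        double n = 1_000_000.0;          // hypothetical value of num2
        double x = n;
        int halvings = 0;
        while (x != 0) {
            x = x * 0.5;
            halvings++;
        }
        System.out.println("halvings until zero: " + halvings);
        System.out.println("log_2(n): " + Math.log(n) / Math.log(2));
    }
}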
Answer 2:
You look at what the program does, and calculate how many primitive operations will be performed depending on your input. Sometimes that calculation is simple, sometimes it's hard. Usually it involves mathematics. Mathematics is tough. Life is tough.
In your first example, can you perhaps figure out how many assignments to arr[0][i] and how many assignments to arr[j][0] are being made?
In the second example, if num1 * Math.random() is 0, how often will the while loop get executed? (The answer may be an indication of a bug in that code.)
Source: https://stackoverflow.com/questions/27096068/big-o-analysis-for-method-with-multiple-parameters