I want to ask: which piece of code is more efficient in Java?

Code 1:

void f()
{
    for (int i = 0; i < 99999; i++)
    {
        for (int j = 0; j < 99999; j++)
        {
        }
    }
}

Code 2:

void f()
{
    int i, j;
    for (i = 0; i < 99999; i++)
    {
        for (j = 0; j < 99999; j++)
        {
        }
    }
}
I would prefer the first over the second because it keeps the loop variables out of the way of the rest of the code in the method. Since they're not visible outside of the loop, you can't accidentally refer to them later on.
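To make that concrete, here is a minimal sketch with two hypothetical methods of my own (not from the question); the compiler itself enforces the narrower scope:

void scoped() {
    for (int j = 0; j < 10; j++) {
        // work
    }
    // System.out.println(j);   // would not compile: j is out of scope here
}

void unscoped() {
    int j;
    for (j = 0; j < 10; j++) {
        // work
    }
    System.out.println(j);      // compiles and prints 10, an easy accident
}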
The other answers are right, too: don't worry about this sort of thing for performance. But do think about it for code readability reasons, and for communicating programmer intent to the next person who comes along. This is much more important than micro-optimization concerns.
Now, that's at the Java language (as in Java Language Specification) level. At the Java Virtual Machine level, it makes absolutely no difference which of those two you use. The locals are allocated in exactly the same way.
If you're not sure, you can always compile it and see what happens. Let's make two classes, f1 and f2, for the two versions:
$ cat f1.java
public class f1 {
    void f() {
        for (int i = 0; i < 99999; i++) {
            for (int j = 0; j < 99999; j++) {
            }
        }
    }
}
$ cat f2.java
public class f2 {
    void f() {
        int i, j;
        for (i = 0; i < 99999; i++) {
            for (j = 0; j < 99999; j++) {
            }
        }
    }
}
Compile them:
$ javac f1.java
$ javac f2.java
And decompile them:
$ javap -c f1 > f1decomp
$ javap -c f2 > f2decomp
And compare them:
$ diff f1decomp f2decomp
1,3c1,3
< Compiled from "f1.java"
< public class f1 extends java.lang.Object{
< public f1();
---
> Compiled from "f2.java"
> public class f2 extends java.lang.Object{
> public f2();
There's absolutely no difference in the bytecode.
First off, yes, your teacher is wrong: the second version is not better. (What does "better" even mean here?) In any normal loop, the operations inside the loop body are the time-consuming part, so Code 2 is at best a micro-optimization, and it doesn't add enough speed (if any) to justify the worse readability.
No, it does not make a difference at all speed-wise. They both get compiled into the same bytecode, and there is no allocation and deallocation going on, contrary to what MasterGaurav said.
When a method is invoked, the JVM allocates a stack frame with enough slots for all of its local variables, and no further allocation occurs until the method returns.
The only tiny, insignificant difference (other than the scope) is that in the first example the slots used for i and j can be reused for other variables once the loops finish, so the compiler may reserve fewer slots for the method (well, you saved a few bytes).
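To illustrate that slot-reuse point with a hypothetical method of my own (not from the answer above): when two block-scoped variables never coexist, javac may give them the same local-variable slot.

void slots() {
    {
        int a = 1;              // 'a' is live only inside this block
        System.out.println(a);
    }
    {
        int b = 2;              // 'a' is out of scope, so javac may reuse its slot for 'b'
        System.out.println(b);
    }
    // Declared up front as 'int a, b;', both variables stay in scope for the
    // whole method and each needs its own slot.
}

Compiling with javac -g and running javap -l prints the LocalVariableTable, where you can see which slot each variable occupies.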
Second is better for speed.
The reason is that in the first case the scope of j is limited to the inner for loop. As such, the moment the inner loop is completed, the memory for j is de-allocated and then allocated again for the next iteration of the outer loop. Because that allocation and deallocation takes some time, even though it's on the stack, the performance of the first one is slower.
Stop micro-optimizing. These little tricks don't make programs run much faster.
Concentrate on big picture optimizations and writing readable code.
Declare variables where they make sense and where they help the reader understand the semantics of the surrounding code, not where you think they will be faster.
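For example, here is a hypothetical snippet of my own to illustrate the point: a value that only matters for one iteration reads best when it is declared inside the loop.

int total(int[] quantities, int unitPrice) {
    int grandTotal = 0;
    for (int qty : quantities) {
        int lineTotal = qty * unitPrice;    // clearly per-iteration state
        grandTotal += lineTotal;
    }
    // Hoisting lineTotal above the loop would force the reader to check
    // whether its final value matters later in the method.
    return grandTotal;
}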
Beware the perils of micro-benchmarking!!!
I took the code, wrapped a method around the outside, and ran that 10 times in a loop. Results:
50, 3,
3, 0,
0, 0,
0, 0,
....
Without some actual code in the loops, the compilers are able to figure out that the loops do no useful work and optimize them away completely. Given the measured performance, I suspect that this optimization might have been done by javac.
Lesson 1: Compilers will often optimize away code that does useless "work". The smarter the compiler is, the more likely it is that this sort of thing will happen. If you don't allow for this in the way you code it, a benchmark can be meaningless.
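For reference, the empty-loop version being timed was presumably something like the following (my reconstruction, not the answer's actual code). Nothing in it has any observable effect, so an optimizer is allowed to delete the whole body:

// My reconstruction of the empty-loop benchmark body: it has no observable
// effect, so the compiler or JIT may legally reduce it to nothing.
static void emptyLoops() {
    for (int i = 0; i < 99999; i++) {
        for (int j = 0; j < 99999; j++) {
        }
    }
}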
So I then added a simple calculation to the body of both loops, if (i < 2 * j) longK++;, and made the test method return the final value of longK. Results:
32267, 33382,
34542, 30136,
12893, 12900,
12897, 12889,
12904, 12891,
12880, 12891,
....
We have obviously stopped the compilers from optimizing the loops away. But now we see the effects of JVM warmup in (in this case) the first two pairs of timings. The first two pairs (i.e. the first two method calls) are probably run purely in interpreted mode, and it looks like JIT compilation may have been happening in parallel with the second call. By the third pair we are most likely running pure native code, and from then on the difference between the timings of the two versions of the loop is simply noise.
Lesson 2: always take into account the effect of JVM warmup. This can seriously distort benchmark results, both micro and macro.
Conclusion - once the JVM has warmed up, there is no measurable difference between the two versions of the loop.
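For reference, a harness along the lines described above might look like the sketch below. The class and method names (LoopBench, declaredInside, declaredOutside) and the timing details are my own assumptions, not the original code; the important parts are returning longK so the work cannot be optimized away and repeating the calls so the warmed-up runs become visible.

public class LoopBench {

    // Code 1: loop variables declared inside the for statements.
    static long declaredInside() {
        long longK = 0;
        for (int i = 0; i < 99999; i++) {
            for (int j = 0; j < 99999; j++) {
                if (i < 2 * j) longK++;     // prevents dead-code elimination
            }
        }
        return longK;
    }

    // Code 2: loop variables declared up front.
    static long declaredOutside() {
        long longK = 0;
        int i, j;
        for (i = 0; i < 99999; i++) {
            for (j = 0; j < 99999; j++) {
                if (i < 2 * j) longK++;
            }
        }
        return longK;
    }

    public static void main(String[] args) {
        long sink = 0;                      // consume the results so they stay live
        for (int run = 0; run < 10; run++) {
            long t0 = System.currentTimeMillis();
            sink += declaredInside();
            long t1 = System.currentTimeMillis();
            sink += declaredOutside();
            long t2 = System.currentTimeMillis();
            System.out.println((t1 - t0) + ", " + (t2 - t1) + ",");
        }
        System.out.println("(ignore) " + sink);
    }
}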