Consider the following two programs that perform the same computations in two different ways:
// v1.c -- the original listing was cut off here; this body is reconstructed
// from the description below (the array size, pass count, and 0.5 starting
// value are assumptions consistent with the 819200000-call total)
#include <stdio.h>
#include <math.h>
int main(void) {
    static double x[8192];
    for (int i = 0; i < 8192; i++) x[i] = 0.5;
    for (int i = 0; i < 8192; i++)           // one element at a time,
        for (int j = 0; j < 100000; j++)     // 100000 dependent sins each
            x[i] = sin(x[i]);
    printf("%f\n", x[0]);
}
Ignore the loop structure altogether, and think only about the sequence of calls to sin. v1 does the following:
x <-- sin(x)
x <-- sin(x)
x <-- sin(x)
...
that is, each computation of sin() cannot begin until the result of the previous call is available; it must wait for the entirety of the previous computation. This means that for the N = 819,200,000 calls to sin, the total time required is N times the latency of a single sin evaluation.
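To put a hypothetical number on that: if a single sin evaluation had a latency of, say, 50 ns (an illustrative figure, not a measurement), the dependent chain alone would cost about 819,200,000 × 50 ns ≈ 41 seconds, no matter how many execution units the processor has sitting idle.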
In v2, by contrast, you do the following:
x[0] <-- sin(x[0])
x[1] <-- sin(x[1])
x[2] <-- sin(x[2])
...
Notice that each call to sin does not depend on the previous call. Effectively, the calls to sin are all independent, and the processor can begin each as soon as the necessary register and ALU resources are available (without waiting for the previous computation to be completed). Thus, the time required is a function of the throughput of the sin function, not the latency, and so v2 can finish in significantly less time.
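For concreteness, here is a minimal sketch of a v2-style program; the array size, pass count, and 0.5 starting value are assumptions carried over from the reconstructed v1.c above, not from the original listing:

// v2.c -- sketch under the same assumptions as the reconstructed v1.c
#include <stdio.h>
#include <math.h>
int main(void) {
    static double x[8192];
    for (int i = 0; i < 8192; i++) x[i] = 0.5;
    for (int j = 0; j < 100000; j++)     // sweep the whole array each pass,
        for (int i = 0; i < 8192; i++)
            x[i] = sin(x[i]);            // so consecutive calls are independent
    printf("%f\n", x[0]);
}

Both versions make the same 819,200,000 calls and (given identical inputs) compute the same values; only the order of the calls differs. Compiling both with something like cc -O2 v1.c -lm and timing them is the easiest way to see the gap for yourself.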
I should also note that DeadMG is right that v1 and v2 are formally equivalent, and in a perfect world the compiler would optimize both of them into a single chain of 100000 sin evaluations (or simply evaluate the result at compile time). Sadly, we live in an imperfect world.
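If every element really does start from the same value (as assumed above), that perfect-world compiler output would look something like:

// ideal.c -- hypothetical result of a perfect optimizer, same assumptions
#include <stdio.h>
#include <math.h>
int main(void) {
    double x = 0.5;
    for (int j = 0; j < 100000; j++)
        x = sin(x);        // one chain of 100000 evaluations...
    printf("%f\n", x);     // ...whose result every array element would share
}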