I'm assuming we're talking about floating-point arithmetic here (otherwise the "better" average will be terrible).
In the second method, the intermediate result (`sum`) will tend to grow without bound, which means you'll eventually lose low-end precision. In the first method, the intermediate result stays at roughly the same magnitude as your input data (assuming your input doesn't have an enormous dynamic range), which means it will retain precision better.
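For reference, here is a minimal sketch of the two approaches as I'm reading them (the variable names are my own assumption):

```python
def incremental_average(xs):
    # Method 1: incremental update. The intermediate value (avg) stays near
    # the magnitude of the data, so low-order precision is preserved.
    avg = 0.0
    for i, x in enumerate(xs, start=1):
        avg += (x - avg) / i
    return avg

def sum_then_divide(xs):
    # Method 2: the running total grows without bound, so once it is large,
    # each new addition can only contribute its high-order bits.
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)
```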
However, I can imagine that as `i` gets bigger and bigger, the value of `(x - avg) / i` will get less and less accurate (relatively), so this method has its disadvantages too.
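To make that concrete, here's a tiny (hypothetical) illustration: once `i` is large enough, the correction term falls below the rounding step of `avg` and the update is lost entirely.

```python
avg = 100.0   # current running average
i = 10**18    # a very large sample count
x = 101.0     # next sample

new_avg = avg + (x - avg) / i
print(new_avg == avg)  # True: 1e-18 is far below the spacing of doubles near 100.0
```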