OK, the answer lies not in overflowing the sum (since that is ruled out), but, as Oli said, in "losing the low-end precision". If the average of the numbers you are summing is much larger than each number's distance from that average, the second approach will lose mantissa bits: once the running sum gets large, the low-order bits of each small contribution are rounded away as it is added in. Since the first approach only works with the relative values, it doesn't suffer from that problem.
So any list of numbers that are greater than, say, 60 million (for single-precision floating point) but that vary by no more than 10 or so should show you the behavior. A float carries a 24-bit significand, so around 60 million adjacent representable values are already 4 apart, and the spacing only gets coarser as the running sum grows, which means variations of ~10 are largely or entirely rounded away.
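For a concrete demonstration, here is a sketch of my own (assuming the two approaches boil down to summing raw values versus summing offsets from a reference; the names `base`, `naive_sum`, and `delta_sum` are made up for illustration, not from the question):

```c
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };
    const float base = 60000000.0f;  /* reference value near the data */

    float naive_sum = 0.0f;  /* accumulate the raw values           */
    float delta_sum = 0.0f;  /* accumulate only offsets from `base` */

    for (int i = 0; i < N; i++) {
        /* values alternate between 60,000,000 and 60,000,010,
           so the exact mean is 60,000,005 */
        float x = base + (i % 2 ? 10.0f : 0.0f);

        naive_sum += x;        /* sum climbs to ~6e13, where adjacent floats
                                  are millions apart, so the +/-10 signal
                                  (and much more) is rounded away */
        delta_sum += x - base; /* stays <= 5,000,000, well inside the 2^24
                                  range where floats hold integers exactly */
    }

    printf("naive mean: %.2f\n", naive_sum / N);        /* typically off by
                                                           far more than 10 */
    printf("delta mean: %.2f\n", base + delta_sum / N); /* ~60,000,005, right
                                                           to within the
                                                           4-unit float step */
    return 0;
}
```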
If you are using double-precision floats, the average values need to be much higher, or the deltas much lower: a double carries a 53-bit significand, so whole units only start dropping out of the sum once magnitudes approach 2^53 (about 9e15).
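In the sketch above, for instance, simply declaring `naive_sum` as a `double` should fix the result at these magnitudes: the sum tops out near 6e13, far below 2^53, so every addition is exact. Scale the values up toward 2^53 and the same low-end loss reappears.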