The problem.
Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU.
Code:
double a = 3015.0;
double b = 0.00025298219406977296;
double c = a / b; // comes out as 11917835.000000000 instead of the expected 11917834.814...
Interestingly, if you declare both a and b as floats, you get exactly 11917835.000000000. So my guess is that a conversion to single precision is happening somewhere, either in how the constants are interpreted or later in the calculation.
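A quick check of the float claim (a minimal sketch; the printed digits are as reported above, assuming default compiler settings):

#include <cstdio>

int main()
{
    // A float has a 24-bit significand, so above 2^23 (8388608) adjacent
    // floats are whole numbers apart and the quotient snaps to an integer.
    float a = 3015.0f;
    float b = 0.00025298219406977296f; // rounds to the nearest representable float
    printf("%.9f\n", a / b);           // prints 11917835.000000000
    return 0;
}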
Either of those conversions would be a bit surprising, though, considering how simple your code is. You aren't using any exotic compiler directives that force single precision for all floating-point numbers, are you?
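For what it's worth, on 32-bit x86 there is also a way to get single-precision results without any compiler switch: if something resets the x87 precision-control word to 24 bits, every x87 operation rounds to float precision even when the variables are doubles (Direct3D 9, for example, does this unless you create the device with D3DCREATE_FPU_PRESERVE). A sketch that reproduces that state with MSVC's _controlfp_s:

#include <cstdio>
#include <float.h>

int main()
{
    volatile double a = 3015.0; // volatile keeps the compiler from folding a / b at compile time
    volatile double b = 0.00025298219406977296;

    printf("default precision: %.9f\n", a / b); // ~11917834.814

    unsigned int prev;
    _controlfp_s(&prev, _PC_24, _MCW_PC); // round every x87 result to a 24-bit significand
    printf("24-bit  precision: %.9f\n", a / b); // 11917835.000000000

    _controlfp_s(&prev, _PC_53, _MCW_PC); // restore the MSVC default
    return 0;
}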
Edit: Have you actually confirmed that the compiled program produces an incorrect result? If not, the most likely culprit for the (erroneous) single-precision conversion is the debugger.
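The simplest way to rule the debugger out is to have the program itself print the result with enough digits to round-trip a double, for example:

#include <cstdio>

int main()
{
    double a = 3015.0;
    double b = 0.00025298219406977296;
    printf("%.17g\n", a / b); // 17 significant digits uniquely identify a double;
                              // expect 11917834.81... if the math stays in double
    return 0;
}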