The problem.
Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU.
Code:
double a = 3015.0;
double b = 0.00025298219406977296;
double f = a/b;
If you need precise math, don't use floating point.
Do yourself a favor and get a BigNum library with rational number support.
I'd guess you're printing out the number without specifying a precision. Try this:
#include <iostream>
#include <iomanip>

int main() {
    double a = 3015.0;
    double b = 0.00025298219406977296;
    double f = a/b;
    std::cout << std::fixed << std::setprecision(15) << f << std::endl;
    return 0;
}
This produces:
11917834.814763514000000
Which looks correct to me. I'm using VC++ 2008 instead of 2005, but I'd guess the difference is in your code, not the compiler.
Are you sure you're examining the value of f right after the fstp instruction? If you've got optimizations turned on, the watch window could be showing a value taken at some later point. (This seems plausible, since you say you're looking at the fractional part of f later - does some instruction wind up masking it out somehow?)
Interestingly, if you declare both a and b as floats, you will get exactly 11917835.000000000. So my guess is that there is a conversion to single precision happening somewhere, either in how the constants are interpreted or later on in the calculations.
Either case is a bit surprising, though, considering how simple your code is. You aren't using any exotic compiler switches that force single precision for all floating-point numbers, are you?
Edit: Have you actually confirmed that the compiled program generates an incorrect result? Otherwise, the most likely candidate for the (erroneous) single precision conversion would be the debugger.
Are you using DirectX anywhere in your program? Creating a Direct3D device switches the floating point unit into single-precision mode unless you specifically tell it not to (the D3DCREATE_FPU_PRESERVE flag), and that would cause exactly this.