    double r = 11.631;
    double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
It seems to me that 21.399999618530273 is the single-precision (float) representation of 21.4, so it looks like the debugger is casting down from double to float somewhere.
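One way to sanity-check that hypothesis is to cast the double down to float yourself and print both values with extra digits. A minimal C sketch (the exact digits assume IEEE 754 binary64/binary32, which virtually all current platforms use):

    #include <stdio.h>

    int main(void) {
        double theta = 21.4;
        float theta_f = (float)theta;  /* explicit cast down to single precision */

        /* The double holds the nearest 64-bit value to 21.4... */
        printf("double: %.15f\n", theta);    /* prints 21.399999999999999 */

        /* ...while the float holds the nearest 32-bit value, which is
           exactly the number the debugger displayed. */
        printf("float : %.15f\n", theta_f);  /* prints 21.399999618530273 */
        return 0;
    }

If the float line matches the debugger's 21.399999618530273, that supports the idea that the debugger (or its display setting) is showing the value at single precision rather than the double actually stored.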