First, my context is that of a compiler writer who needs to convert floating-point literals (strings) into float/double values. I haven't done any floating-point programming.
I suggest converting to double first, then casting the result to float. If the relative difference, |(f - d) / d|, is greater than float precision (FLT_EPSILON, roughly 1.2e-7), then the literal has more digits than can be safely stored in a float. Guard against d being zero before dividing.
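Here is a minimal sketch of that check in C. Everything below is illustrative rather than definitive: `fits_in_float` is a made-up helper name, `FLT_EPSILON` stands in for "float precision", and the sketch assumes IEEE-754 behavior for the double-to-float cast (out-of-range values become infinity).

```c
#include <float.h>   /* FLT_EPSILON */
#include <math.h>    /* fabs */
#include <stdio.h>
#include <stdlib.h>  /* strtod */

/* Parse a literal at double precision, round it to float, and report
   whether the rounding stayed within float precision. Hypothetical
   helper for illustration only. */
static int fits_in_float(const char *literal, float *out)
{
    double d = strtod(literal, NULL);  /* full-precision parse */
    float  f = (float)d;               /* round to nearest float */

    *out = f;

    if (d == 0.0)                      /* avoid dividing by zero */
        return 1;

    /* Relative difference between the float and double values.
       Literals outside float range make f infinite, so they fail
       this test automatically. */
    return fabs(((double)f - d) / d) <= FLT_EPSILON;
}

int main(void)
{
    const char *samples[] = { "3.14159", "3.141592653589793", "1e50" };
    float f;
    for (int i = 0; i < 3; i++)
        printf("%-20s fits in float: %d\n",
               samples[i], fits_in_float(samples[i], &f));
    return 0;
}
```

One caveat worth knowing: for values in the normal float range, rounding a double to the nearest float introduces a relative error of at most FLT_EPSILON/2, so with this exact threshold the test mainly flags overflow and underflow. If you want to flag any loss of digits at all, a stricter variant is to check whether `(double)f == d`.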