I'm still a beginner at programming and I always have more questions than our book or internet searches can answer (unless I missed something). So I apologize in advance if
There are two costs to using float: the obvious one of its limited range and precision, and, less obviously, the more difficult numerical analysis those limitations impose.
It is often relatively easy to determine that double is sufficient, even in cases where it would take significant numerical analysis effort to show that float is sufficient. That saves development cost and avoids the risk of incorrect results if the more difficult analysis is not done correctly.
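To make the precision cost concrete, here is a minimal C sketch (the language is my assumption; the question doesn't name one) that accumulates the same series in float and in double. The exact sum of ten million 0.1 increments is 1,000,000; float's roughly 7 significant decimal digits are not enough to keep the running total accurate, while double stays very close.

```c
#include <stdio.h>

int main(void) {
    /* Accumulate ten million increments of 0.1 in both precisions.
       The mathematically exact sum is 1,000,000. */
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 10000000; i++) {
        f += 0.1f;
        d += 0.1;
    }
    printf("float  sum: %f\n", f);  /* visibly wrong: rounding error accumulates */
    printf("double sum: %f\n", d);  /* very close to 1000000 */
    return 0;
}
```

Bounding this kind of accumulated error is exactly the "more difficult analysis" that float demands before it can be trusted.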
Float's biggest advantage on many processors is its reduced memory footprint: the same count of values takes half the space, which means more numbers per cache line and more numbers transferred per second for a given memory bandwidth. Any gain in raw compute speed is usually slight; indeed, some popular processors (the x87 FPU, for example) perform all floating-point arithmetic internally in a single format wider than double, so float and double arithmetic cost the same there.
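For a sense of the footprint difference, the short C sketch below just prints how many values of each type fit in one cache line. The 64-byte line size is an assumption (typical of current x86 and many ARM cores), as are the usual 4-byte float and 8-byte double sizes.

```c
#include <stdio.h>

int main(void) {
    /* Assumed 64-byte cache line; sizeof(float)/sizeof(double) are
       4 and 8 bytes on typical platforms, but not guaranteed. */
    const size_t cache_line = 64;
    printf("sizeof(float)  = %zu -> %zu values per cache line\n",
           sizeof(float),  cache_line / sizeof(float));
    printf("sizeof(double) = %zu -> %zu values per cache line\n",
           sizeof(double), cache_line / sizeof(double));
    return 0;
}
```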
It seems best to use double unless two conditions are met: there are enough numbers for their memory footprint to be a significant performance issue, and the developers can show that float is precise enough.