I'm curious as to why the IEEE calls a 32-bit floating point number single precision. Was it just a means of standardization, or does 'single' actually refer to a single 'something'?
The terminology "double" isn't quite correct, but it's close enough.
A 64-bit float uses 52 bits for the fraction, versus the 23 fraction bits in a 32-bit float - so the precision isn't literally doubled, but the total number of bits is.
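To see the split concretely, here's a minimal sketch in Python (using only the standard struct module) that reinterprets the same value in both formats and prints the sign, exponent, and fraction fields. The field widths (1/8/23 for single, 1/11/52 for double) come straight from IEEE 754; the helper names are just for illustration.

```python
import struct

def fields32(x):
    # Reinterpret x as a 32-bit single: 1 sign bit, 8 exponent bits, 23 fraction bits.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def fields64(x):
    # Reinterpret x as a 64-bit double: 1 sign bit, 11 exponent bits, 52 fraction bits.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & 0xFFFFFFFFFFFFF

x = 0.1
s, e, f = fields32(x)
print(f"single: sign={s} exponent={e:08b} fraction={f:023b}")
s, e, f = fields64(x)
print(f"double: sign={s} exponent={e:011b} fraction={f:052b}")
```

Running it shows the double's fraction field is 52 bits wide rather than 46 (which would be exactly double 23) - the extra bits beyond that went to widening the exponent from 8 to 11 bits as well.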
The answer to this question is very interesting - you should give it a read.