I'm curious as to why the IEEE calls a 32-bit floating point number single precision. Was it just a means of standardization, or does 'single' actually refer to a single 'something'?
On the machine I was working on at the time, a float occupied a single 36-bit register. A double occupied two 36-bit registers. The hardware had separate instructions for operating on the one-register and two-register versions of the number. I don't know for certain that that's where the terminology came from, but it's possible.
The terminology "double" isn't quite exact, but it's close enough.
A 64-bit float uses 52 bits for the fraction, versus the 23 fraction bits of a 32-bit float, so the precision isn't literally doubled, but the format does use double the total number of bits.
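To make the bit budgets concrete, here is a minimal sketch in C (assuming the IEEE 754 representation used on essentially all current hardware) that pulls apart the sign, exponent, and fraction fields of a float and a double. The field widths and the value 1.5 are just illustrative choices, not anything mandated by the answer above.

```c
/* Sketch: print the bit layout of IEEE 754 single (1+8+23) and
 * double (1+11+52) precision, assuming the usual representation. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float  f = 1.5f;
    double d = 1.5;
    uint32_t fb;
    uint64_t db;

    /* memcpy avoids the aliasing pitfalls of pointer casts */
    memcpy(&fb, &f, sizeof fb);
    memcpy(&db, &d, sizeof db);

    /* single: bit 31 = sign, bits 30..23 = exponent, bits 22..0 = fraction */
    printf("float  1.5: sign=%u exp=%3u  frac=0x%06x  (23 fraction bits)\n",
           (unsigned)(fb >> 31), (unsigned)((fb >> 23) & 0xFF),
           (unsigned)(fb & 0x7FFFFF));

    /* double: bit 63 = sign, bits 62..52 = exponent, bits 51..0 = fraction */
    printf("double 1.5: sign=%u exp=%4u frac=0x%013llx (52 fraction bits)\n",
           (unsigned)(db >> 63), (unsigned)((db >> 52) & 0x7FF),
           (unsigned long long)(db & 0xFFFFFFFFFFFFFULL));
    return 0;
}
```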
The answer to this question is very interesting - you should give it a read.
In addition to the hardware view, on most systems the 32-bit format was used to implement the Fortran "real" type, and the 64-bit format to implement the Fortran "double precision" type.
I think it just refers to the number of bits used to represent the floating-point number, where single-precision uses 32 bits and double-precision uses 64 bits, i.e. double the number of bits.
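A quick way to check the "double the bits" observation is the snippet below (C, chosen just for illustration): sizeof reports the total storage of each type, and the standard <float.h> constants report how many binary significand digits each format actually carries on your platform.

```c
/* Sketch: compare storage size and significand precision of
 * float and double on the current platform. */
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("float : %zu bytes, %d binary significand digits\n",
           sizeof(float), FLT_MANT_DIG);   /* typically 4 bytes, 24 */
    printf("double: %zu bytes, %d binary significand digits\n",
           sizeof(double), DBL_MANT_DIG);  /* typically 8 bytes, 53 */
    return 0;
}
```

On an IEEE 754 system this prints 4 bytes/24 digits and 8 bytes/53 digits, which matches the point above: the total width doubles, while the precision goes from 24 to 53 significand bits rather than exactly doubling.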