I'm curious as to why the IEEE calls a 32 bit floating point number single precision. Was it just a means of standardization, or does 'single' actually refer to a single 'something'?
I think it just refers to the number of bits used to represent the floating-point number: single precision uses 32 bits, and double precision uses 64 bits, i.e. double the number of bits.
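
You can see the size difference, and the precision it buys, directly from C. Here's a minimal sketch, assuming a platform that follows IEEE 754 (essentially all modern ones), where `float` is the 32-bit single-precision format and `double` the 64-bit double-precision format:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    // Bit widths and guaranteed decimal digits of each format.
    printf("float:  %zu bits, %d decimal digits\n",
           sizeof(float) * 8, FLT_DIG);
    printf("double: %zu bits, %d decimal digits\n",
           sizeof(double) * 8, DBL_DIG);

    // The extra bits show up as extra precision: 1/3 stored in
    // each format starts to diverge around the 8th digit.
    printf("1/3 as float:  %.17f\n", (double)(1.0f / 3.0f));
    printf("1/3 as double: %.17f\n", 1.0 / 3.0);
    return 0;
}
```

On a typical machine this prints 32 bits / 6 digits for `float` and 64 bits / 15 digits for `double`, which is exactly the single-vs-double relationship the names describe.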