I'm curious as to why the IEEE calls a 32-bit floating point number single precision. Was it just a means of standardization, or does 'single' actually refer to a single 'something'?
In addition to the hardware view, on most systems the 32-bit format was used to implement the Fortran "real" type, and the 64-bit format to implement the Fortran "double precision" type.
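The same pairing carries over to C, where `float` and `double` conventionally map to the IEEE 754 32-bit and 64-bit formats on virtually all modern systems (the C standard doesn't strictly require this). A minimal sketch under that assumption:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* Assumes IEEE 754: float is the 32-bit single format and double
       the 64-bit double format, mirroring Fortran's REAL and
       DOUBLE PRECISION. */
    printf("float:  %zu bytes, %d significand bits\n",
           sizeof(float), FLT_MANT_DIG);   /* typically 4 bytes, 24 bits */
    printf("double: %zu bytes, %d significand bits\n",
           sizeof(double), DBL_MANT_DIG);  /* typically 8 bytes, 53 bits */
    return 0;
}
```

The significand counts include the implicit leading bit, which is why single precision shows 24 bits even though only 23 are stored.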