Years ago I learned the hard way about precision problems with floats, so I quit using them. However, I still run into code using floats, and it makes me cringe because I know some precision problem is lurking in there.
There is in fact one area where it is still common to use floats, a.k.a. "single precision" with 32 bits: graphics applications and printing.
Another reason is graphics cards and their GPUs: the smaller the datatype, the faster the operation, because fewer bits have to be moved around. Integer datatypes have a problem with high dynamic range (HDR) images: the eye functions over a luminosity range of 1:10^13 and discerns roughly 4000 levels. So while integer datatypes can store the number of levels, they cannot also store the absolute background brightness, whereas floats handle that without trouble. In fact IEEE 754-2008 (the 754R revision) defines a "half precision" float with 16 bits and a 10-bit mantissa, which loses some precision but allows even greater speed. OpenGL and DirectX, for example, use floats extensively. The eye is very forgiving with artifacts, so no problem there.
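To make the dynamic-range point concrete, here is a small NumPy sketch (my own illustration, not taken from any of the APIs above) that stores luminance values spanning eight orders of magnitude once in a 16-bit integer and once in a 16-bit half float:

```python
import numpy as np

# Hypothetical luminance samples, from starlight to bright daylight
# (arbitrary units spanning eight orders of magnitude).
luminance = np.array([1e-4, 1e-2, 1.0, 1e2, 1e4], dtype=np.float64)

# 16-bit unsigned integer: one fixed scale for everything, so the
# dark end collapses to 0 while the bright end eats the whole range.
as_uint16 = np.clip(luminance, 0, 65535).astype(np.uint16)
print(as_uint16)    # -> [    0     0     1   100 10000], dark values are lost

# 16-bit half float: only ~3 decimal digits of precision, but a range
# of roughly 6e-5 .. 65504, so every order of magnitude survives.
as_float16 = luminance.astype(np.float16)
print(as_float16)   # -> values close to 1e-04, 1e-02, 1, 1e+02, 1e+04
```

The integer version could of course be rescued with a per-image scale factor, but that is exactly the bookkeeping the float format does for you in its exponent.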
All other media building on graphics inherit floats as a convenient measure. The mantissa has 24 bits, which allows 2^24 ≈ 16.7 million consecutive steps. With a printer at 2000 dpi resolution you can still address a sheet of about 213 m × 213 m. More than enough precision.
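A quick back-of-the-envelope check of that figure (a throwaway snippet, assuming the 2000 dpi from above):

```python
mantissa_steps = 2 ** 24        # consecutive integers exactly representable in a 32-bit float
dpi = 2000                      # assumed printer resolution
inches = mantissa_steps / dpi   # largest addressable coordinate in inches
metres = inches * 0.0254
print(f"{metres:.0f} m")        # -> 213 m per side
```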