The term precision usually refers to the number of significant digits (or bits) in the represented value, so precision depends on the number of bits in the mantissa (significand) of the representation. Distance from the origin plays no role.
What you describe is true of the density of floats on the real line, but the right term for it is accuracy, not precision. FP numbers of small magnitude are far more accurate than larger ones, because the gap between adjacent floats doubles with each power of two. This contrasts with integers, which have uniform accuracy over their entire range.
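A quick sketch of this in Python (using math.ulp, available since 3.9) shows the gap between adjacent doubles growing with magnitude, even though the precision — 53 significant bits — stays constant:

```python
import math

# math.ulp(x) returns the gap between x and the next representable
# float: the absolute error of the nearest-float approximation near x.
# Precision (53 bits) is fixed; the gap doubles with each power of two.
for x in [1.0, 1e6, 1e15]:
    print(f"ulp({x:g}) = {math.ulp(x)}")
# ulp(1) = 2.220446049250313e-16
# ulp(1e+06) = 1.1641532182693481e-10
# ulp(1e+15) = 0.125
```

Near 1e15 the spacing is already 0.125, so adding a quantity smaller than about 0.0625 to such a value can be lost entirely to rounding.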
I highly recommend the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (Goldberg, 1991), which covers this and much more.