The full context of the below is on page 8 of What Every Computer Scientist Should Know About Floating-Point Arithmetic. In the below it is stated: “In general, when the base is b, a fixed relative error expressed in ulps can wobble by a factor of up to b.”
It is demonstrated on the same page as the quotes you give, in the paragraph before “In general, when the base is b, a fixed relative error expressed in ulps can wobble by a factor of up to b.”
That paragraph explains that numbers from 1.000…0 · b^e to d.ddd…d · b^e, where d is, as I use it here, the digit b−1, have the same ULP, because the ULP is the value of the last digit of the significand, and that value is determined by the exponent e (and the number of digits in the significand, which is fixed for the format).
These numbers span a ratio of almost b, because d.ddd…d is almost b, so d.ddd…d / 1.000…0 is almost b. But they all have the same ULP. Therefore, the magnitude of one ULP relative to the number spans a ratio of (almost) b.
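The same effect can be observed directly in Python's binary64 floats (b = 2), using the standard-library `math.ulp` and `math.nextafter` (Python 3.9+). This is a small illustration, not part of the quoted text:

```python
import math

# In binary64, every number in [1.0, 2.0) has the same exponent,
# and therefore the same ULP (2**-52).
lo = 1.0
hi = math.nextafter(2.0, 0.0)   # largest double below 2.0, i.e. 1.111...1 * 2^0

assert math.ulp(lo) == math.ulp(hi)   # same absolute ULP across the whole binade

# Relative size of one ULP at each end of the binade:
rel_lo = math.ulp(lo) / lo
rel_hi = math.ulp(hi) / hi

# The ratio approaches b = 2: the same absolute ULP is almost twice as
# large, relative to the value, at 1.0 as it is just below 2.0.
print(rel_lo / rel_hi)
```

Running this prints a value just under 2, which is the "wobble by a factor of up to b" for the binary case.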
And what exactly is meant by “a fixed relative error expressed in ulps can wobble by a factor of up to b”? If the relative error is fixed, then how can it wobble or change?
The error is said to be a fixed number of ULPs. But the value of an ULP is not truly fixed or constant relative to the floating-point values it measures.
There is just a language issue here. It is not accurate to speak of an ULP as a “fixed relative error”. However, people sometimes express error bounds or error amounts in ULPs because the nature of floating-point quantizes values, and those quanta are ULPs.
An ULP is approximately a relative measure of error: its size relative to the value it measures stays within the same narrow range throughout the entire scale of a floating-point format. Consider a three-digit decimal format:
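As a sketch of what such a table looks like, the following snippet models a hypothetical three-digit decimal format (significands of the form d.dd, so one ULP is 0.01 · 10^e) and prints the ULP and its relative size for a few sample values; the format and sample values are my own illustration:

```python
def ulp3(x):
    """ULP of x in a hypothetical 3-significant-digit decimal format."""
    # Find the exponent e such that 1.00 <= x / 10**e < 10.
    e = 0
    v = x
    while v >= 10:
        v /= 10
        e += 1
    return 0.01 * 10**e   # value of the last significand digit

# Sample values spanning two decades.
for x in [1.00, 5.00, 9.99, 10.0, 50.0, 99.9]:
    u = ulp3(x)
    print(f"x = {x:6}  ULP = {u:5}  ULP/x = {u / x:.5f}")
```

The ULP/x column stays between roughly 0.001 and 0.01 across the whole range, i.e. it varies by a factor of b = 10 but never more.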
As you can see, the value of an ULP relative to the surrounding floating-point numbers always stays within a small interval. Thus, it serves as an approximation of a relative error.
The fact that our expressions of relative errors in floating-point arithmetic wobble by a factor of b comes from two mathematical facts: all numbers with the same exponent share the same ULP, and those numbers span a ratio of almost b.