Question
Why doesn't the Decimal data type have an Epsilon field?
From the manual, the range of decimal values is ±1.0 × 10^−28 to ±7.9 × 10^28.
The description of Double.Epsilon says: "Represents the smallest positive Double value greater than zero."
So it seems Decimal has such a (non-trivial) value too. But why isn't it easily accessible?
I do understand that +1.0 × 10^−28 is exactly the smallest positive Decimal value greater than zero:
decimal Decimal_Epsilon = new decimal(1, 0, 0, false, 28); //1e-28m;
By the way, there are a couple of questions that give information about the Decimal data type's internal representation:
- decimal in c# misunderstanding?
- What's the second minimum value that a decimal can represent?
Here's an example where Epsilon would be useful.
Let's say I have a weighted sum of values from some sampling set and the sum of weights (or count) of samples taken. Now I want to compute the weighted mean value. But I know that the sum of weights (or count) may still be zero. To prevent division by zero I could write an if... else... and check for zero. Or I could write it like this:
T weighted_mean = weighted_sum / (weighted_count + T.Epsilon)
This code is shorter to my eye. Alternatively, I can skip the + T.Epsilon and instead initialize with:
T weighted_count = T.Epsilon;
I can do this when I know that the real weights are never anywhere near Epsilon.
And for some data types and use cases this may even be faster, since it does not involve branches. As I understand it, processors cannot take both sides of a branch for computation, even when the branches are short, and I may know that the zeros occur randomly at a 50% rate :=) For Decimal, though, the speed aspect is likely not important, or the branching approach may even be positively useful.
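For decimal specifically, a hand-rolled constant can stand in for the missing field. Below is a minimal sketch of the initialize-with-epsilon variant described above; the DecimalEpsilon name and the sample data are made up for illustration:

using System;

class WeightedMeanSketch
{
    // Hand-rolled stand-in for the missing Decimal.Epsilon field: 1e-28m.
    static readonly decimal DecimalEpsilon = new decimal(1, 0, 0, false, 28);

    static void Main()
    {
        decimal weightedSum = 0m;
        // Seeding the accumulator with the epsilon avoids the zero check,
        // assuming real weights are always vastly larger than 1e-28.
        decimal weightedCount = DecimalEpsilon;

        decimal[] values  = { 10m, 20m };   // hypothetical samples
        decimal[] weights = { 0.5m, 1.5m }; // hypothetical weights

        for (int i = 0; i < values.Length; i++)
        {
            weightedSum   += values[i] * weights[i];
            weightedCount += weights[i];
        }

        decimal weightedMean = weightedSum / weightedCount;
        Console.WriteLine(weightedMean); // approximately 17.5, skewed negligibly by the epsilon seed
    }
}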
My code may be generic (for example, generated) and I do not want to write separate code for decimals. Therefore one would like to see that Decimal has a similar interface to the other real-valued types.
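One workaround for generated or generic-style code is to centralize the per-type constants yourself. The helper below is purely hypothetical (EpsilonOf is not part of the framework); it simply maps each real-valued type to a value that plays the role of Epsilon:

using System;

// Hypothetical helper: maps each real-valued type to an "Epsilon" value, so
// generated code can ask for EpsilonOf<T>.Value instead of special-casing decimal.
static class EpsilonOf<T>
{
    public static readonly T Value = GetValue();

    static T GetValue()
    {
        if (typeof(T) == typeof(double))  return (T)(object)double.Epsilon;
        if (typeof(T) == typeof(float))   return (T)(object)float.Epsilon;
        if (typeof(T) == typeof(decimal)) return (T)(object)new decimal(1, 0, 0, false, 28);
        throw new NotSupportedException($"No epsilon defined for {typeof(T)}");
    }
}

class EpsilonOfDemo
{
    static void Main()
    {
        Console.WriteLine(EpsilonOf<decimal>.Value); // 0.0000000000000000000000000001
        Console.WriteLine(EpsilonOf<double>.Value);  // the smallest positive subnormal double
    }
}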
Answer 1:
Contrary to that definition, epsilon is actually a concept used to eliminate the ambiguity of conversion between binary and decimal representations of values. For example, 0.1 in decimal doesn't have a simple binary representation, so when you declare a double as 0.1, it is actually set to an approximate representation in binary. If you add ten copies of that binary approximation together (mathematically), you get a number that is approximately 1.0, but not exactly. An epsilon lets you fudge the math, and say that the approximate representation of 0.1 added to itself can be considered equivalent to the approximate representation of 0.2.
This approximation, caused by the nature of the representation, is not needed for the decimal value type, which is already a decimal representation. This is why any time you need to deal with exact quantities rather than approximations (e.g. money as opposed to mass), the correct floating point type to use is decimal and not double.
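A quick way to see the difference this answer describes (the loop below is just for illustration; the exact printed form of the double result can vary between runtimes):

using System;

class BinaryVsDecimal
{
    static void Main()
    {
        double d = 0.0;
        decimal m = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            d += 0.1;  // binary approximation of 0.1 accumulates error
            m += 0.1m; // 0.1 is exactly representable as a decimal
        }
        Console.WriteLine(d == 1.0);        // False
        Console.WriteLine(m == 1.0m);       // True
        Console.WriteLine(d.ToString("R")); // e.g. 0.99999999999999989
    }
}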
Answer 2:
If we just think about the 96 bit mantissa, the Decimal type can be thought of as having an epsilon equal to the reciprocal of a BigInteger constructed with 96 set bits. That is obviously too small a number to represent with current intrinsic value types.
In other words, we would need a "BigReal" value to represent such a small fraction.
And frankly, that is just the "granularity" of the epsilon. We would then need to know the exponent (bits 16-23 of the highest Int32 from GetBits()) to arrive at the "real" epsilon for a GIVEN decimal value.
Obviously, the meaning of "epsilon" for Decimal is variable. You can use the granularity epsilon with the exponent and come up with a specific epsilon for a GIVEN decimal.
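The value-specific epsilon described here can be read straight off the scale byte. Here is a sketch, where "epsilon" is taken to mean one unit in the last place at the value's current scale (my interpretation of the answer, not an official API):

using System;

class ScaleEpsilon
{
    // One unit in the last place for a given decimal, derived from its scale
    // (bits 16-23 of the fourth element returned by decimal.GetBits).
    static decimal UlpOf(decimal value)
    {
        int scale = (decimal.GetBits(value)[3] >> 16) & 0xFF;
        return new decimal(1, 0, 0, false, (byte)scale);
    }

    static void Main()
    {
        Console.WriteLine(UlpOf(1.0m));   // 0.1
        Console.WriteLine(UlpOf(1.00m));  // 0.01
        Console.WriteLine(UlpOf(1e-28m)); // 0.0000000000000000000000000001
    }
}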
But consider the following rather problematic situation:
[TestMethod]
public void RealEpsilonTest()
{
    var dec1 = Decimal.Parse("1.0");
    var dec2 = Decimal.Parse("1.00");
    Console.WriteLine(BitPrinter.Print(dec1, " "));
    Console.WriteLine(BitPrinter.Print(dec2, " "));
}
DEC1: 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00001010
DEC2: 00000000 00000010 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 01100100
Despite the two parsed values seemingly being equal, their representation is not the same!
The moral of the story is... be very careful that you thoroughly understand Decimal before THINKING that you understand it!!!
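The same effect can be observed without the BitPrinter helper used above, by calling the standard decimal.GetBits method; the integers in the comments are what I would expect given the bit dump shown (mantissa 10 at scale 1 versus mantissa 100 at scale 2):

using System;
using System.Globalization;

class GetBitsDemo
{
    static void Main()
    {
        decimal dec1 = decimal.Parse("1.0", CultureInfo.InvariantCulture);
        decimal dec2 = decimal.Parse("1.00", CultureInfo.InvariantCulture);

        Console.WriteLine(dec1 == dec2);                             // True: the values compare equal...
        Console.WriteLine(string.Join(", ", decimal.GetBits(dec1))); // 10, 0, 0, 65536   (...but mantissa 10,  scale 1)
        Console.WriteLine(string.Join(", ", decimal.GetBits(dec2))); // 100, 0, 0, 131072 (and mantissa 100, scale 2)
    }
}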
HINT:
If you want the epsilon for Decimal (theoretically), create a UNION ([StructLayout(LayoutKind.Explicit)]) combining Decimal (128 bits), BigInteger (96 bits), and Exponent (8 bits). The getter for Epsilon would return the correct BigReal value based on the granularity epsilon and the exponent; assuming, of course, the existence of a BigReal definition (which, I've been hearing for quite some time, will be coming).
The granularity epsilon, by the way, would be a constant or a static field...
static BigReal grain = new BigReal(1 / new BigInteger(new byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }));
HOMEWORK: Should the last byte passed to BigInteger be 0xFF or 0x7F (or something else altogether)?
PS: If all of that sounds rather more complicated than you were hoping, ... consider that comp science pays reasonably well. /-)
Answer 3:
The smallest number I can calculate for decimal is:
public static decimal DecimalEpsilon = (decimal) (1 / Math.Pow(10, 28));
This is from running the following in a C# Interactive Window:
for (int power = 0; power <= 50; power++) { Console.WriteLine($"1 / 10^{power} = {((decimal)(1 / (Math.Pow(10, power))))}"); }
Which has the following output (excerpt):
1 / 10^27 = 0.000000000000000000000000001
1 / 10^28 = 0.0000000000000000000000000001
1 / 10^29 = 0
1 / 10^30 = 0
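For what it's worth, the value this loop converges on appears to be the same one the question constructs directly; a quick consistency check (the variable names are arbitrary, and I'm assuming the double-to-decimal conversion rounds to the value printed above):

using System;

class EpsilonCheck
{
    static void Main()
    {
        decimal fromDouble      = (decimal)(1 / Math.Pow(10, 28)); // this answer's value
        decimal fromConstructor = new decimal(1, 0, 0, false, 28); // the constructor form from the question
        decimal fromLiteral     = 1e-28m;                          // a decimal literal with an exponent

        Console.WriteLine(fromDouble == fromConstructor); // True
        Console.WriteLine(fromDouble == fromLiteral);     // True
    }
}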
Source: https://stackoverflow.com/questions/11781899/c-sharp-decimal-epsilon