Simple question - why does the Decimal type define these constants? Why bother?
I'm looking for a reason why this is defined by the language, not possible uses or effects.
Small clarification. They are actually static readonly values and not constants. That is a distinct difference in .NET because constant values are inlined by the various compilers, which makes it impossible to track their usage in a compiled assembly. Static readonly values, however, are not copied but referenced. This is advantageous for your question because it means their use can be analyzed.
If you use Reflector and dig through the BCL, you'll notice that MinusOne and Zero are only used within the VB runtime. They exist primarily to serve conversions between Decimal and Boolean values. Why MinusOne is used coincidentally came up on a separate thread just today (link)
Oddly enough, if you look at the Decimal.One value you'll notice it's used nowhere.
As to why they are explicitly defined ... I doubt there is a hard and fast reason. There appears to be no specific performance benefit, and only a bit of convenience can be attributed to their existence. My guess is that they were added by someone during the development of the BCL for their own convenience and just never removed.
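For a rough feel of the conversion semantics involved, here is a small sketch using the BCL's Convert class (the VB runtime uses its own conversion helpers, which this does not show): the framework maps Boolean values onto 0 and 1, while VB keeps the classic convention that True converts to -1, which is where MinusOne comes into play.

using System;

class BooleanDecimalSketch
{
    static void Main()
    {
        // The BCL's general-purpose conversion maps false/true to 0 and 1.
        Console.WriteLine(Convert.ToDecimal(false));  // 0
        Console.WriteLine(Convert.ToDecimal(true));   // 1

        // VB.Net keeps the classic convention that True converts to -1
        // (in VB: CDec(True) yields -1D), which is what Decimal.MinusOne serves.
    }
}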
EDIT

Dug into the const issue a bit more after a comment by @Paleta. The C# definition of Decimal.One uses the const modifier; however, it is emitted as a static readonly at the IL level. The C# compiler uses a couple of tricks to make this value virtually indistinguishable from a const (inlining literals, for example). This would show up in a language which recognizes this trick (VB.Net recognizes it but F# does not).
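If it helps to see the trick concretely, here is a minimal sketch (my own demo class, not from the BCL) that declares a const decimal and uses reflection to show that the compiler emits it as a static readonly field carrying a DecimalConstantAttribute rather than a true IL constant:

using System;
using System.Linq;
using System.Reflection;

class ConstDecimalDemo
{
    // Declared with const in C#, but decimal cannot be a true metadata constant,
    // so the compiler emits a static initonly (readonly) field plus a
    // DecimalConstantAttribute that other compilers can read back as a constant.
    public const decimal One = 1m;

    static void Main()
    {
        FieldInfo field = typeof(ConstDecimalDemo).GetField("One");

        Console.WriteLine(field.IsLiteral);   // False - not a real IL constant
        Console.WriteLine(field.IsInitOnly);  // True  - static readonly at the IL level
        Console.WriteLine(field.GetCustomAttributesData()
            .Any(a => a.AttributeType.Name == "DecimalConstantAttribute"));  // True
    }
}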
Those 3 values arghhh !!!
I think they may have something to do with what I call trailing 1's.

Say you have this formula:

(x) 1.116666 + (y) = (z) 2.00000

but x and z are rounded to 1.11 and 2.00, and you are asked to calculate (y). So you may think y = 2.00 - 1.11. Actually y equals 0.88, but you will get 0.89 (there is a 0.01 difference). Depending on the real values of x and y, the results will vary from -0.01 to +0.01. In some cases, when dealing with a bunch of those trailing 1's, to facilitate things you can check whether the trailing value equals Decimal.MinusOne / 100, Decimal.One / 100 or Decimal.Zero / 100 to fix them.

This is how I've made use of them.
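A rough sketch of that check, using the figures from the example above (the correction rule is just one way to read the idea, not a general recipe):

using System;

class TrailingOnesSketch
{
    static void Main()
    {
        decimal trueY  = 0.88m;              // what y actually is
        decimal naiveY = 2.00m - 1.11m;      // 0.89 - recomputed from the rounded figures

        decimal residue = naiveY - trueY;    // 0.01, a single trailing cent

        // Compare the residue against Decimal.MinusOne / 100, Decimal.Zero / 100
        // and Decimal.One / 100, and fold a one-cent residue back into the result.
        if (residue == Decimal.One / 100 || residue == Decimal.MinusOne / 100)
        {
            naiveY -= residue;
        }

        Console.WriteLine($"corrected y = {naiveY}, residue was {residue}");
    }
}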
My opinion on it is that they are there to help avoid magic numbers.
Magic numbers are basically anywhere in your code that you have an arbitrary number floating around. For example:
int i = 32;
This is problematic in the sense that nobody can tell why i is getting set to 32, or what 32 signifies, or if it should be 32 at all. It's magical and mysterious.
In a similar vein, I'll often see code that does this:
int i = 0;
int z = -1;
Why are they being set to 0 and -1? Is this just coincidence? Do they mean something? Who knows?
While Decimal.One, Decimal.Zero, etc. don't tell you what the values mean in the context of your application (maybe zero means "missing", etc.), they do tell you that the value has been deliberately set and likely has some meaning.
While not perfect, this is much better than not telling you anything at all :-)
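To put that in Decimal terms (the running-total and sentinel meanings here are made up purely for illustration):

using System;

class DeliberateValues
{
    static void Main()
    {
        // Decimal.Zero says "deliberately start from nothing" more clearly
        // than a bare 0 sitting in the middle of the code.
        decimal runningTotal = Decimal.Zero;

        // A made-up sentinel: MinusOne marks "no discount has been set yet".
        decimal discount = Decimal.MinusOne;

        runningTotal += 19.99m;

        if (discount == Decimal.MinusOne)
        {
            Console.WriteLine($"Total {runningTotal}: no discount applied.");
        }
    }
}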
Note: it's not for optimization. Observe this C# code:
public static Decimal d = 0M;
public static Decimal dZero = Decimal.Zero;
When looking at the generated bytecode using ildasm, both options result in identical MSIL. System.Decimal is a value type, so Decimal.Zero is no more "optimal" than just using a literal value.
Some .NET languages do not support decimal literals; in these cases it is more convenient (and faster) to write Decimal.One instead of new Decimal(1).
Java's BigInteger class has ZERO and ONE as well, for the same reason.
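In C# itself you would normally just write 1m, but the two fallbacks such a language has look like this (a trivial sketch):

using System;

class NoLiteralFallback
{
    static void Main()
    {
        decimal viaConstructor = new Decimal(1);  // build the value explicitly
        decimal viaField       = Decimal.One;     // reuse the predefined value

        Console.WriteLine(viaConstructor == viaField);  // True
    }
}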