"When displaying the value of a decimal currently with .ToString(), it's accurate to like 15 decimal places, and since I'm using it to represent dollars and cents..."
The top-rated answer describes a method for formatting the string representation of the decimal value, and it works.
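That approach boils down to a standard numeric format string; a minimal sketch of the idea (illustrative only, not necessarily the exact code from that answer):

decimal amount = 500m;
// Formats only the displayed text; the stored value itself keeps its original scale.
string display = amount.ToString("0.00", CultureInfo.InvariantCulture); // "500.00"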
However, if you actually want to change the precision stored in the value itself, you need to write something like the following:
public static class PrecisionHelper
{
    public static decimal TwoDecimalPlaces(this decimal value)
    {
        // These first lines eliminate all digits past two places.
        // (decimal.Truncate avoids the overflow an (int) cast would hit
        // for values larger than int.MaxValue / 100.)
        var timesHundred = decimal.Truncate(value * 100);
        var removeZeroes = timesHundred / 100m;

        // In this implementation, I don't want to alter the underlying
        // value. As such, if it needs greater precision to stay unaltered,
        // I return it.
        if (removeZeroes != value)
            return value;

        // Addition and subtraction can reliably change precision.
        // For two decimal values A and B, (A + B) will have at least as
        // many digits past the decimal point as A or B.
        return removeZeroes + 0.01m - 0.01m;
    }
}
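The last line works because System.Decimal carries a scale (the number of digits kept after the decimal point), and adding or subtracting a value preserves the larger scale of the two operands. A small standalone illustration (variable names are just for this sketch; assumes using System.Globalization):

decimal noTrailingZeroes = 500m;            // scale 0, ToString() gives "500"
decimal bumped = noTrailingZeroes + 0.01m;  // scale 2, "500.01"
decimal restored = bumped - 0.01m;          // the scale of 2 survives, "500.00"
string text = restored.ToString(CultureInfo.InvariantCulture); // "500.00"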
An example unit test:
[Test]
public void PrecisionExampleUnitTest()
{
    decimal a = 500m;
    decimal b = 99.99m;
    decimal c = 123.4m;
    decimal d = 10101.1000000m;
    decimal e = 908.7650m;

    Assert.That(a.TwoDecimalPlaces().ToString(CultureInfo.InvariantCulture),
        Is.EqualTo("500.00"));
    Assert.That(b.TwoDecimalPlaces().ToString(CultureInfo.InvariantCulture),
        Is.EqualTo("99.99"));
    Assert.That(c.TwoDecimalPlaces().ToString(CultureInfo.InvariantCulture),
        Is.EqualTo("123.40"));
    Assert.That(d.TwoDecimalPlaces().ToString(CultureInfo.InvariantCulture),
        Is.EqualTo("10101.10"));

    // In this particular implementation, values that can't be expressed in
    // two decimal places are unaltered, so this remains as-is.
    Assert.That(e.TwoDecimalPlaces().ToString(CultureInfo.InvariantCulture),
        Is.EqualTo("908.7650"));
}
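At a call site the extension then reads like any other decimal member (the names here are purely illustrative):

decimal subtotal = 500m;
decimal shipping = 9.5m;
decimal total = (subtotal + shipping).TwoDecimalPlaces();
// total.ToString(CultureInfo.InvariantCulture) yields "509.50"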