I'd like to be able to get the bits from a System.Decimal value and then convert them back to the string representation of the value, much like Decimal.ToString() does.
You can use the Decimal constructor Decimal(Int32[]) to convert your value back:
Decimal Constructor (Int32[])
Initializes a new instance of Decimal to a decimal value represented in binary and contained in a specified array.
Afterwards, you can use ToString if you want.
Example:
decimal d = 1403.45433M;
int[] nDecimalBits = decimal.GetBits(d);  // four ints: low, mid, high 32 bits and the flags word
decimal d2 = new decimal(nDecimalBits);   // reconstruct the decimal from those bits
string s = d2.ToString();                 // "1403.45433"
It's not an algorithm as such, but I suppose it should help.
Decimal bits structure:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.
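For instance, here is a small snippet (my own, not from the documentation) that decodes that fourth element for the value used above, following the layout just described:
int[] bits = decimal.GetBits(1403.45433M);
int flags = bits[3];
int scale = (flags >> 16) & 0xFF;                            // bits 16 to 23: power of ten (here 5)
bool isNegative = (flags & unchecked((int)0x80000000)) != 0; // bit 31: sign (here false)
// bits[0], bits[1], bits[2] hold the 96-bit integer: 140345433, 0, 0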
Since you cannot use ToString(), you might want to check out how the Mono developers implemented this:
The entry point is NumberToString(string, decimal, IFormatProvider).
The interesting part is InitDecHexDigits(uint, ulong), which gets called like this:
InitDecHexDigits ((uint)bits [2], ((ulong)bits [1] << 32) | (uint)bits [0]);
and does the "bit juggling and shifting" thing to convert the three integers into binary coded decimals (_val1 to _val4), which can then be (trivially) converted into a string.
(Don't get confused by the fact that they call it "hex representation". It's binary coded decimal digits.)
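If you just want to see the mechanics without porting Mono's BCD juggling, here is a rough sketch of the same idea that leans on System.Numerics.BigInteger instead. This is my own approximation, not the Mono or BCL implementation, and it ignores culture-specific formatting:
using System;
using System.Numerics;

class DecimalBitsDemo
{
    // Hypothetical helper (not the Mono code): builds the string straight
    // from the raw bits, without calling decimal.ToString on the value.
    static string DecimalBitsToString(int[] bits)
    {
        int flags = bits[3];
        int scale = (flags >> 16) & 0xFF;                          // bits 16 to 23: power of ten
        bool negative = (flags & unchecked((int)0x80000000)) != 0; // bit 31: sign

        // Assemble the 96-bit unsigned integer from the three 32-bit parts.
        BigInteger value = ((BigInteger)(uint)bits[2] << 64)
                         | ((BigInteger)(uint)bits[1] << 32)
                         | (uint)bits[0];

        string digits = value.ToString();
        if (scale > 0)
        {
            digits = digits.PadLeft(scale + 1, '0');               // keep one digit before the point
            digits = digits.Insert(digits.Length - scale, ".");
        }
        return (negative ? "-" : "") + digits;
    }

    static void Main()
    {
        Console.WriteLine(DecimalBitsToString(decimal.GetBits(1403.45433M)));  // 1403.45433
    }
}
The real implementations avoid BigInteger for speed, but the way the four integers are interpreted is the same.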
Is there any major reason you can't just use the decimal constructor?
new decimal(nDecimalBits).ToString();