In relation to: Convert Decimal to Double
Now, I've come across many questions relating to C#'s floating-point type called decimal, and …
C2X will standardize decimal floating point as _DecimalN: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2573.pdf
In addition, GCC implements decimal floating point as an extension; it currently supports 32-bit, 64-bit, and 128-bit decimal floats.
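If your compiler is a GCC build with decimal-floating-point support (an assumption; not every target ships it), a minimal sketch looks like this. The DD suffix denotes a _Decimal64 literal, and because printf has no portable conversion for decimal floats, the value is cast to double only for display:

#include <stdio.h>

int main(void)
{
    _Decimal64 a = 0.1DD;    /* DD suffix: _Decimal64 literal (GCC extension) */
    _Decimal64 b = 0.2DD;
    _Decimal64 sum = a + b;  /* exact decimal arithmetic: 0.1 + 0.2 == 0.3 */

    printf("sum == 0.3DD: %s\n", sum == 0.3DD ? "yes" : "no");  /* prints: yes */
    printf("as double: %.17g\n", (double)sum);  /* display only; the conversion may round */
    return 0;
}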
As others pointed out, there's nothing like .NET's decimal in the C standard(s), but if you're working on Windows and have the Windows SDK, it is defined:
DECIMAL structure (wtypes.h)
Represents a decimal data type that provides a sign and scale for a number (as in coordinates.)
Decimal variables are stored as 96-bit (12-byte) unsigned integers scaled by a variable power of 10. The power of 10 scaling factor specifies the number of digits to the right of the decimal point, and ranges from 0 to 28.
typedef struct tagDEC {
  USHORT wReserved;
  union {
    struct {
      BYTE scale;
      BYTE sign;
    } DUMMYSTRUCTNAME;
    USHORT signscale;
  } DUMMYUNIONNAME;
  ULONG Hi32;
  union {
    struct {
      ULONG Lo32;
      ULONG Mid32;
    } DUMMYSTRUCTNAME2;
    ULONGLONG Lo64;
  } DUMMYUNIONNAME2;
} DECIMAL;
DECIMAL is used to represent an exact numeric value with a fixed precision and fixed scale.
The origin of this type is Windows' COM/OLE Automation (introduced for VB/VBA/macros, etc., so it predates .NET, which has very good COM automation support), officially documented here: [MS-OAUT]: OLE Automation Protocol, 2.2.26 DECIMAL
It's also one of the VARIANT types (VT_DECIMAL). On the x86 architecture, its size fits right into a VARIANT (16 bytes).
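As a small illustration (Windows only, link against oleaut32), the OLE Automation helpers VarDecFromStr, VarDecAdd, and VarBstrFromDec can be used to do exact decimal arithmetic on this structure; error handling is omitted for brevity:

#include <windows.h>
#include <oleauto.h>
#include <stdio.h>

int main(void)
{
    DECIMAL a, b, sum;
    BSTR text = NULL;

    /* Parse decimal literals exactly; no binary rounding of 0.1 / 0.2 */
    VarDecFromStr(L"0.1", LOCALE_INVARIANT, 0, &a);
    VarDecFromStr(L"0.2", LOCALE_INVARIANT, 0, &b);

    VarDecAdd(&a, &b, &sum);                 /* exact decimal addition */
    VarBstrFromDec(&sum, LOCALE_INVARIANT, 0, &text);

    wprintf(L"0.1 + 0.2 = %ls\n", text);     /* prints 0.3 */
    SysFreeString(text);
    return 0;
}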
So in short: yes, there is support for decimal floating-point values and arithmetic in the standard C language. Just check out phuclv's comment and S.S. Anne's answer.
In the C programming language, as others have commented, there's no such thing as a Decimal type, nor are there types implemented like it. The simplest type that comes close would be double, which is most commonly implemented as an IEEE-754-compliant 64-bit floating-point type. It contains a 1-bit sign, an 11-bit exponent, and a 52-bit mantissa/fraction. The following image (from Wikipedia) represents it quite well:
So you have the following format (for normal numbers):
value = (-1)^sign * 1.fraction * 2^(exponent - 1023)
A more detailed explanation can be read here, but you can see that the exponent part is a power of two, which means that there will be imprecision when dealing with division and multiplication by ten. A simple explanation is that dividing by anything that isn't a power of two is sure to repeat digits indefinitely in base 2. Example: 1/10 = 0.1 (in base 10) = 0.00011001100110011... (in base 2). And, because computers can't store an unlimited number of digits, your operations will have to be truncated/approximated.
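To see this concretely, a small C program exposes the rounding that double inherits from its base-2 representation:

#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;

    /* %.17g prints enough digits to reveal the binary rounding error */
    printf("0.1 + 0.2 = %.17g\n", a + b);                          /* 0.30000000000000004 */
    printf("equal to 0.3? %s\n", (a + b == 0.3) ? "yes" : "no");   /* no */
    return 0;
}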
In the case of C#'s Decimal, from the documentation:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction.
This last part is important, because instead of being a multiplication by a power of two, it is a division by a power of ten. So you have the following format:
value = (-1)^sign * integer / 10^N
where integer is the 96-bit integer and N, the scaling factor, ranges from 0 to 28.
As you can clearly see, this is a completely different implementation from the one above!
For instance, if you wanted to divide by a power of 10, you could do that exactly, because that just involves increasing the exponent part (N). You have to be aware of the limitation of the numbers that can be represented by Decimal, though, which tops out at a measly 7.922816251426434e+28, whereas double can go up to 1.79769e+308.
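For a rough side-by-side of those two ranges (a sketch; the Decimal limit of 2^96 - 1 follows from the 96-bit integer described above):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 2^96 is a power of two, so ldexp(1.0, 96) computes it exactly as a double;
       subtracting 1 is absorbed by rounding, but the magnitude is what matters here. */
    printf("Decimal max ~ %.16g\n", ldexp(1.0, 96) - 1.0);  /* ~7.922816251426434e+28 */
    printf("double  max = %g\n", DBL_MAX);                  /* ~1.79769e+308 */
    return 0;
}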
Given that there are no equivalents (yet) in C to Decimal, you may wonder, "What do I do?". Well, it depends. First off, is it really important for you to use a Decimal type? Can't you use a double? To answer that question, it's helpful to know why that type was created in the first place. Again, from Microsoft's documentation:
The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors
And, in the very next sentence:
The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding
So you shouldn't think of Decimal as having "infinite precision", just as being a more appropriate type for calculations that generally need to be made in the decimal system (such as financial ones, as stated above).
If you still want a Decimal data type in C, you'd have to work on developing a library supporting addition, subtraction, multiplication, and so on, exposed as function calls rather than operators, because C doesn't support operator overloading. Also, it still wouldn't have hardware support (e.g. from the x64 instruction set), so all of your operations would be slower than those on double, for example. Finally, if you want something that supports a Decimal in other languages (as in your final question), you may look into the Decimal TR for C++.
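To give an idea of what that library route involves, here is a deliberately tiny sketch. The names, layout, and single shared scale are all illustrative, nothing like the full 96-bit .NET design, and negative values and overflow aren't handled; it mainly shows the function-call style that the lack of operator overloading forces on you:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t units;   /* value = units / 10^scale */
    uint8_t scale;   /* digits to the right of the decimal point */
} dec64;

/* Addition in this toy model assumes both operands share the same scale. */
static dec64 dec64_add(dec64 a, dec64 b)
{
    dec64 r = { a.units + b.units, a.scale };
    return r;
}

static void dec64_print(dec64 d)
{
    int64_t pow10 = 1;
    for (uint8_t i = 0; i < d.scale; i++) pow10 *= 10;
    printf("%" PRId64 ".%0*" PRId64 "\n",
           d.units / pow10, d.scale, d.units % pow10);
}

int main(void)
{
    dec64 a = { 10, 1 };   /* 1.0 */
    dec64 b = { 2, 1 };    /* 0.2 */
    dec64_print(dec64_add(a, b));   /* prints 1.2 exactly */
    return 0;
}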
The Decimal type in C# has a precision of 28-29 significant digits and a size of 16 bytes. There is not even a close equivalent in C. In Java, the BigDecimal data type is the closest to C#'s decimal. C#'s decimal gives you numbers of the form:
+/- someInteger / 10 ^ someExponent
where someInteger is a 96-bit unsigned integer and someExponent is an integer between 0 and 28.
Is Java's BigDecimal the closest data type corresponding to C#'s Decimal?