I am having to interface some C# code with SQL Server and I want to have exactly as much precision when storing a value in the database as I do with my C# code. I use one of
This chart may help you.
Maximum precision for numerics and decimals in SQL Server is 38 (see here).
The short answer is that C#'s decimal is a 128-bit floating-point decimal type, and it does not have an exact equivalent in SQL Server.
Perhaps in a future SQL Server version Microsoft will align the two products and support a straightforward [CLR / C# decimal] equivalent, a 128-bit floating-point decimal that can be persisted directly on the database server.
I ran into this analysis problem when dealing with very large and very small quantities. In my case, all the quantities being compared within my solutions fit in the 64-bit range that [C# double / SQL float] can handle, so I chose [C# double / SQL float] for quantities and [C# decimal / SQL money] for the financial parts of my solutions.
You can use the SQL Server decimal type (documentation here). That allows you to specify a precision and scale for the numbers in the database, which you can set to match what you use in your C# code.
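As a minimal sketch of what that looks like from the C# side, you can set the precision and scale on the parameter to match the column definition; the `Prices` table, `Amount` column, and `decimal(28, 10)` choice below are made up for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

class Example
{
    static void Save(string connectionString, decimal amount)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Prices (Amount) VALUES (@amount)", conn))
        {
            // Match the parameter's precision/scale to the column definition
            // so the value is not silently rounded by the driver.
            var p = cmd.Parameters.Add("@amount", SqlDbType.Decimal);
            p.Precision = 28;
            p.Scale = 10;
            p.Value = amount;

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```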
Your question, however, seems to be about mapping the .NET decimal type to the SQL Server decimal type. And unfortunately, that's not straightforward. The SQL Server decimal, at maximum precision, covers the range from -10^38 + 1 to 10^38 - 1. In .NET, however, a decimal can cover anywhere from ±1.0 × 10^-28 to ±7.9 × 10^28, depending on the number.
The reason is that, internally, .NET treats decimals much like floats: it stores each value as a pair of numbers, a mantissa and an exponent. If you have an especially large mantissa, you're going to lose some precision. Essentially, .NET lets you mix numbers with both high and low scales in the same variable; SQL Server requires you to choose the scale for a given column ahead of time.
In short, you're going to need to decide what the valid range of decimal values is for your purposes, and ensure both your program and the database can handle it. You need to do that anyway for your program: if you're mixing values like 10^-28 with 7.9 × 10^28, you can't rely on the decimal type in .NET either.
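One way to enforce that decision, as a sketch rather than the only approach, is to round and range-check in code before the value reaches the database; the `decimal(28, 10)` column assumed here is just an example:

```csharp
using System;

static class DecimalGuard
{
    // decimal(28, 10) leaves at most 18 digits before the decimal point.
    private const decimal Max = 999999999999999999.9999999999m;

    public static decimal ToDatabaseScale(decimal value)
    {
        if (value > Max || value < -Max)
            throw new OverflowException($"{value} does not fit in decimal(28, 10).");

        // Round to the column's scale explicitly, so the rounding rule is
        // chosen in code rather than left to the driver or the server.
        return Math.Round(value, 10, MidpointRounding.ToEven);
    }
}
```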
C#'s decimal type is 128 bits (16 bytes) in size. In SQL Server, a decimal with a precision between 20 and 28 uses 13 bytes of storage, and one with a precision between 29 and 38 uses 17 bytes. For your database to be able to store the whole possible range of values of .NET's decimal type, you would have to use SQL Server's decimal type with a precision of at least 29. Since any precision between 29 and 38 uses the same 17 bytes of storage, it is in your interest to choose a precision of 38.
C#'s decimal type lets you use it without specifying in advance how much of the precision goes before the decimal separator and how much after. You don't have that luxury in SQL Server. The choice of scale depends entirely on your requirements, but you do have to make it, because if you don't specify a scale it defaults to 0 and you won't be able to store fractional values at all.
The above holds if you need to be sure you can store the whole .NET decimal range.
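When deciding on that scale, it can help to look at what scale your actual values carry. A small illustrative helper, not part of any library, reads the scale out of a System.Decimal:

```csharp
static class DecimalInspector
{
    // The scale lives in bits 16-23 of the flags element returned by GetBits.
    public static byte GetScale(decimal value)
    {
        int flags = decimal.GetBits(value)[3];
        return (byte)((flags >> 16) & 0xFF);
    }
}

// e.g. DecimalInspector.GetScale(123.4500m) == 4, DecimalInspector.GetScale(1m) == 0
```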
If you use C#'s decimal for money in your app, I think decimal(19, 4) will be enough.
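If you happen to be using Entity Framework Core, that column type can be pinned in the model configuration. This is only a sketch; the `Order`/`Total` entity is a hypothetical example, not something from the question:

```csharp
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Pin the column type so SQL Server and C# agree on precision and scale.
        modelBuilder.Entity<Order>()
            .Property(o => o.Total)
            .HasColumnType("decimal(19, 4)");
    }
}
```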
Fluent NHibernate maps System.Decimal as decimal(19, 5).
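If I recall correctly, that default can be overridden per property in the class map. The sketch below assumes a hypothetical `Invoice` class with an `Amount` property:

```csharp
using FluentNHibernate.Mapping;

public class Invoice
{
    public virtual int Id { get; set; }
    public virtual decimal Amount { get; set; }
}

public class InvoiceMap : ClassMap<Invoice>
{
    public InvoiceMap()
    {
        Id(x => x.Id);

        // Override the decimal(19, 5) default for this column.
        Map(x => x.Amount)
            .Precision(19)
            .Scale(4);
    }
}
```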
Or you might also want to consider the MONEY data type, a fixed-point type with exactly four decimal places stored in 8 bytes.
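As a rough sketch of using MONEY from C# (the `Payments` table and `Amount` column are made up for illustration), the parameter type does the mapping:

```csharp
using System.Data;
using System.Data.SqlClient;

static class MoneyExample
{
    // MONEY holds values up to roughly ±922,337,203,685,477.58,
    // so it only suits amounts within that range.
    public static void SavePayment(SqlConnection conn, decimal amount)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO Payments (Amount) VALUES (@amount)", conn))
        {
            // SqlDbType.Money maps the C# decimal onto the MONEY column.
            cmd.Parameters.Add("@amount", SqlDbType.Money).Value = amount;
            cmd.ExecuteNonQuery();
        }
    }
}
```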