Question
I have two values, one from user input and another from the DB.
var userinput = form["someInput"];
var valuefromDB = GetValue(someNumber);

public float? GetValue(int id)
{
    return (float?)db.table.Where(p => p.id == id).Select(p => p.Value).SingleOrDefault();
}
userinput holds the value "1" as a string, while valuefromDB holds the value 0.001 as a float.
So 1 / 0.001 should be 1000,
but my C# code gives me 999.999939 as the result:
var final = float.Parse(userinput) / valuefromDB;
When I have "2" as the user input value, the result is correct: 2000...
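A minimal repro of what I'm seeing (the 0.001f literal and the variable names just stand in for the value coming back from GetValue):

using System;

float userValue = float.Parse("1");
float dbValue = 0.001f;                   // nearest float to 0.001 is roughly 0.00100000005
float final2 = userValue / dbValue;
Console.WriteLine(final2.ToString("G9")); // 999.999939 instead of the expected 1000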
Answer 1:
That's because not all decimal numbers can be accurately represented in binary (which is the representation that float uses). The solution is to format the result to the desired number of decimal places, which will cause it to be rounded and displayed "correctly" as a consequence.
Update: To format a float for display, take a look at this MSDN reference page and this page of examples.
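For example, a quick sketch of rounding via formatting (the "F2" and "0.##" format strings are just illustrative choices for two decimal places):

using System;

float final = float.Parse("1") / 0.001f;   // stored as roughly 999.99994
Console.WriteLine(final.ToString("F2"));   // "1000.00" – fixed to 2 decimal places
Console.WriteLine(final.ToString("0.##")); // "1000"    – up to 2 decimal places
Console.WriteLine(Math.Round(final, 2));   // 1000      – round the value itself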
Answer 2:
For exact decimal precision, which float does not provide, use decimal instead.
See What is the difference between Decimal, Float and Double in C#?
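A minimal sketch of the same calculation with decimal (this assumes the DB value can be read back as decimal, i.e. GetValue would return decimal? instead of float?):

using System;

decimal userValue = decimal.Parse("1");
decimal dbValue = 0.001m;   // decimal stores 0.001 exactly in base 10
decimal final = userValue / dbValue;
Console.WriteLine(final);   // 1000

The trade-off is that decimal is slower and has a smaller range than float/double, but it avoids the binary rounding that caused the 999.999939 result.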
Source: https://stackoverflow.com/questions/11216095/float-float-strange-result