Why does integer division in C# return an integer and not a float?


Does anyone know why integer division in C# returns an integer and not a float? What is the idea behind it? (Is it only a legacy of C/C++?)

In C#:

    float x = 13 / 4;   // x == 3.0, not 3.25
8 Answers
  • 2020-11-21 05:10

    While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to a floating-point type, you are mistaken.

    First off, integer division is quite a bit faster, so if you only need a whole-number result, you'd want to use the more efficient operation.

    Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division; a sketch follows below.
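
    A minimal sketch of that idea (the names and structure here are mine, not the original answer's): converting a non-negative number to another base using only / and %:

    using System;
    using System.Collections.Generic;

    class BaseConversionSketch
    {
        // Returns the digits of a non-negative value in the given base,
        // most significant digit first, using only integer division and remainder.
        static List<int> ToDigits(int value, int radix)
        {
            var digits = new List<int>();
            do
            {
                digits.Insert(0, value % radix); // remainder: the current lowest digit
                value /= radix;                  // integer division: drop that digit
            } while (value > 0);
            return digits;
        }

        static void Main()
        {
            Console.WriteLine(string.Join("", ToDigits(13, 2))); // prints 1101
        }
    }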

    Because of these (and other related) reasons, integer division results in an integer. If you want the floating-point quotient of two integers, you just need to remember to cast one of them to a double, float, or decimal.

  • 2020-11-21 05:10

    Since you don't use any suffix, the literals 13 and 4 are interpreted as int:

    From the C# specification:

    If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.

    Thus, since both operands are of type int, integer division is performed:

    From the C# specification:

    For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.

    The predefined division operators are listed below. The operators all compute the quotient of x and y.

    Integer division:

    int operator /(int x, int y);
    uint operator /(uint x, uint y);
    long operator /(long x, long y);
    ulong operator /(ulong x, ulong y);
    

    And so the result is truncated (rounded toward zero):

    The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.

    If you do the following:

    int x = 13f / 4f;
    

    You'll receive a compiler error, since floating-point division (the / operator applied to 13f) results in a float, which cannot be converted to int implicitly.

    If you want the result stored in a float, you'll have to declare the variable as a float:

    float x = 13 / 4;
    

    Notice that you'll still be dividing integers; the integer result is then implicitly converted to float, so x will be 3.0. To actually perform floating-point division, declare the operands as float using the f suffix (13f, 4f), as shown below.
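
    For example (a small sketch of my own, not from the original answer), it is the operand types, not the declared result type, that determine which division runs:

    float a = 13 / 4;        // integer division first (3), then converted to float: 3.0
    float b = 13f / 4;       // one float operand promotes the other: 3.25
    float c = (float)13 / 4; // an explicit cast achieves the same: 3.25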

  • 2020-11-21 05:13

    It's just a basic operation.

    Remember when you learned to divide: in the beginning we solved 9 / 6 = 1 with remainder 3.

    9 / 6 == 1  //true
    9 % 6 == 3 // true
    

    The / operator in combination with the % operator is used to retrieve those two values.
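
    .NET also exposes this pair directly through Math.DivRem; a minimal usage example:

    int quotient = Math.DivRem(9, 6, out int remainder);
    Console.WriteLine(quotient);  // 1
    Console.WriteLine(remainder); // 3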

  • 2020-11-21 05:13

    Might be useful:

    double a = 5.0 / 2.0;
    Console.WriteLine(a);      // 2.5 (double / double: floating-point division)
    
    double b = 5 / 2;
    Console.WriteLine(b);      // 2 (int / int divides first, then the result is converted to double)
    
    int c = 5 / 2;
    Console.WriteLine(c);      // 2 (int / int: integer division)
    
    double d = 5f / 2f;
    Console.WriteLine(d);      // 2.5 (float / float: floating-point division)
    
  • 2020-11-21 05:21

    See the C# specification. There are three types of division operators:

    • Integer division
    • Floating-point division
    • Decimal division

    In your case we have integer division, with the following rules applied:

    The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
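
    For example (my own illustration, not part of the original answer), truncation toward zero shows up with negative operands:

    Console.WriteLine(13 / 4);   // 3
    Console.WriteLine(-13 / 4);  // -3 (truncated toward zero, not floored to -4)
    Console.WriteLine(-13 % 4);  // -1 (the remainder takes the sign of the dividend)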

    I think the reason C# uses this kind of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.

  • 2020-11-21 05:21

    The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, whose division produces int (Int32).

    var a = (byte)5 / (byte)2;  // 2 (Int32)
    var b = (short)5 / (byte)2; // 2 (Int32)
    var c = 5 / 2;              // 2 (Int32)
    var d = 5 / 2U;             // 2 (UInt32)
    var e = 5L / 2U;            // 2 (Int64)
    var f = 5L / 2UL;           // 2 (UInt64)
    var g = 5F / 2UL;           // 2.5 (Single/float)
    var h = 5F / 2D;            // 2.5 (Double)
    var i = 5.0 / 2F;           // 2.5 (Double)
    var j = 5M / 2;             // 2.5 (Decimal)
    var k = 5M / 2F;            // Not allowed
    

    There is no implicit conversion between the floating-point types and the decimal type, so division between them is not allowed. You have to cast explicitly and decide which one you want (decimal has more precision but a smaller range than the floating-point types); a sketch follows below.
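
    A small sketch of my own showing how the disallowed 5M / 2F case can be resolved with explicit casts, depending on which result type you want:

    var k1 = 5M / (decimal)2F; // 2.5M (decimal division)
    var k2 = (float)5M / 2F;   // 2.5F (floating-point division)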

    0 讨论(0)