byte + byte = int… why?

长情又很酷 2020-11-22 04:33

Looking at this C# code:

byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

The result of x + y is an int, so it cannot be assigned back to a byte without an explicit cast. Why does adding two bytes produce an int and not a byte?

16 answers
  • 2020-11-22 05:25

    I thought I had seen this somewhere before. From this article, The Old New Thing:

    Suppose we lived in a fantasy world where operations on 'byte' resulted in 'byte'.

    byte b = 32;
    byte c = 240;
    int i = b + c; // what is i?
    

    In this fantasy world, the value of i would be 16! Why? Because the two operands to the + operator are both bytes, so the sum "b+c" is computed as a byte, which results in 16 due to integer overflow. (And, as I noted earlier, integer overflow is the new security attack vector.)

    EDIT: Raymond is defending, essentially, the approach C and C++ took originally. In the comments, he defends the fact that C# takes the same approach, on the grounds of language backward compatibility.

  • 2020-11-22 05:26

    In terms of "why it happens at all" it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.

    I believe it's basically for the sake of performance. Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.

    I think this is mentioned in one of the annotated C# standards. Looking...

    EDIT: Annoyingly, I've now looked through the annotated ECMA C# 2 spec, the annotated MS C# 3 spec and the annotated CLI spec, and none of them mention this as far as I can see. I'm sure I've seen the reason given above, but I'm blowed if I know where. Apologies, reference fans :(

  • 2020-11-22 05:28

    This was probably a practical decision on the part of the language designers. After all, an int is an Int32, a 32-bit signed integer. Whenever you do an integer operation on a type smaller than int, most 32-bit CPUs will convert it to a 32-bit signed int anyway. That, combined with the likelihood of overflowing small integers, probably sealed the deal. It saves you from the chore of continuously checking for over/under-flow, and when the final result of an expression on bytes would be in range, you get a correct result even if some intermediate value was out of range.

    Another thought: The over/under-flow on these types would have to be simulated, since it wouldn't occur naturally on the most likely target CPUs. Why bother?

  • 2020-11-22 05:31

    The third line of your code snippet:

    byte z = x + y;
    

    actually means

    byte z = (int) x + (int) y;
    

    So there is no + operator defined on bytes: the operands are first converted to int, and the sum of two ints is itself a (32-bit) int.

  • 2020-11-22 05:31

    Addition is not defined for bytes, so they are cast to int for the addition. The same is true of most arithmetic operations on bytes. (This is how it worked in older languages, and I'm assuming it still holds today.)

  • 2020-11-22 05:32

    C#

    ECMA-334 states that addition is only defined as legal on int+int, uint+uint, long+long and ulong+ulong (ECMA-334 14.7.4). As such, these are the candidate operations to be considered with respect to 14.4.2. Because there are implicit casts from byte to int, uint, long and ulong, all the addition function members are applicable function members under 14.4.2.1. We have to find the best implicit cast by the rules in 14.4.2.3:

    Casting (C1) to int (T1) is better than casting (C2) to uint or ulong (T2) because:

    • If T1 is int and T2 is uint, or ulong, C1 is the better conversion.

    Casting (C1) to int (T1) is better than casting (C2) to long (T2) because there is an implicit cast from int to long:

    • If an implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists, C1 is the better conversion.

    Hence the int+int function is used, which returns an int.

    Which is all a very long way to say that it's buried very deep in the C# specification.

    CLI

    The CLI operates only on 6 types (int32, native int, int64, F, O, and &). (ECMA-335 partition 3 section 1.5)

    Byte (int8) is not one of those types, and is automatically coerced to an int32 before the addition. (ECMA-335 partition 3 section 1.6)
