Looking at this C# code:
byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'
The result of x + y is an int, so it cannot be assigned to a byte without an explicit cast.
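The usual fix is an explicit cast back to byte. A minimal sketch (reusing the variables from the snippet above); note that the cast silently truncates on overflow unless it happens in a checked context:

byte x = 1;
byte y = 2;
byte z = (byte)(x + y); // the addition is done on ints; the cast narrows back, z == 3

byte a = 200, b = 100;
byte wrapped = (byte)(a + b); // 300 doesn't fit in a byte: truncates to 300 % 256 == 44
// byte boom = checked((byte)(a + b)); // the same cast in a checked context throws OverflowException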
The answers indicating some inefficiency in adding bytes and truncating the result back to a byte are incorrect. x86 processors have instructions specifically designed for integer operations on 8-bit quantities.
In fact, for x86/x64 processors, performing 32-bit or 16-bit operations is less efficient than 64-bit or 8-bit operations because of the operand prefix byte that has to be decoded. On 32-bit machines, performing 16-bit operations entails the same penalty, but there are still dedicated opcodes for 8-bit operations.
Many RISC architectures have similarly efficient native word/byte instructions. Those that don't generally have a store-and-convert-to-signed-value-of-some-bit-length operation.
In other words, this decision must have been based on a perception of what the byte type is for, not on any underlying inefficiency of the hardware.
From the C# language spec, §7.2.6.2 (Binary numeric promotions): binary numeric promotion converts both operands to int when the operands don't match any of the other listed cases. My guess is they didn't overload the + operator to take byte as a parameter but wanted it to act somewhat normally, so they just use the int data type (see the C# language Spec).
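A quick way to see the promotion in action (a small sketch; GetType here just reveals the runtime type of the result that var inferred):

byte a = 10, b = 20;
var sum = a + b; // binary numeric promotion: both operands become int
Console.WriteLine(sum.GetType()); // prints System.Int32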
My suspicion is that C# is actually calling the operator+ defined on int (which returns an int unless you are in a checked block) and implicitly casting both of your bytes/shorts to ints. That's why the behavior appears inconsistent.
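For illustration, a sketch of how the checked context changes the overflow behavior of the int operator at runtime (the values are arbitrary):

int big = int.MaxValue;
Console.WriteLine(unchecked(big + 1)); // default behavior: wraps to int.MinValue

try
{
    Console.WriteLine(checked(big + 1)); // the same addition, but now it throws
}
catch (OverflowException)
{
    Console.WriteLine("checked addition overflowed");
}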
I think it's a design decision about which operation was more common. If byte + byte = byte, many more people would probably be bothered by having to cast to int whenever an int is required as the result.
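To illustrate the trade-off, a hypothetical sketch (the readings array is made up): with the rules as they are, summing bytes into an int accumulator needs no casts, whereas byte + byte = byte would wrap silently at 255:

byte[] readings = { 200, 180, 150 }; // hypothetical data
int total = 0;
foreach (byte r in readings)
    total += r; // each r is promoted to int, so no cast and no wraparound

Console.WriteLine(total); // 530, a value that could never fit in a byte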