byte + byte = int… why?

长情又很酷 2020-11-22 04:33

Looking at this C# code:

byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

The result of x + y is an int, so it can't be assigned to a byte without an explicit cast. Why does adding two bytes produce an int?

16 answers
  • 2020-11-22 05:08

    I remember once reading something from Jon Skeet (I can't find it now; I'll keep looking) about how byte doesn't actually overload the + operator. In fact, when adding two bytes as in your sample, each byte is implicitly converted to an int, and the result of that addition is naturally an int. As to WHY it was designed this way, I'll wait for Jon Skeet himself to post :)

    EDIT: Found it! Great info about this very topic here.
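
    A minimal sketch (my addition, not from Skeet's post) that makes the promotion visible:

    using System;

    class Promotion
    {
        static void Main()
        {
            byte x = 1;
            byte y = 2;

            // Both operands are implicitly widened to int before the addition,
            // so the static type of (x + y) is int:
            Console.WriteLine((x + y).GetType()); // System.Int32

            byte z = (byte)(x + y); // narrowing back to byte needs an explicit cast
            Console.WriteLine(z);   // 3
        }
    }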

  • 2020-11-22 05:11

    This is because of overflow and carries.

    If you add two 8-bit numbers, the result can overflow into a 9th bit.

    Example:

      1111 1111
    + 0000 0001
    -----------
    1 0000 0000
    

    I don't know for sure, but I assume that ints, longs, and doubles are given more space because they are fairly large as it is. Also, their sizes are multiples of 4 bytes, which is more efficient for computers to handle because the internal data bus is 4 bytes, or 32 bits, wide (64-bit buses are becoming more prevalent now). Byte and short are a little less efficient to operate on, but they can save space.
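
    A small C# demonstration of that carry (my own sketch, not part of the original answer):

    using System;

    class CarryDemo
    {
        static void Main()
        {
            byte a = 255; // 1111 1111
            byte b = 1;   // 0000 0001

            int sum = a + b;          // operands are promoted to int, so the carry survives
            byte wrapped = (byte)sum; // truncating back to 8 bits drops the 9th bit

            Console.WriteLine(sum);     // 256
            Console.WriteLine(wrapped); // 0
        }
    }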

  • 2020-11-22 05:12

    I've tested performance between byte and int.
    With int values:

    using System;
    using System.Diagnostics;

    class Program
    {
        private int a, b, c, d, e, f;

        public Program()
        {
            a = 1;
            b = 2;
            c = (a + b);
            d = (a - b);
            e = (b / a);
            f = (c * b);
        }

        static void Main(string[] args)
        {
            int max = 10000000;
            DateTime start = DateTime.Now;
            Program[] tab = new Program[max];

            for (int i = 0; i < max; i++)
            {
                tab[i] = new Program();
            }
            DateTime stop = DateTime.Now;

            Debug.WriteLine(stop.Subtract(start).TotalSeconds);
        }
    }
    

    With byte values:

    using System;
    using System.Diagnostics;

    class Program
    {
        private byte a, b, c, d, e, f;

        public Program()
        {
            a = 1;
            b = 2;
            // byte arithmetic yields int, so every result must be cast back
            c = (byte)(a + b);
            d = (byte)(a - b);
            e = (byte)(b / a);
            f = (byte)(c * b);
        }

        static void Main(string[] args)
        {
            int max = 10000000;
            DateTime start = DateTime.Now;
            Program[] tab = new Program[max];

            for (int i = 0; i < max; i++)
            {
                tab[i] = new Program();
            }
            DateTime stop = DateTime.Now;

            Debug.WriteLine(stop.Subtract(start).TotalSeconds);
        }
    }
    

    Here are the results:

    byte : 3.57 s / 157 MB, 3.71 s / 171 MB, 3.74 s / 168 MB, with CPU ~= 30%
    int  : 4.05 s / 298 MB, 3.92 s / 278 MB, 4.28 s / 294 MB, with CPU ~= 27%

    Conclusion:
    byte uses more CPU, but it costs less memory and is faster (perhaps because there are fewer bytes to allocate).
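
    As an aside (my addition, not from the original answer), System.Diagnostics.Stopwatch is generally more reliable than DateTime.Now for this kind of micro-benchmark; a minimal sketch of the same timing loop, reusing the Program class above:

    using System;
    using System.Diagnostics;

    class Bench
    {
        static void Main()
        {
            int max = 10000000;
            var sw = Stopwatch.StartNew(); // high-resolution timer

            Program[] tab = new Program[max];
            for (int i = 0; i < max; i++)
            {
                tab[i] = new Program();
            }

            sw.Stop();
            Console.WriteLine(sw.Elapsed.TotalSeconds);
        }
    }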

  • 2020-11-22 05:18

    From .NET Framework code:

    // bytes
    private static object AddByte(byte Left, byte Right)
    {
        // the operands are promoted, so the sum is computed in a wider type
        short num = (short) (Left + Right);
        if (num > 0xff)
        {
            // the sum no longer fits in a byte: return the widened value
            return num;
        }
        return (byte) num;
    }

    // shorts (int16)
    private static object AddInt16(short Left, short Right)
    {
        int num = Left + Right;
        if ((num <= 0x7fff) && (num >= -32768))
        {
            return (short) num;
        }
        // the sum overflowed the short range: return the widened value
        return num;
    }
    

    This can be simplified with .NET 3.5 and above, using an extension method:

    public static class Extensions 
    {
        public static byte Add(this byte a, byte b)
        {
            return (byte)(a + b);
        }
    }
    

    Now you can do:

    byte a = 1, b = 2, c;
    c = a.Add(b);
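
    Note (my addition): the cast inside Add silently wraps on overflow. If you would rather fail loudly, a checked variant throws instead:

    public static byte AddChecked(this byte a, byte b)
    {
        // checked arithmetic throws OverflowException instead of wrapping
        checked { return (byte)(a + b); }
    }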
    

  • 2020-11-22 05:22

    In addition to all the other great comments, I thought I would add one little tidbit. A lot of comments have wondered why int, long, and pretty much every other numeric type doesn't also follow this rule and return a "bigger" type in response to arithmetic.

    A lot of answers have had to do with performance (well, 32 bits is faster than 8 bits). In reality, an 8-bit number is still a 32-bit number to a 32-bit CPU. Even if you add two bytes, the chunk of data the CPU operates on is going to be 32 bits wide regardless, so adding ints is not going to be any "faster" than adding two bytes; it's all the same to the CPU. Now, adding two ints WILL be faster than adding two longs on a 32-bit processor, because adding two longs requires more micro-ops, since you're working with numbers wider than the processor's word.

    I think the fundamental reason for making byte arithmetic result in ints is pretty clear and straightforward: 8 bits just doesn't go very far! :D With 8 bits, you have an unsigned range of 0-255. That's not a whole lot of room to work with; the likelihood that you are going to run into a byte's limitations is VERY high when using bytes in arithmetic. However, the chance that you're going to run out of bits when working with ints, longs, doubles, etc. is significantly lower... low enough that we very rarely encounter the need for more.

    Automatic conversion from byte to int is logical because the scale of a byte is so small. Automatic conversion from int to long, float to double, etc. is not logical because those numbers have significant scale.

  • 2020-11-22 05:24

    This is for the most part my answer that pertains to this topic, submitted first to a similar question here.

    All operations on integral numbers smaller than Int32 are widened to 32 bits before calculation by default. The reason the result is Int32 is simply that it is left as-is after the calculation. If you check the MSIL arithmetic opcodes, the only integral numeric types they operate on are Int32 and Int64. It's "by design".
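
    You can see this for yourself by compiling a snippet like the one below and inspecting the output with a disassembler such as ildasm (my illustration, not from the original answer):

    byte x = 1;
    byte y = 2;

    // In the generated IL, both locals are loaded onto the evaluation
    // stack as 32-bit values, and a single 'add' opcode produces an
    // int32 result; there is no 8-bit add instruction at the IL level.
    int z = x + y;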

    If you want the result back in Int16 format, it is irrelevant whether you perform the cast in code or the compiler (hypothetically) emits the conversion "under the hood".

    For example, to do Int16 arithmetic:

    short a = 2, b = 3;
    
    short c = (short) (a + b);
    

    The two numbers would expand to 32 bits, get added, and then be truncated back to 16 bits, which is how MS intended it to be.

    The advantage of using short (or byte) is primarily storage in cases where you have massive amounts of data (graphical data, streaming, etc.)
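
    For instance (my own illustration), with bulk data the element type dominates memory use even though the arithmetic itself is still done in 32 bits:

    // One million audio samples:
    short[] samples16 = new short[1000000]; // ~2 MB of storage
    int[]   samples32 = new int[1000000];   // ~4 MB of storage

    // Element arithmetic is still 32-bit, so a cast is needed:
    samples16[0] = (short)(samples16[1] + samples16[2]);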
