In the pre-.NET world I always assumed that an int is faster than a byte, since that is how the processor works. Now it's a matter of habit to use int even when a byte would do.
Are you talking about storage space or operations on a byte? For storage, yes: a byte takes up less space than an int (1 byte vs. 4 bytes).
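For illustration, `sizeof` confirms the storage difference; a minimal sketch:

```csharp
using System;

// byte occupies 1 byte of storage, int occupies 4.
Console.WriteLine(sizeof(byte)); // 1
Console.WriteLine(sizeof(int));  // 4
```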
In terms of arithmetic operations on a byte, I don't have raw numbers; only a profiler can give you those. However, consider that arithmetic operations are not performed on raw byte instances. Instead the operands are promoted to int, and the operation is done on ints. This is why you have to cast explicitly in code like the following:
byte b1 = 4;
byte b2 = 6;
byte b3 = b1 + b2; // Does not compile because the type is int
So in the general case I think it's safe to say that arithmetic operations on an int are faster than those on a byte, simply because in the byte case you pay the (probably very small) cost of type promotion.
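To make the snippet above compile, you cast the int result back down to byte explicitly; a minimal sketch:

```csharp
using System;

byte b1 = 4;
byte b2 = 6;
// b1 + b2 is computed as int, so the result must be cast back to byte.
byte b3 = (byte)(b1 + b2);
Console.WriteLine(b3); // 10
```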
Your pre-.NET assumption was faulty: there have always been plenty of computer systems around that, while nominally "byte-addressable", have to set a single byte by reading a full word, masking in the new byte, and writing the whole word back, which is slower than just setting a full word. It depends on the internals of how the processor and memory are connected, not on the programmer-visible architecture.
Whether in .NET or native code, focus first on using the data types that are semantically correct for your application, not on trying to second-guess the system's architects. "Premature optimization is the root of all evil in programming", to quote Knuth quoting Hoare.
Same as any other platform. Why would .NET change this? The code still has to run on the same CPU, which has the same performance characteristics as always.
And that means you should still use int by default.
OK, I just opened the disassembly window. There is nothing there but a regular "mov byte".
So .NET/CLR does not add anything here. And all arithmetic operations are done on int values, so there is no difference between byte and int there.
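You can observe the promotion in the language itself: adding two bytes yields an int, not a byte. A minimal sketch:

```csharp
using System;

byte x = 1, y = 2;
// The + operator promotes both operands to int, so the result is System.Int32.
object result = x + y;
Console.WriteLine(result.GetType()); // System.Int32
```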
Unless you have finished your design and need to find clever ways to optimize, just use the type that fits your data.
If you need a counter or are doing basic math, an int is probably what you want; if you're working with binary data, go with a byte.
In the end, each type is optimized for its intended purpose, so you're better off spending your time on design instead of optimization.