Question
I recently did some profiling on some code and found that the largest CPU usage was being consumed by calls to BitConverter such as:
return BitConverter.ToInt16(new byte[] { byte1, byte2 }, 0);
when switching to something like:
return (short)(byte1 << 8 | byte2);
I noticed a huge improvement in performance.
My question is why is using BitConverter so much slower? I would have assumed that BitConverter was essentially doing the same kind of bit shifting internally.
Answer 1:
The call to BitConverter involves the allocation and initialisation of a new object. And then a method call. And inside the method call is parameter validation.
The bitwise operations can be compiled right down to a handful of CPU opcodes to do a shift followed by the or.
The latter will surely be faster because it removes all of the overhead of the former.
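That overhead is easy to see in a quick measurement. The following is a minimal micro-benchmark sketch (the class name, byte values, and iteration count are my own, not from the original post) comparing the two approaches with Stopwatch; exact timings will vary by machine and runtime.

```csharp
using System;
using System.Diagnostics;

class BitConvertBench
{
    static void Main()
    {
        const int iterations = 10_000_000;
        byte b1 = 0x12, b2 = 0x34;
        long sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Allocates a fresh array and calls into BitConverter every iteration.
            sink += BitConverter.ToInt16(new byte[] { b1, b2 }, 0);
        }
        sw.Stop();
        Console.WriteLine($"BitConverter: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            // A shift and an or: no allocation, no method-call overhead.
            sink += (short)(b1 << 8 | b2);
        }
        sw.Stop();
        Console.WriteLine($"Bit shifting: {sw.ElapsedMilliseconds} ms");

        // Print the accumulator so the loops cannot be optimised away.
        Console.WriteLine(sink);
    }
}
```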
Answer 2:
You can look at the reference source and see that it has a few extra things to worry about, notably parameter validation and endianness worries:
public static unsafe short ToInt16(byte[] value, int startIndex) {
    if (value == null) {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.value);
    }
    if ((uint) startIndex >= value.Length) {
        ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.startIndex, ExceptionResource.ArgumentOutOfRange_Index);
    }
    if (startIndex > value.Length - 2) {
        ThrowHelper.ThrowArgumentException(ExceptionResource.Arg_ArrayPlusOffTooSmall);
    }
    Contract.EndContractBlock();

    fixed (byte* pbyte = &value[startIndex]) {
        if (startIndex % 2 == 0) { // data is aligned
            return *((short*) pbyte);
        }
        else {
            if (IsLittleEndian) {
                return (short)((*pbyte) | (*(pbyte + 1) << 8));
            }
            else {
                return (short)((*pbyte << 8) | (*(pbyte + 1)));
            }
        }
    }
}
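Note that the aligned fast path casts the pointer directly, so the result depends on machine byte order. That means the two snippets in the question are not even equivalent on a little-endian machine: BitConverter treats the first byte as the low byte, while the manual shift treats it as the high byte. A short demo (the values and class name here are illustrative, not from the original post):

```csharp
using System;

class EndiannessDemo
{
    static void Main()
    {
        byte byte1 = 0x12, byte2 = 0x34;

        // BitConverter reads the bytes in machine order; the manual shift
        // always puts byte1 in the high byte.
        short viaConverter = BitConverter.ToInt16(new byte[] { byte1, byte2 }, 0);
        short viaShift = (short)(byte1 << 8 | byte2);

        Console.WriteLine(viaConverter.ToString("X4")); // 3412 on little-endian
        Console.WriteLine(viaShift.ToString("X4"));     // 1234 on any machine
    }
}
```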
Source: https://stackoverflow.com/questions/22355107/why-is-bitconverter-slower-than-doing-the-bitwise-operations-directly