This is more of a language design question than a programming question.
The following is an excerpt from JLS 15.19 Shift Operators:
If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x1f. The shift distance actually used is therefore always in the range 0 to 31, inclusive.

If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x3f. The shift distance actually used is therefore always in the range 0 to 63, inclusive.

Why is the shift distance masked like this instead of simply using the full value of the right-hand operand?
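For instance, a quick sketch of the behavior I am asking about (the printed values follow from that 0x1f mask):

    int x = 11;                      // binary 1011
    System.out.println(x << 32);     // prints 11: 32 & 0x1f == 0, so nothing moves
    System.out.println(x << 33);     // prints 22: 33 & 0x1f == 1
    System.out.println(-1 >>> 34);   // prints 1073741823: 34 & 0x1f == 2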
C# and Java define shifting as using only the low-order bits of the shift count, because that is what both SPARC and x86 shift instructions do. Java was originally implemented by Sun on SPARC processors, and C# by Microsoft on x86.
In contrast, C and C++ leave the behavior of a shift undefined when the shift count is not in the range 0..31 (for a 32-bit int), which allows any behavior. That's because when C was first implemented, different hardware handled such shifts differently. For example, on a VAX, shifting by a negative amount shifts in the other direction. So with C, the compiler can just use the hardware's shift instruction and let it do whatever it does.
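As a rough illustration of the difference (assuming a 32-bit int on both sides), the Java results below are pinned down by the mandated mask, while the equivalent C expressions are undefined behavior and may legitimately differ between compilers and machines:

    // Java: the shift distance is masked with 0x1f, so both results are fully specified.
    System.out.println(1 << 33);   // prints 2:           33 & 0x1f == 1
    System.out.println(1 << -1);   // prints -2147483648: -1 & 0x1f == 31
    // The equivalent C expressions, 1 << 33 and 1 << -1 on a 32-bit int, are undefined.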
Java and C# are not fully "high-level". They try very hard to be compilable into efficient code, in order to shine in micro-benchmarks. This is why they have "value types" such as int, instead of having, as the default integer type, true integers, which would be objects in their own right and not limited to a fixed range.
Hence, they mimic what the hardware does. They trim it a bit, in that they mandate masking, whereas C only allows it. Still, Java and C# are "medium-level" languages.
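To see what a "true integer" does instead, compare with java.math.BigInteger, which is an object not limited to a fixed width; its shiftLeft method uses the whole shift distance rather than masking it (a small sketch for contrast):

    import java.math.BigInteger;

    System.out.println(BigInteger.ONE.shiftLeft(100));  // 2^100 = 1267650600228229401496703205376
    System.out.println(1 << 100);                        // 16, because 100 & 0x1f == 4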
Because in most programming environments an integer is only 32 bits. So 5 bits (enough to express 32 distinct values) is already enough to shift the entire integer. Similar reasoning applies to a 64-bit long: 6 bits is all you need to shift across the entire value.
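Concretely, the mask depends on the promoted type of the left-hand operand: 0x1f (5 bits) for an int shift and 0x3f (6 bits) for a long shift. A small example:

    System.out.println(1  << 37);   // int shift:  37 & 0x1f == 5  -> prints 32
    System.out.println(1L << 70);   // long shift: 70 & 0x3f == 6  -> prints 64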
I can understand part of the confusion: if your right-hand operand is the result of a calculation that ends up with a value of 32 or more, you might expect the shift to just push out all the bits rather than apply a mask.
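For example (the clamp below is just a sketch, not anything the language provides), a computed distance of 35 does not clear the value; if you want the "shift everything out" behavior, you have to handle large distances yourself:

    int n = 3 + 32;                  // a computed shift distance of 35
    System.out.println(1 << n);      // prints 8: Java shifts by 35 & 0x1f == 3, not by 35

    // If you really want distances of 32 or more to shift every bit out, clamp explicitly:
    int shifted = (n >= 32) ? 0 : (1 << n);
    System.out.println(shifted);     // prints 0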