In Java, if we divide bytes, shorts or ints, we always get an int. If one of the operands is long, we'll get a long.
Whenever you define a variable of type byte, the value you assign to it should itself be of type byte.
That means it can only be a number in the range of a byte (-128 to 127).
However, assigning an expression such as byteA / byteB is not the same as assigning a literal such as 127.
This is the nature of Java: int is the default data type used for whole numbers.
By default, when you assign the result of an expression, Java evaluates that expression as an int, even though the result might be a valid value for a byte.
So when you define a byte and assign an expression to it, Java evaluates the expression as an int, which would normally require an int variable to hold it:
int byteAByteB = byteA / byteB;
However, you can get around that by casting the assigned expression, thus forcing Java to treat it as a byte.
byte byteAByteB = (byte) (byteA / byteB);
This way you're telling Java to treat the result as a byte. (The same can be done with short, etc.)
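For instance, a minimal sketch (the variable names are just illustrative) showing both the compile error and the cast:
byte byteA = 100;
byte byteB = 4;
// byte bad = byteA / byteB;          // does not compile: possible lossy conversion from int to byte
int asInt = byteA / byteB;            // fine: the division is performed as an int
byte asByte = (byte) (byteA / byteB); // fine: the cast narrows the int result back to a byte (value 25)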
The main reason is that machines usually only have add instructions for their native integer type (and for floats). This is why, in many languages, the smallest type used in an arithmetic expression is int (usually the type that corresponds in some way to the machine's native integer type).
For example, the i386 spec says:
ADD performs an integer addition of the two operands (DEST and SRC). The result of the addition is assigned to the first operand (DEST), and the flags are set accordingly. When an immediate byte is added to a word or doubleword operand, the immediate value is sign-extended to the size of the word or doubleword operand.
This means that internally any byte value is extended to an integer (or similar). After all, this is reasonable: the processor is 32/64 bits wide and performs its arithmetic at those sizes. Even if byte-sized arithmetic were possible, it is generally not considered useful.
The JVM spec says that for addition you have iadd, ladd, fadd and dadd. This just reflects the fact that the underlying machines usually behave this way. Any other choice would have been possible, probably at the price of degraded performance.
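You can see this for yourself (assuming the JDK's javap tool is available): compile a small class like the one below and disassemble it with javap -c. The byte addition compiles to iadd, with only an i2b instruction to narrow the result when it is stored back into a byte.
public class ByteAdd {
    public static byte add(byte a, byte b) {
        return (byte) (a + b); // bytecode: iload_0, iload_1, iadd, i2b, ireturn
    }
}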
byteA divided by byteB can't be anything but a byte, can it?
It can be other than a byte:
byte byteA = -128;
byte byteB = -1;
int div = byteA / byteB; // == 128, not a byte
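And if you force that result back into a byte with a cast (a small addition to the example), the value wraps around:
byte narrowed = (byte) (byteA / byteB); // -128: the int value 128 doesn't fit in a byte and wraps on narrowing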
I believe the justification is that a simple rule creates the least surprise. The result is always the wider of the two operand types (at minimum an int); it doesn't depend on the operation.
A better approach might be to always widen +, * and - unless explicitly (or perhaps implicitly) narrowed, i.e. never overflow or underflow unless you use a cast. / for example could always be a double or long operation unless cast.
But C, and thus Java, doesn't do this.
In short, it has one simple rule for dealing with this, for better or worse.
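For instance (a hypothetical illustration of that rule's downside): int times int is computed as an int and can silently overflow; you have to widen one operand yourself to get a long result:
int big = 1_000_000;
long wrong = big * big;        // -727379968: the multiplication overflows as an int before the assignment widens it
long right = (long) big * big; // 1000000000000: casting one operand makes the multiplication a long operation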
See my rant here http://vanillajava.blogspot.co.uk/2015/02/inconsistent-operation-widen-rules-in.html
I guess it's something that was adopted from C, possibly via C++.
In those languages, arguments are always promoted to int if they are of a narrower type than int. This happens before the expression is evaluated. Quite often it goes unnoticed, since the result of the operation is converted to the type to which it is being assigned, and a compiler may well optimise away all the intermediate steps if there are no side effects.
In Java, though, it's not too pernicious. (In C and C++ it can catch you out: a multiplication of two large unsigned shorts can overflow the int, the behaviour of which is undefined.)
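For example (the values are just illustrative): in Java the shorts are promoted to int before the multiplication, and since a Java short is at most 32767, the product always fits in an int:
short a = 32_000;
short b = 32_000;
int product = a * b; // 1024000000: both operands are promoted to int, so the multiplication cannot overflow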
Note that if one of the arguments is larger than an int, then the type of the expression is the largest of the argument types.
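For example (a small sketch): mixing a byte with a long makes the whole expression a long:
byte b = 10;
long l = 3L;
long quotient = b / l; // the expression has type long, because long is the widest operand type
// int bad = b / l;    // does not compile: possible lossy conversion from long to int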