OK, so first things first:
This is a widening primitive type conversion, so it is legal. You can write:

    int foo() { return 'a'; /* character constant */ }
    long foo() { return 3; /* int constant */ }

But you CANNOT do:

    char foo() { return 1; /* int constant */ }
    int foo() { return 1L; /* long constant */ }
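As a sketch of how the narrowing direction can be made to compile, an explicit cast is required (class and method names here are purely illustrative):

```java
public class Narrowing {
    // Widening: char -> int compiles with no cast
    static int fromChar() { return 'a'; }

    // Narrowing: int -> char and long -> int need an explicit cast
    static char toChar() { return (char) 65; }
    static int fromLong() { return (int) 1L; }

    public static void main(String[] args) {
        System.out.println(fromChar()); // 97
        System.out.println(toChar());   // A
        System.out.println(fromLong()); // 1
    }
}
```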
Second: what it returns is NOT THE ASCII CODE AT ALL. Java does Unicode.
It just happens that when Java was created, Unicode only defined code points fitting in 16 bits; hence char was created as a 2-byte, unsigned primitive type (it is the only unsigned primitive type in Java), matching the then-called UCS-2 character coding (a 1-to-1 mapping between code units and code points).

However, Unicode afterwards went "wide" and code points outside the BMP (i.e., greater than U+FFFF) appeared; UCS-2 then became UTF-16, and a code point outside the BMP requires two chars (a leading surrogate and a trailing surrogate; in previous Unicode versions, and in the Java API, those were called the high and low surrogate, respectively). A char is therefore now a UTF-16 code unit.
It is still true, however, that for code points in the BMP, the char value exactly matches the code point.
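Both rules can be illustrated with the standard library (U+00E9 and U+1F600 are just example code points, one inside and one outside the BMP):

```java
public class CodeUnits {
    public static void main(String[] args) {
        // BMP code point: one char, whose numeric value IS the code point
        final char e = '\u00E9'; // é
        System.out.println((int) e); // 233, i.e. 0xE9

        // Code point outside the BMP: one code point, two chars
        final char[] units = Character.toChars(0x1F600);
        System.out.println(units.length);                        // 2
        System.out.println(Character.isHighSurrogate(units[0])); // true
        System.out.println(new String(units).codePointAt(0));    // 128512, i.e. 0x1F600
    }
}
```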
Now, in order to "fix" your program so that it accurately displays the "character value" (i.e. the code point) for each possible input, you could do this (Java 8):
    import java.util.OptionalInt;
    import java.util.Scanner;

    public static void main(String[] args) {
        final Scanner input = new Scanner(System.in);
        System.out.println("Enter a character to get value of it:");
        final String inputString = input.next();
        // Print -1 on an empty input
        final OptionalInt codepoint = inputString.codePoints().findFirst();
        System.out.println(codepoint.isPresent() ? codepoint.get() : -1);
    }
This will also handle code points outside the BMP.
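To see why, here is what the same logic returns for a character outside the BMP (U+1F600 is just an illustrative input, and the helper name is mine):

```java
import java.util.OptionalInt;

public class CodePointDemo {
    // Same logic as above: first code point, or -1 if the string is empty
    static int firstCodePointOrMinusOne(String s) {
        final OptionalInt codepoint = s.codePoints().findFirst();
        return codepoint.isPresent() ? codepoint.get() : -1;
    }

    public static void main(String[] args) {
        // U+1F600 is encoded as two chars (a surrogate pair)
        final String smiley = new String(Character.toChars(0x1F600));
        System.out.println(firstCodePointOrMinusOne(smiley)); // 128512 (0x1F600)
        System.out.println((int) smiley.charAt(0));           // 55357: just the leading surrogate
        System.out.println(firstCodePointOrMinusOne(""));     // -1
    }
}
```

Reading a char at a time (or calling charAt) would instead hand you half a surrogate pair for such input.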