This works fine:
int foo = bar.charAt(1) - '0';
Yet this doesn't, because bar.charAt(x) returns a char:
int foo = bar.charAt(1);
Your code may compile without error and run without throwing an exception, but converting between chars and ints like this is bad practice. First, it makes the code confusing, which leads to maintenance headaches down the road. Second, clever "tricks" can prevent the compiler from optimizing the bytecode. One of the best ways to get fast code is to write dumb code (i.e., not clever code).
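If readability is the concern, the JDK offers more explicit alternatives. A minimal sketch (assuming bar is a String with the digit at index 1, as in the question):

public class DigitDemo {
    public static void main(String[] args) {
        String bar = "x1"; // hypothetical input with a decimal digit at index 1
        // Character.getNumericValue states the intent directly
        int viaNumericValue = Character.getNumericValue(bar.charAt(1)); // 1
        // Integer.parseInt on a one-character String throws NumberFormatException for non-digits
        int viaParse = Integer.parseInt(String.valueOf(bar.charAt(1))); // 1
        System.out.println(viaNumericValue + " " + viaParse); // prints "1 1"
    }
}

Both spell out the digit conversion explicitly, at the cost of being slightly longer than the - '0' idiom.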
That's a clever trick. chars are actually the same size (16 bits) as shorts. When you have a char that represents an ASCII/Unicode digit (like '1') and you subtract the smallest digit character ('0') from it, you are left with that digit's numeric value (hence, 1).
Because char has the same width as short (though it is unsigned), it can be safely widened to an int. That widening happens automatically whenever a char is used in arithmetic.
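To illustrate the promotion, here is a minimal sketch (variable names are made up for the example):

public class CharArithmeticDemo {
    public static void main(String[] args) {
        char c = '7';
        // Assigning a char to an int is an implicit widening conversion:
        // you get the code point (55), not the digit value.
        int codePoint = c; // 55
        // In an arithmetic expression both operands are promoted to int first,
        // so the result is the distance between the code points: 55 - 48 = 7.
        int digit = c - '0'; // 7
        System.out.println(codePoint + " " + digit); // prints "55 7"
    }
}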
I will echo what @Mark Peters has said above in case people overlook his comment.
To quote: "Don't make the mistake of thinking that '0' == 0. In reality, '0' == 48."
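A quick check of that claim (minimal sketch):

public class ZeroCharDemo {
    public static void main(String[] args) {
        System.out.println((int) '0'); // 48 -- the code point of the character '0'
        System.out.println('0' == 0);  // false
        System.out.println('0' == 48); // true
    }
}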