The Java language specification states that the escapes inside strings are the "normal" C ones like \n and \t, but it also specifies octal escapes from \0 to \377. Specifically, the JLS states:
OctalEscape:
\ OctalDigit
\ OctalDigit OctalDigit
\ ZeroToThree OctalDigit OctalDigit
OctalDigit: one of
0 1 2 3 4 5 6 7
ZeroToThree: one of
0 1 2 3
meaning that something like \4715 is illegal, despite it being within the range of a Java character (since Java characters are not bytes).
Why does Java have this arbitrary restriction? How are you meant to specify octal codes for characters beyond 255?
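To make the grammar concrete, here is a small sketch (the class name is my own) of what the rules above allow, including what actually happens to \4715 in a string literal:

```java
public class OctalEscapeDemo {
    public static void main(String[] args) {
        // One-, two-, and three-digit octal escapes, per the JLS grammar:
        char bell = '\7';    // octal 7   -> U+0007
        char unit = '\37';   // octal 37  -> U+001F
        char max  = '\377';  // octal 377 -> U+00FF, the largest allowed

        System.out.println((int) bell); // 7
        System.out.println((int) unit); // 31
        System.out.println((int) max);  // 255

        // "\4715" does not denote code point 4715: since 4 is not in
        // ZeroToThree, the lexer takes the two-digit escape \47 (39,
        // an apostrophe) and leaves "15" as two literal characters.
        String s = "\4715";
        System.out.println(s.length());        // 3
        System.out.println((int) s.charAt(0)); // 39
    }
}
```

So strictly speaking "\4715" still compiles inside a string, but it silently means something other than code point 4715.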
It is probably for purely historical reasons that Java supports octal escape sequences at all. These escape sequences originated in C (or maybe in C's predecessors B and BCPL), in the days when computers like the PDP-7 ruled the Earth. Much programming was done in assembly or directly in machine code, octal was the preferred number base for writing instruction codes, and there was no Unicode, just ASCII, so three octal digits were sufficient to represent the entire character set.
By the time Unicode and Java came along, octal had pretty much given way to hexadecimal as the preferred number base when decimal just wouldn't do. So Java has its \u escape sequence, which takes hexadecimal digits. The octal escape sequence was probably supported just to make C programmers comfortable, and to make it easy to copy'n'paste string constants from C programs into Java programs.
Check out these links for historical trivia:
http://en.wikipedia.org/wiki/Octal#In_computers
http://en.wikipedia.org/wiki/PDP-11_architecture#Memory_management
The real answer to the question "Why" would require us to ask the Java language designers. We are not in a position to do that, and I doubt that they would even be in a position to answer. (Can you remember detailed technical discussions you had ~20 years ago?)
However, a plausible explanation for this "limitation" is that:
- octal escapes were borrowed from C / C++, in which they are also restricted to 8 bits,
- octal is old fashioned, and IT folks generally prefer and are more comfortable with hexadecimal, and
- Java supports ways of expressing Unicode, either by embedding it directly in the source code, or by using \u Unicode escapes ... which are not limited to string and character literals.
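That last point is worth demonstrating: \u escapes are translated before the compiler even tokenizes the source, so they can appear in identifiers, not just in literals. A small sketch (class name is my own):

```java
public class EscapeAnywhere {
    public static void main(String[] args) {
        // \u0061 is translated to 'a' before tokenizing, so this line
        // declares a variable literally named 'a'. The escape is not
        // confined to string or character literals.
        int \u0061 = 42;
        System.out.println(a); // same variable, prints 42
    }
}
```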
And to be honest, I've never heard anyone (apart from you) argue that octal literals should be longer than 8 bits in Java.
Incidentally, when I started in computing character sets tended to be hardware specific, and were often less than 8 bits. In my undergraduate coursework, and my first job after graduating, I used CDC 6000 series machines that had 60 bit words and a 6 bit character set - "Display Code" I think we called it. Octal works very nicely in this context. But as the industry moved towards the (almost) universal adoption of 8/16/32/64 bit architectures, people increasingly used hexadecimal instead of octal.
If I understand the rules correctly (please correct me if I am wrong):
\ OctalDigit
Examples:
\0, \1, \2, \3, \4, \5, \6, \7
\ OctalDigit OctalDigit
Examples:
\00, \07, \17, \27, \37, \47, \57, \67, \77
\ ZeroToThree OctalDigit OctalDigit
Examples:
\000, \177, \277, \367, \377
\t, \n, and \\ do not fall under the OctalEscape rules; they are covered by separate escape character rules.
Decimal 255 is equal to Octal 377 (use Windows Calculator in scientific mode to confirm)
Hence a three-digit octal value falls in the range of \000 (0) to \377 (255).
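The decimal/octal correspondence can also be confirmed programmatically rather than with a calculator (a minimal sketch, class name my own):

```java
public class OctalRange {
    public static void main(String[] args) {
        // Octal 377 and decimal 255 are the same number:
        System.out.println(Integer.parseInt("377", 8)); // 255
        System.out.println(Integer.toOctalString(255)); // 377
        // And the highest legal octal escape is exactly that value:
        System.out.println((int) '\377');               // 255
    }
}
```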
Therefore, \4715 is not a valid octal escape, as it exceeds the three-octal-digit rule. If you want the character with decimal code point 4715, use the Unicode escape \u126B (4715 in decimal form), since every Java char is a UTF-16 code unit.
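A quick sketch of that substitution (class name is my own; 0x126B lies in the Ethiopic block):

```java
public class BeyondOctal {
    public static void main(String[] args) {
        // Code point 4715 (0x126B) is out of reach for octal escapes,
        // which stop at \377 (255), but trivial with a Unicode escape:
        char c = '\u126B';
        System.out.println((int) c); // 4715
    }
}
```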
from http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Character.html:
The char data type (and therefore the value that a Character object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value. (Refer to the definition of the U+n notation in the Unicode standard.)
The set of characters from U+0000 to U+FFFF is sometimes referred to as the Basic Multilingual Plane (BMP). Characters whose code points are greater than U+FFFF are called supplementary characters. The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).
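The surrogate-pair mechanism described in that quote can be observed directly. A sketch (class name my own) using U+1F600, a supplementary character:

```java
public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1F600 lies above U+FFFF, so UTF-16 needs two char values:
        String s = new String(Character.toChars(0x1F600));
        System.out.println(s.length());                      // 2 chars
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
        // High surrogate (D800-DBFF) followed by low surrogate (DC00-DFFF):
        System.out.printf("%X %X%n", (int) s.charAt(0), (int) s.charAt(1));
    }
}
```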
Edited:
Anything beyond the valid 8-bit octal range (larger than one byte) is language-specific. Some programming languages extend octal escapes to match their Unicode support; some do not, limiting them to one byte. Java definitely does not allow it, even though it has Unicode support.
A few programming languages (vendor-dependent) that limit to one-byte octal literals:
- Java (all vendors) - an octal integer constant begins with 0 or is a single base-8 digit (up to 0377); \0 to \7, \00 to \77, \000 to \377 in octal string literal format
- C/C++ (Microsoft) - an octal integer constant that begins with 0 (up to 0377); octal string literal format \nnn
- Ruby - an octal integer constant that begins with 0 (up to 0377); octal string literal format \nnn
A few programming languages (vendor-dependent) that support larger-than-one-byte octal literals:
- Perl - an octal integer constant that begins with 0; octal string literal format \nnn
See http://search.cpan.org/~jesse/perl-5.12.1/pod/perlrebackslash.pod#Octal_escapes
A few programming languages do not support octal literals:
- C# - no octal literals; use Convert.ToInt32(value, 8) for base-8 conversion (see "How can we convert binary number into its octal number using c#?")
The \0-\377 octal escapes are also inherited from C, and the restriction makes a fair amount of sense in a language like C where characters == bytes (at least in the halcyon days before wchar_t).
I know of no reason why octal escapes are restricted to unicode codepoints 0 to 255. This might be for historical reasons. The question will basically remain unanswered as there was no technical reason not to increase the range of the octal escapes during the design of Java.
It should be noted, however, that there is a not-so-obvious difference between the unicode escapes and the octal escapes. The octal escapes are processed only as part of strings, while the unicode escapes can occur anywhere in a file, for example as part of the name of a class. Also note that the following example will not even compile:
String a = "\u000A";
The reason is that \u000A is expanded to a newline at a very early stage (basically when the file is loaded). The following code does not generate an error:
String a = "\012";
The \012 is expanded after the compiler has parsed the code. This also holds for the other escapes like \n, \r, \t, etc.
So in conclusion: the unicode escapes are NOT a replacement for the octal escapes. They are a completely different concept. In particular, to avoid any problems (as with \u000A above), one should use octal escapes for code points 0 to 255 and unicode escapes for code points above 255.
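The timing difference described above can be sketched in one runnable example (class name is my own; note the comment spells out the problematic escape indirectly, because writing it verbatim would break compilation even inside a comment):

```java
public class EscapeTiming {
    public static void main(String[] args) {
        // A literal containing backslash-u000A (as in the example above)
        // would not compile: that escape is replaced by a raw line break
        // before the parser ever sees the string literal.
        String b = "\012"; // octal escape, resolved only after parsing
        System.out.println(b.equals("\n"));    // true
        System.out.println((int) b.charAt(0)); // 10, the linefeed character
    }
}
```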
Source: https://stackoverflow.com/questions/9543026/why-do-java-octal-escapes-only-go-up-to-255