Java: Readers and Encodings

臣服心动 2020-12-05 06:04

Java's default encoding is ASCII. Yes? (See my edit below)

When a text file is encoded in UTF-8, how does a Reader know that it has to use UTF-8?

5 Answers
  • 2020-12-05 06:19

    You can start getting the idea from the java Charset API docs.

    Note that according to the doc,

    The native character encoding of the Java programming language is UTF-16

    EDIT :

    Sorry, I got called away before I could finish this; maybe I shouldn't have posted the partial answer as it was. Anyway, the other answers explain the details. The point is that the native charset of each platform, together with common alternate charsets, will be read correctly by Java.

  • 2020-12-05 06:28

    I'd like to approach this part first:

    Java's default encoding is ASCII. Yes?

    There are at least 4 different things in the Java environment that can arguably be called "default encoding":

    1. The "default charset" is what Java uses to convert bytes to characters (and byte[] to String) at runtime, when nothing else is specified. It depends on the platform, settings, command-line arguments, and so on, and is usually just the platform default encoding.
    2. The internal character encoding that Java uses in char values and String objects. This one is always UTF-16! There is no way to change it; it just is UTF-16. This means that a char representing 'a' always has the numeric value 97 and a char representing 'π' always has the numeric value 960.
    3. The character encoding that Java uses to store String constants in .class files. This one is always (a modified form of) UTF-8. There is no way to change it.
    4. The charset that the Java compiler uses to interpret Java source code in .java files. This one defaults to the default charset, but can be configured at compile time (javac -encoding).
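Points 1 and 2 can be checked at runtime; a minimal sketch (the output of the first line varies by platform):

```java
import java.nio.charset.Charset;

public class EncodingDemo {
    public static void main(String[] args) {
        // 1. The default charset is platform-dependent (e.g. UTF-8, windows-1252).
        System.out.println(Charset.defaultCharset().name());

        // 2. char values are always UTF-16 code units, regardless of platform.
        System.out.println((int) 'a');  // 97
        System.out.println((int) 'π');  // 960
    }
}
```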

    How does a Reader know that he has to use UTF-8?

    It doesn't. If you have some plain text file, then you must know the encoding to read it correctly. If you're lucky you can guess (for example, you can try the platform default encoding), but that's an error-prone process and in many cases you wouldn't even have a way to realize that you guessed wrong. This is not specific to Java. It's true for all systems.

    Some formats such as XML and all XML-based formats were designed with this restriction in mind and include a way to specify the encoding in the data, so that guessing is no longer necessary.
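As an illustration of that point (the element name and content here are made up), Java's built-in XML parser reads the encoding from the XML prolog itself, so the caller never has to specify it:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlEncodingDemo {
    public static void main(String[] args) throws Exception {
        // The bytes declare their own encoding in the prolog, so the parser
        // needs no outside hint to decode "café" correctly.
        byte[] xml = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?><note>café</note>"
                .getBytes("ISO-8859-1");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml));
        System.out.println(doc.getDocumentElement().getTextContent()); // café
    }
}
```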

    Read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) for the details.

  • 2020-12-05 06:29

    For most readers, Java uses whatever encoding and character set your platform does -- this may be some flavor of ASCII or UTF-8, or something more exotic like JIS (in Japan). Characters in this set are then converted to UTF-16, which Java uses internally.

    There's a work-around if the platform encoding is different from the file's encoding (my problem -- UTF-8 files are standard, but my platform uses Windows-1252 encoding). Create an InputStreamReader instance using the constructor that specifies an encoding.

    Edit: do this like so:

    InputStreamReader myReader =
            new InputStreamReader(new FileInputStream(myFile), StandardCharsets.UTF_8);
    // ... read data ...
    myReader.close();
    

    However, IIRC there are some provisions to autodetect common encodings (such as UTF-8 and UTF-16). UTF-16 can be detected by the byte order mark (BOM) at the beginning of the file. UTF-8 follows certain structural rules too, but generally the difference between your platform encoding and UTF-8 isn't going to matter unless you're using international characters in place of Latin ones.

  • 2020-12-05 06:32

    How does a Reader know that it has to use UTF-8?

    You normally specify that yourself in an InputStreamReader. It has a constructor taking the character encoding. E.g.

    Reader reader = new InputStreamReader(new FileInputStream("c:/foo.txt"), "UTF-8");
    

    All other readers (as far as I know) use the platform default character encoding, which may indeed not per se be the correct encoding (such as -cough- CP-1252).

    You can in theory also detect the character encoding automatically based on the byte order mark (BOM). This distinguishes the various Unicode encodings from other encodings. Java SE unfortunately doesn't have any API for this, but you can homebrew one that can be used to replace the InputStreamReader in the example above:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.PushbackInputStream;
    import java.io.Reader;

    public class UnicodeReader extends Reader {
        private static final int BOM_SIZE = 4;
        private final InputStreamReader reader;

        /**
         * Construct a UnicodeReader.
         * @param in Input stream.
         * @param defaultEncoding Default encoding to be used if no BOM is found,
         * or <code>null</code> to use the system default encoding.
         * @throws IOException If an I/O error occurs.
         */
        public UnicodeReader(InputStream in, String defaultEncoding) throws IOException {
            byte[] bom = new byte[BOM_SIZE];
            String encoding;
            int unread;
            PushbackInputStream pushbackStream = new PushbackInputStream(in, BOM_SIZE);

            // Read ahead up to four bytes (read() may return fewer than
            // requested, so loop until the buffer is full or the stream ends).
            int n = 0, count;
            while (n < BOM_SIZE && (count = pushbackStream.read(bom, n, BOM_SIZE - n)) > 0) {
                n += count;
            }

            // Check for BOM marks. The longer UTF-32 BOMs must be tested before
            // UTF-16, because the UTF-32LE BOM starts with the UTF-16LE BOM.
            if ((bom[0] == (byte) 0xEF) && (bom[1] == (byte) 0xBB) && (bom[2] == (byte) 0xBF)) {
                encoding = "UTF-8";
                unread = n - 3;
            } else if ((bom[0] == (byte) 0x00) && (bom[1] == (byte) 0x00) && (bom[2] == (byte) 0xFE) && (bom[3] == (byte) 0xFF)) {
                encoding = "UTF-32BE";
                unread = n - 4;
            } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE) && (bom[2] == (byte) 0x00) && (bom[3] == (byte) 0x00)) {
                encoding = "UTF-32LE";
                unread = n - 4;
            } else if ((bom[0] == (byte) 0xFE) && (bom[1] == (byte) 0xFF)) {
                encoding = "UTF-16BE";
                unread = n - 2;
            } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE)) {
                encoding = "UTF-16LE";
                unread = n - 2;
            } else {
                encoding = defaultEncoding;
                unread = n;
            }

            // Push back any non-BOM bytes that were read ahead.
            if (unread > 0) {
                pushbackStream.unread(bom, n - unread, unread);
            }

            // Use the detected encoding, or fall back to the platform default.
            if (encoding == null) {
                reader = new InputStreamReader(pushbackStream);
            } else {
                reader = new InputStreamReader(pushbackStream, encoding);
            }
        }

        public String getEncoding() {
            return reader.getEncoding();
        }

        @Override
        public int read(char[] cbuf, int off, int len) throws IOException {
            return reader.read(cbuf, off, len);
        }

        @Override
        public void close() throws IOException {
            reader.close();
        }
    }
    

    Edit, in reply to your edit:

    So the encoding depends on the OS. Does that mean that this is not true on every OS:

    'a' == 97
    

    No -- 'a' == 97 holds on every OS. The ASCII encoding (which contains 128 characters, 0x00 through 0x7F) is the basis of nearly all other common character encodings. Only characters outside the ASCII charset risk being displayed differently in another encoding. The ISO-8859 encodings cover the characters in the ASCII range with the same codepoints, and the Unicode encodings cover the characters in the ISO-8859-1 range with the same codepoints.
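A quick demonstration of that layering, using only the charsets the Charset spec guarantees on every JVM:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class AsciiSubsetDemo {
    public static void main(String[] args) {
        // 'a' maps to the same byte value 97 in every ASCII-compatible charset.
        Charset[] charsets = {
            StandardCharsets.US_ASCII, StandardCharsets.ISO_8859_1, StandardCharsets.UTF_8
        };
        for (Charset cs : charsets) {
            byte[] encoded = "a".getBytes(cs);
            System.out.println(cs.name() + ": " + encoded[0]); // 97 in each case
        }

        // Outside ASCII the encodings diverge: 'é' is one byte in
        // ISO-8859-1 but two bytes in UTF-8.
        System.out.println("é".getBytes(StandardCharsets.ISO_8859_1).length); // 1
        System.out.println("é".getBytes(StandardCharsets.UTF_8).length);      // 2
    }
}
```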

    You may find both of these blog posts an interesting read:

    1. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) (the more theoretical of the two)
    2. Unicode - How to get the characters right? (the more practical of the two)
  • 2020-12-05 06:35

    Java's default encoding depends on your OS. On Windows it's normally "windows-1252"; on Unix it's typically "ISO-8859-1" or "UTF-8".

    A reader knows the correct encoding because you tell it the correct encoding. Unfortunately, not all readers let you do this (for example, FileReader doesn't), so often you have to use an InputStreamReader.
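A sketch of that advice (the temp file here is created just for the demonstration): since the classic FileReader only gained a Charset constructor in Java 11, Files.newBufferedReader is a convenient way to state the encoding explicitly:

```java
import java.io.BufferedReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadUtf8Demo {
    public static void main(String[] args) throws Exception {
        // Create a throwaway UTF-8 file to read back.
        Path file = Files.createTempFile("demo", ".txt");
        Files.write(file, "héllo".getBytes(StandardCharsets.UTF_8));

        // Unlike new FileReader(...), this names the encoding explicitly,
        // so the platform default never gets involved.
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
            System.out.println(reader.readLine()); // héllo
        }
    }
}
```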
