In different encodings of Unicode, for example UTF-16LE or UTF-8, a character may occupy 2 or 3 bytes. Many Unicode applications doesn't t
You are confusing code points, graphemes and encoding.
The encoding is how code points are converted into an octet stream for storage, transmission or processing. Both UTF-8 and UTF-16 are variable-width encodings, with different code points needing a different number of octets (anything from 1 to 4 for UTF-8, and either 2 or 4 for UTF-16).
Graphemes are "what we see as a character"; these are what get displayed. Sometimes one code point (e.g. LATIN SMALL LETTER A) makes one grapheme, but in other cases multiple code points are needed (e.g. LATIN SMALL LETTER A, COMBINING ACUTE ACCENT and a combining underscore to get a lower-case a with acute and underscore, as used in Kwak'wala). In some cases there is more than one sequence of code points that produces the same grapheme (e.g. LATIN SMALL LETTER A WITH ACUTE followed by the combining underscore); choosing between such equivalent sequences is what "normalisation" is about.
I.e. the length of the encoding of a single grapheme will depend on the encoding and normalisation.
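To make that concrete, here is a small illustrative Java sketch (my addition, not part of the original answer; the class name and sample string are arbitrary) that counts octets, code points and user-perceived characters for the same grapheme in its precomposed (NFC) and decomposed (NFD) forms:

import java.nio.charset.StandardCharsets;
import java.text.BreakIterator;
import java.text.Normalizer;

public class GraphemeLengths {
    public static void main(String[] args) {
        // "a with acute" as one precomposed code point (NFC) vs. "a" + combining acute (NFD)
        String nfc = Normalizer.normalize("\u00E1", Normalizer.Form.NFC);
        String nfd = Normalizer.normalize("\u00E1", Normalizer.Form.NFD);

        for (String s : new String[] { nfc, nfd }) {
            System.out.printf("UTF-8 octets: %d, UTF-16 octets: %d, code points: %d, graphemes: %d%n",
                    s.getBytes(StandardCharsets.UTF_8).length,
                    s.getBytes(StandardCharsets.UTF_16LE).length,
                    s.codePointCount(0, s.length()),
                    countGraphemes(s));
        }
    }

    // Count user-perceived characters using the platform's character-break rules.
    static int countGraphemes(String s) {
        BreakIterator it = BreakIterator.getCharacterInstance();
        it.setText(s);
        int count = 0;
        while (it.next() != BreakIterator.DONE) {
            count++;
        }
        return count;
    }
}

Note that Java's built-in BreakIterator character instance is only an approximation of full grapheme-cluster segmentation; ICU4J's BreakIterator follows the Unicode rules more closely.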
The display width of the grapheme will depend on the typeface, style and size independently of the encoding length.
For more information, see Wikipedia on Unicode and Unicode's home. There are also some excellent books, perhaps most notably "Fonts & Encodings" by Yannis Haralambous, O'Reilly.
The Unicode property reflecting this concept is East_Asian_Width. It's not really reliable as a visual width in the context of general Unicode rendering, as non-Asian characters, combining characters etc. will fail to line up even in a monospaced font. (Your example certainly doesn't render lined up for me.)
Java does not have the built-in ability to read this property for characters (though Android's extension does). You can get it from ICU4J if you really need it.
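As a minimal sketch of the ICU4J route (assuming the com.ibm.icu library is on the classpath; treating Wide and Fullwidth code points as two cells is a common convention, not something the property itself guarantees):

import com.ibm.icu.lang.UCharacter;
import com.ibm.icu.lang.UProperty;

public class EastAsianWidthColumns {
    // Rough column count: treat Wide and Fullwidth code points as 2 cells, everything else as 1.
    // (Combining marks, control characters etc. are not handled here.)
    public static int columns(String s) {
        int columns = 0;
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            int ea = UCharacter.getIntPropertyValue(cp, UProperty.EAST_ASIAN_WIDTH);
            columns += (ea == UCharacter.EastAsianWidth.WIDE
                     || ea == UCharacter.EastAsianWidth.FULLWIDTH) ? 2 : 1;
            i += Character.charCount(cp);
        }
        return columns;
    }
}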
Regarding "Or any Java library function to calculate the display width?": if there is one I've never found it.
The simplest method of calculating the width of a character / string is to render it in the GNU Unifont ( http://unifoundry.com/unifont.html ) and measure the character width. Not clean, but so far it's worked for every case I can think of.
FWIW here's what I do:
// Font lives in java.awt (not java.awt.font); createFont throws
// FontFormatException and IOException, which the caller must handle.
java.awt.Font MONOSPACEFONT = java.awt.Font.createFont(java.awt.Font.TRUETYPE_FONT,
        new java.io.File("unifont-5.1.20080907.ttf"));
java.awt.font.FontRenderContext FRC = new java.awt.font.FontRenderContext(null, true, true);
// getStringBounds returns a Rectangle2D; read its width directly instead of casting.
int charWidth = (int) (2.0 * MONOSPACEFONT.getStringBounds(stringToMeasure, FRC).getWidth());
... this should work pretty much anywhere you deploy your JVM (it runs fine in a headless environment).
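If a cell count rather than a raw width is wanted, one possible refinement (my own variation, not the original poster's code, reusing the MONOSPACEFONT, FRC and stringToMeasure variables above) is to divide the measured width by that of a known narrow character such as "x", since Unifont glyphs are drawn on either a single- or double-width grid:

// Approximate column count: Unifont glyphs are either single or double width,
// so dividing by the width of a narrow reference character gives the cell count.
double cellWidth = MONOSPACEFONT.getStringBounds("x", FRC).getWidth();
double textWidth = MONOSPACEFONT.getStringBounds(stringToMeasure, FRC).getWidth();
int cells = (int) Math.round(textWidth / cellWidth);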
I believe that to do this correctly, you need to consider that component of the published Unicode Standard known as Unicode Standard Annex #14, the Unicode Line Breaking Algorithm.
If you were programming in Perl, what you want to know would be super easy, because Perl's Unicode::LineBreak module, which implements UAX #14, includes a class with a simple columns method that tells you the right answer for its string argument. These things work especially well on Asian languages, where absolutely nothing else will do. This module includes over 6,000 unit tests, is actively maintained, and its author is himself Asian, so it's important to him to get these tricky bits exactly correct.
Most of the guts of the module are implemented as a library written in C. I have not looked at how to call that C library from languages other than Perl, but you might look into whether this is possible.
Sounds like you're looking for something like wcwidth and wcswidth, defined in IEEE Std 1003.1-2001 but not part of ISO C:
The wcwidth() function shall determine the number of column positions required for the wide character wc. The wcwidth() function shall either return 0 (if wc is a null wide-character code), or return the number of column positions to be occupied by the wide-character code wc, or return -1 (if wc does not correspond to a printable wide-character code).
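There is no direct Java equivalent, but a rough sketch of the same contract can be built on ICU4J's East_Asian_Width property (this is my own approximation of the tables a real wcwidth() implementation uses, assuming ICU4J is on the classpath):

import com.ibm.icu.lang.UCharacter;
import com.ibm.icu.lang.UProperty;

public final class Wcwidth {
    // Rough Java analogue of wcwidth(): 0 for NUL and zero-width combining marks,
    // -1 for other non-printable characters, 2 for East Asian Wide/Fullwidth, 1 otherwise.
    public static int wcwidth(int cp) {
        if (cp == 0) {
            return 0;
        }
        if (Character.isISOControl(cp)) {
            return -1;
        }
        int type = Character.getType(cp);
        if (type == Character.NON_SPACING_MARK || type == Character.ENCLOSING_MARK) {
            return 0;
        }
        int ea = UCharacter.getIntPropertyValue(cp, UProperty.EAST_ASIAN_WIDTH);
        if (ea == UCharacter.EastAsianWidth.WIDE || ea == UCharacter.EastAsianWidth.FULLWIDTH) {
            return 2;
        }
        return 1;
    }
}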
Markus Kuhn wrote an open source version, wcwidth.c, based on Unicode 5.0. It includes a description of the problem, and an acknowledgement of the lack of standards in the area:
In fixed-width output devices, Latin characters all occupy a single "cell" position of equal width, whereas ideographic CJK characters occupy two such cells. Interoperability between terminal-line applications and (teletype-style) character terminals using the UTF-8 encoding requires agreement on which character should advance the cursor by how many cell positions. No established formal standards exist at present on which Unicode character shall occupy how many cell positions on character terminals. These routines are a first attempt of defining such behavior based on simple rules applied to data provided by the Unicode Consortium. [...]
It implements the following rules: