What are all the Japanese whitespace characters?


Question


I need to split a string and extract words separated by whitespace characters. The source may be in English or Japanese. English whitespace characters include tab and space, and Japanese text uses these too. (IIRC, all widely-used Japanese character sets are supersets of US-ASCII.)

So the set of characters I need to use to split my string includes normal ASCII space and tab.

But, in Japanese, there is another space character, commonly called a 'full-width space'. According to my Mac's Character Viewer utility, this is U+3000 "IDEOGRAPHIC SPACE". This is (usually) what results when a user presses the space bar while typing in Japanese input mode.

Are there any other characters that I need to consider?

I am processing textual data submitted by users who have been told to "separate entries with spaces". However, the users are using a wide variety of computer and mobile phone operating systems to submit these texts. We've already seen that users may not be aware of whether they are in Japanese or English input mode when entering this data.

Furthermore, the behavior of the space key differs across platforms and applications even in Japanese mode (e.g., Windows 7 will insert an ideographic space but iOS will insert an ASCII space).

So what I want is basically "the set of all characters that visually look like a space and might be generated when the user presses the space key (or the tab key, since many users do not know the difference between a space and a tab), in Japanese and/or English".

Is there any authoritative answer to such a question?


Answer 1:


You need the ASCII tab, space, and non-breaking space (U+00A0), plus the full-width space, which you've correctly identified as U+3000. You might possibly want newlines and vertical whitespace characters. If your input is in Unicode (not Shift-JIS, etc.), then that's all you'll need. There are other (control) characters, such as NULL (\0), which are sometimes used as information delimiters, but they won't be rendered as a space in East Asian text - i.e., they won't appear as whitespace.
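For concreteness, here is a minimal splitting sketch (assuming Python 3); the character class below is just the set named in this answer and is not meant to be exhaustive:

import re

# ASCII space/tab, newlines, vertical tab/form feed, no-break space, ideographic space
SPACE_CHARS = re.compile(u'[ \t\r\n\x0b\x0c\u00a0\u3000]+')

def split_entries(text):
    # drop empty strings produced by leading/trailing separators
    return [part for part in SPACE_CHARS.split(text) if part]

print(split_entries(u'りんご\u3000みかん\u00a0banana apple'))
# ['りんご', 'みかん', 'banana', 'apple']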

Edit: Matt Ball has a good point in his comment, but, as his example illustrates, many regex implementations don't deal well with full-width East Asian punctuation. In this connection, it's worth mentioning that Python's string.whitespace won't cut the mustard either.
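For example (assuming Python 3), a quick check confirms that string.whitespace contains only the ASCII whitespace characters:

import string

print(repr(string.whitespace))          # ' \t\n\r\x0b\x0c'
print(u'\u3000' in string.whitespace)   # False -- the ideographic space is missing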




Answer 2:


I just found your posting. This is a great explanation of normalizing Unicode characters.

http://en.wikipedia.org/wiki/Unicode_equivalence

I found that many programming languages, like Python, have modules that implement the normalization rules defined in the Unicode standard. For my purposes, I found the following Python code works very well: it converts the Unicode whitespace variants into the ASCII range. After normalization, all whitespace can then be collapsed to an ASCII space (\x20):

import unicodedata
# import re  # no longer needed for the final version below

ucode = u'大変、 よろしくお願い申し上げます。'

# NFKC normalization folds compatibility characters such as U+3000 into their ASCII equivalents
normalized = unicodedata.normalize('NFKC', ucode)

# old code
# utf8text = re.sub(r'\s+', ' ', normalized).encode('utf-8')

# new code: split on any remaining whitespace, rejoin with ASCII spaces, then encode
utf8text = ' '.join(normalized.split()).encode('utf-8')
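As a quick sanity check (assuming Python 3), NFKC normalization really does map the ideographic space to an ASCII space:

import unicodedata

print(repr(unicodedata.normalize('NFKC', u'A\u3000B')))
# 'A B' -- U+3000 has become U+0020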

Since first writing this, I have learned that Python's regex (re) module can misidentify these whitespace characters and can crash when it encounters them. It turns out a faster, more reliable method is to use the .split() function.
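A small demonstration of that point (assuming Python 3): str.split() with no arguments already treats both the ASCII space and the ideographic space as separators, so no regex is needed:

text = u'大変、\u3000よろしくお願い申し上げます。 thank you'
print(text.split())
# ['大変、', 'よろしくお願い申し上げます。', 'thank', 'you']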



Source: https://stackoverflow.com/questions/4300980/what-are-all-the-japanese-whitespace-characters
