Hello
I retrieve UTF-8 text data from a foreign source, and it contains special characters such as u"ıöüç". I want to normalize them to their English equivalents, e.g. "ıöüç" -> "iouc". What would be the best way to achieve this?
I recommend using the Unidecode module:
>>> from unidecode import unidecode
>>> unidecode(u'ıöüç')
'iouc'
Note how you feed it a Unicode string and it outputs a byte string (that is Python 2 behavior; under Python 3 it returns a str). The output is guaranteed to contain only ASCII characters.
It all depends on how far you want to go in transliterating the result. If you want to convert everything all the way to ASCII (αβγ to abg), then unidecode is the way to go.
If you just want to remove accents from accented letters, then you could try decomposing your string using normalization form NFKD (this converts the accented letter á to a plain letter a followed by U+0301 COMBINING ACUTE ACCENT) and then discarding the accents (which belong to the Unicode character category Mn — "Mark, nonspacing").
import unicodedata

def remove_nonspacing_marks(s):
    """Decompose the Unicode string s and remove non-spacing marks."""
    return ''.join(c for c in unicodedata.normalize('NFKD', s)
                   if unicodedata.category(c) != 'Mn')
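One caveat worth checking with this approach (a sketch using the function above): the dotless Turkish ı (U+0131) is a distinct letter rather than an accented one, so NFKD has no decomposition for it and it passes through untouched; only a transliterator like unidecode maps it to i.

```python
import unicodedata

def remove_nonspacing_marks(s):
    """Decompose the Unicode string s and remove non-spacing marks."""
    return ''.join(c for c in unicodedata.normalize('NFKD', s)
                   if unicodedata.category(c) != 'Mn')

# ö, ü, ç decompose into a base letter plus a combining mark (category Mn),
# so they lose their accents; ı has no decomposition and stays as-is.
print(remove_nonspacing_marks('ıöüç'))  # -> ıouc
```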
The simplest way I found:
unicodedata.normalize('NFKD', s).encode("ascii", "ignore")
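A minimal sketch of that one-liner in context: encode() returns bytes, so under Python 3 you typically want to decode back to str afterwards. Note that characters with no decomposition (such as the dotless ı) are silently dropped by the "ignore" error handler rather than transliterated.

```python
import unicodedata

s = 'ıöüç'
# NFKD splits accented letters into base letter + combining mark; the
# non-ASCII leftovers (combining marks, and ı itself) are then discarded.
ascii_only = unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')
print(ascii_only)  # -> ouc
```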
Source: https://stackoverflow.com/questions/4162603/python-and-character-normalization