I'm having problems dealing with Unicode characters in text fetched from different web pages (on different sites). I am using BeautifulSoup.
The problem is that the non-ASCII characters in this text raise Unicode encoding errors when I try to print or process it.
In shell:
Find a supported UTF-8 locale with the following command:
locale -a | grep "UTF-8"
Export it before running the script, e.g.:
export LC_ALL=$(locale -a | grep UTF-8 | head -n 1)
or set it manually, e.g.:
export LC_ALL=C.UTF-8
Test it by printing a special character, e.g. ™:
python -c 'print(u"\u2122");'
The above was tested on Ubuntu.
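A quick way to check that the exported locale actually reaches Python is to inspect the encodings it picks up (a small sketch of my own, not part of the original answer):
import locale
import sys

# With LC_ALL exported to a UTF-8 locale, both values should report UTF-8,
# and printing non-ASCII characters should no longer raise UnicodeEncodeError.
print(sys.stdout.encoding)
print(locale.getpreferredencoding())
print(u"\u2122")  # ™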
I just used the following:
import unicodedata
message = unicodedata.normalize("NFKD", message)
Check what the documentation says about it:
unicodedata.normalize(form, unistr): Return the normal form form for the Unicode string unistr. Valid values for form are ‘NFC’, ‘NFKC’, ‘NFD’, and ‘NFKD’.
The Unicode standard defines various normalization forms of a Unicode string, based on the definition of canonical equivalence and compatibility equivalence. In Unicode, several characters can be expressed in various ways. For example, the character U+00C7 (LATIN CAPITAL LETTER C WITH CEDILLA) can also be expressed as the sequence U+0043 (LATIN CAPITAL LETTER C) U+0327 (COMBINING CEDILLA).
For each character, there are two normal forms: normal form C and normal form D. Normal form D (NFD) is also known as canonical decomposition, and translates each character into its decomposed form. Normal form C (NFC) first applies a canonical decomposition, then composes pre-combined characters again.
In addition to these two forms, there are two additional normal forms based on compatibility equivalence. In Unicode, certain characters are supported which normally would be unified with other characters. For example, U+2160 (ROMAN NUMERAL ONE) is really the same thing as U+0049 (LATIN CAPITAL LETTER I). However, it is supported in Unicode for compatibility with existing character sets (e.g. gb2312).
The normal form KD (NFKD) will apply the compatibility decomposition, i.e. replace all compatibility characters with their equivalents. The normal form KC (NFKC) first applies the compatibility decomposition, followed by the canonical composition.
Even if two unicode strings are normalized and look the same to a human reader, if one has combining characters and the other doesn’t, they may not compare equal.
Solves it for me. Simple and easy.
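To illustrate the forms the documentation describes, here is a small sketch of my own, using the code points mentioned above:
import unicodedata

composed = u"\u00C7"            # LATIN CAPITAL LETTER C WITH CEDILLA
decomposed = u"\u0043\u0327"    # LATIN CAPITAL LETTER C + COMBINING CEDILLA

print(composed == decomposed)                                # False: same look, different code points
print(unicodedata.normalize("NFC", decomposed) == composed)  # True: NFC recomposes
print(unicodedata.normalize("NFD", composed) == decomposed)  # True: NFD decomposes

# NFKD additionally folds compatibility characters, e.g. ROMAN NUMERAL ONE -> "I"
print(unicodedata.normalize("NFKD", u"\u2160"))              # I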
We struck this error when running manage.py migrate in Django with localized fixtures.
Our source contained the # -*- coding: utf-8 -*- declaration, MySQL was correctly configured for utf8, and Ubuntu had the appropriate language pack and values in /etc/default/locale.
The issue was simply that the Django container (we use Docker) was missing the LANG env var.
Setting LANG to en_US.UTF-8 and restarting the container before re-running the migrations fixed the problem.
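If you want to verify this from inside the container before re-running the migrations, a minimal check (my own sketch, not from the original answer) is:
import locale
import os

# If LANG (or LC_ALL) is missing, the preferred encoding often falls back to
# ASCII (ANSI_X3.4-1968) on older Python versions, which triggers these errors.
print(os.environ.get("LANG"))
print(os.environ.get("LC_ALL"))
print(locale.getpreferredencoding())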
I had this issue trying to output Unicode characters to stdout, but with sys.stdout.write rather than print (so that I could support output to a different file as well).
From BeautifulSoup's own documentation, I solved this with the codecs library:
import sys
import codecs
from bs4 import BeautifulSoup

def main(fIn, fOut):
    soup = BeautifulSoup(fIn)
    # Do processing, with data including non-ASCII characters
    fOut.write(unicode(soup))

if __name__ == '__main__':
    with sys.stdin as fIn:  # Don't think we need codecs.getreader here
        with codecs.getwriter('utf-8')(sys.stdout) as fOut:
            main(fIn, fOut)
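On Python 3, where unicode() is gone and sys.stdout only accepts text, the same idea can be expressed by wrapping the underlying binary buffer instead (a sketch of the equivalent, not from the original answer):
import codecs
import sys

# Wrap the binary buffer so everything written is encoded as UTF-8,
# regardless of the locale-derived default encoding.
fOut = codecs.getwriter('utf-8')(sys.stdout.buffer)
fOut.write(u"\u2122\n")
fOut.flush()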
I just had this problem, and Google led me here, so to add to the general solutions, this is what worked for me:
# 'value' contains the problematic data
unic = u''
unic += value
value = unic
I had this idea after reading Ned's presentation.
I don't claim to fully understand why this works, though. So if anyone can edit this answer or put in a comment to explain, I'll appreciate it.
If you have something like packet_data = "This is data", then do this on the next line, right after initializing packet_data:
unic = u''
unic += packet_data
packet_data = unic
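A plausible explanation, assuming Python 2 (this is my reading, not the original author's): concatenating a byte string onto a unicode literal forces Python to decode it and return a unicode object, so later operations no longer hit the implicit ASCII encoding step. A minimal sketch:
# Python 2 sketch of the promotion trick
packet_data = "This is data"   # type 'str' (a byte string in Python 2)
unic = u''
unic += packet_data            # packet_data is decoded (as ASCII) and appended
packet_data = unic
print(type(packet_data))       # <type 'unicode'>
# Note: if packet_data held non-ASCII bytes, the += line itself would raise
# UnicodeDecodeError, so this only helps when the bytes are plain ASCII.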