Are there any standalone-ish solutions for normalizing international Unicode text to safe IDs and filenames in Python?
E.g. turn My International Text: åäö into something like my-international-text-aao.
The way to solve this problem is to decide which characters are allowed (different systems have different rules for valid identifiers).
Once you decide which characters are allowed, write an allowed() predicate and a dict subclass for use with str.translate:
def makesafe(text, allowed, substitute=None):
    '''Remove disallowed characters from text.

    If *substitute* is defined, then replace
    the character with the given substitute.
    '''
    class D(dict):
        # str.translate() looks up each character's ordinal here:
        # returning the ordinal keeps the character; returning the
        # substitute (None by default) deletes or replaces it.
        def __getitem__(self, key):
            return key if allowed(chr(key)) else substitute
    return text.translate(D())
This function is very flexible. It lets you easily specify rules for deciding which text is kept and which text is either replaced or removed.
Here's a simple example using the rule, "only allow characters that are in the Unicode category L (letters)":
import unicodedata

def allowed(character):
    return unicodedata.category(character).startswith('L')

print(makesafe('the*ides&of*march', allowed, '_'))
print(makesafe('the*ides&of*march', allowed))
That code produces safe output as follows:
the_ides_of_march
theidesofmarch
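For filenames you would probably want a broader predicate. Here is a minimal sketch, assuming the makesafe() function above and a hypothetical filename_allowed() predicate (not part of the original answer) that also keeps digits and a few safe ASCII marks:

import unicodedata

def filename_allowed(character):
    # Keep letters and digits from any script, plus dot, hyphen and underscore.
    return unicodedata.category(character)[0] in ('L', 'N') or character in '.-_'

print(makesafe('My International Text: åäö', filename_allowed, '_'))
# -> My_International_Text__åäö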
I'll throw my own (partial) solution here too:
import unicodedata

def deaccent(some_unicode_string):
    return u''.join(c for c in unicodedata.normalize('NFD', some_unicode_string)
                    if unicodedata.category(c) != 'Mn')
This does not do all you want, but it gives a few nice tricks wrapped up in a convenience method: unicodedata.normalize('NFD', some_unicode_string) does a decomposition of Unicode characters; for example, it breaks 'ä' into the two codepoints U+0061 and U+0308. The other method, unicodedata.category(char), returns the Unicode character category for that particular char. Category Mn contains all combining accents, thus deaccent removes all accents from the words.
But note that this is just a partial solution: it gets rid of accents, but you still need some sort of whitelist of characters you want to allow after this.
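A quick demonstration of the decomposition, plus one possible whitelist step on top of deaccent() (the regex below is only an illustration, not part of the original answer):

import re
import unicodedata

print([hex(ord(c)) for c in unicodedata.normalize('NFD', u'ä')])
# -> ['0x61', '0x308']  (base letter plus the combining diaeresis)

# Illustrative whitelist: keep only ASCII letters, digits, hyphen and underscore,
# collapsing everything else to a single '-'. Assumes deaccent() from above.
def to_safe_id(text):
    return re.sub(r'[^A-Za-z0-9_-]+', '-', deaccent(text)).strip('-').lower()

print(to_safe_id(u'My International Text: åäö'))
# -> my-international-text-aao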
What you want to do is also known as "slugifying" a string. Here's a possible solution:
import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim=u'-'):
    """Generates a slightly worse ASCII-only slug."""
    result = []
    for word in _punct_re.split(text.lower()):
        word = normalize('NFKD', word).encode('ascii', 'ignore')
        if word:
            result.append(word)
    return unicode(delim.join(result))
Usage:
>>> slugify(u'My International Text: åäö')
u'my-international-text-aao'
You can also change the delimiter:
>>> slugify(u'My International Text: åäö', delim='_')
u'my_international_text_aao'
Source: Generating Slugs
For Python 3: pastebin.com/ft7Yb3KS (thanks @MrPoxipol).
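If you would rather not follow the link, a rough Python 3 adaptation of the function above looks something like this (a sketch only; it is not claimed to be identical to the pastebin version):

import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim='-'):
    """Generates an ASCII-only slug (Python 3 sketch)."""
    result = []
    for word in _punct_re.split(text.lower()):
        # The encode/decode round trip drops anything NFKD can't map to ASCII.
        word = normalize('NFKD', word).encode('ascii', 'ignore').decode('ascii')
        if word:
            result.append(word)
    return delim.join(result)

print(slugify('My International Text: åäö'))  # my-international-text-aao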
I'd go with one of the slug packages listed at
https://pypi.python.org/pypi?%3Aaction=search&term=slug
It's hard to come up with a scenario where one of these does not fit your needs.
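For example, assuming the python-slugify package (pip install python-slugify), usage is roughly:

# python-slugify exposes a single slugify() call; other packages from
# that search offer similarly small APIs.
from slugify import slugify

print(slugify('My International Text: åäö'))  # my-international-text-aao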
The following will remove accents from whatever characters Unicode can decompose into combining pairs, discard any weird characters it can't, and nuke whitespace:
# encoding: utf-8
from unicodedata import normalize
import re
original = u'ľ š č ť ž ý á í é'
decomposed = normalize("NFKD", original)
no_accent = ''.join(c for c in decomposed if ord(c)<0x7f)
no_spaces = re.sub(r'\s', '_', no_accent)
print no_spaces
# output: l_s_c_t_z_y_a_i_e
It doesn't try to get rid of characters disallowed on filesystems, but you can steal DANGEROUS_CHARS_REGEX from the file you linked for that.
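If you would rather not pull that regex in, a minimal sketch of the same idea, using a hand-picked blacklist instead of the DANGEROUS_CHARS_REGEX from the linked file, might be:

import re

# Illustrative blacklist only: characters commonly rejected by Windows
# and/or POSIX filesystems, plus ASCII control characters.
_dangerous_re = re.compile(r'[<>:"/\\|?*\x00-\x1f]')

def strip_dangerous(filename):
    return _dangerous_re.sub('', filename)

print(strip_dangerous(u'report: 2024/Q1?.txt'))  # -> report 2024Q1.txt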