Regex - match a character and all its diacritic variations (aka accent-insensitive)

Asked by 我在风中等你 on 2021-01-05 05:41

I am trying to match a character and all its possible diacritic variations (i.e. accent-insensitive matching) with a regular expression. What I could do, of course, is spell out every accented variant of the character explicitly in a character class, but I am hoping for a more general solution.
1 Answer
  • 2021-01-05 06:28

    A workaround to achieve the desired goal would be to use unidecode to get rid of all diacritics first, and then just match against the regular e:

    re.match(r"^e$", unidecode("é"))
    

    Or, in this simplified case:

    unidecode("é") == "e"
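
    Putting that together, here is a minimal runnable sketch (assuming the third-party unidecode package is installed; the input string is made up for illustration):

    import re

    from unidecode import unidecode

    text = "Héllô wörld, çafé!"             # hypothetical input
    ascii_text = unidecode(text)            # diacritics stripped -> "Hello world, cafe!"

    # Accent-insensitive search for the plain letter "e"
    print(ascii_text)                       # Hello world, cafe!
    print(re.findall(r"e", ascii_text))     # ['e', 'e']

    Note that unidecode may change the string length (for example "æ" becomes "ae"), so match positions in the transliterated string do not necessarily map back to positions in the original.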
    

    Another solution, which doesn't depend on the unidecode library, preserves Unicode and gives more control, is to remove the diacritics manually as follows:

    Use unicodedata.normalize() to turn your input string into a decomposed normal form (NFKD here; NFD works as well), making sure composed characters like é get turned into the decomposed form e\u0301 (e + COMBINING ACUTE ACCENT):

    >>> import unicodedata
    >>> input = "Héllô"
    >>> input
    'Héllô'
    >>> normalized = unicodedata.normalize("NFKD", input)
    >>> normalized
    'He\u0301llo\u0302'
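
    As a quick sanity check in the same session, this decomposition is also what makes the precomposed and decomposed spellings of é comparable in the first place:

    >>> "\u00e9" == "e\u0301"        # precomposed é vs. base letter + combining accent
    False
    >>> unicodedata.normalize("NFKD", "\u00e9") == "e\u0301"
    True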
    

    Then, remove all codepoints which fall into the category Mark, Nonspacing (Mn for short). These are the characters that have no width of their own and merely decorate the preceding character. Use unicodedata.category() to determine the category:

    >>> stripped = "".join(c for c in normalized if unicodedata.category(c) != "Mn")
    >>> stripped
    'Hello'
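
    For reference, this is what the category check sees for the combining acute accent compared to an ordinary letter (same session):

    >>> unicodedata.name("\u0301")
    'COMBINING ACUTE ACCENT'
    >>> unicodedata.category("\u0301")   # Mark, Nonspacing
    'Mn'
    >>> unicodedata.category("e")        # Letter, lowercase
    'Ll'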
    

    The result can then be used for regex matching, just as in the unidecode example above. Here's the whole thing as a function:

    import unicodedata

    def remove_diacritics(text):
        """
        Returns a string with all diacritics (aka non-spacing marks) removed.
        For example "Héllô" will become "Hello".
        Useful for comparing strings in an accent-insensitive fashion.
        """
        normalized = unicodedata.normalize("NFKD", text)
        return "".join(c for c in normalized if unicodedata.category(c) != "Mn")
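
    A small usage sketch, assuming remove_diacritics() from above and import re are in scope: every accented variant of e now matches the plain pattern.

    >>> [s for s in ["é", "è", "ê", "ë", "e", "a"]
    ...  if re.match(r"^e$", remove_diacritics(s))]
    ['é', 'è', 'ê', 'ë', 'e']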
    