Remove diacritical marks (ń ǹ ň ñ ṅ ņ ṇ ṋ ṉ ̈ ɲ ƞ ᶇ ɳ ȵ) from Unicode chars

故里飘歌 2020-11-22 11:42

I am looking for an algorithm that can map between characters with diacritics (tilde, circumflex, caret, umlaut, caron) and their "simple" equivalents.

For example, ń, ǹ, ň and ñ should all map to the plain character n.

12 answers
  • 2020-11-22 11:45

    I have done this recently in Java:

    // Requires java.text.Normalizer and java.util.regex.Pattern
    public static final Pattern DIACRITICS_AND_FRIENDS
        = Pattern.compile("[\\p{InCombiningDiacriticalMarks}\\p{IsLm}\\p{IsSk}]+");
    
    private static String stripDiacritics(String str) {
        str = Normalizer.normalize(str, Normalizer.Form.NFD);
        str = DIACRITICS_AND_FRIENDS.matcher(str).replaceAll("");
        return str;
    }
    

    This will do as you specified:

    stripDiacritics("Björn")  = Bjorn
    

    but it will fail on, for example, Białystok, because the ł character is not a diacritic.

    If you want to have a full-blown string simplifier, you will need a second cleanup round for some more special characters that are not diacritics. In this map I have included the most common special characters that appear in our customer names. It is not a complete list, but it will give you an idea of how to extend it. The ImmutableMap is just a simple class from google-collections (now Guava).

    import com.google.common.collect.ImmutableMap;
    import java.text.Normalizer;
    import java.util.regex.Pattern;

    public class StringSimplifier {
        public static final char DEFAULT_REPLACE_CHAR = '-';
        public static final String DEFAULT_REPLACE = String.valueOf(DEFAULT_REPLACE_CHAR);
        private static final ImmutableMap<String, String> NONDIACRITICS = ImmutableMap.<String, String>builder()
    
            // Remove junk strings with no semantics
            .put(".", "")
            .put("\"", "")
            .put("'", "")
    
            // Replace relevant characters with the separator
            .put(" ", DEFAULT_REPLACE)
            .put("]", DEFAULT_REPLACE)
            .put("[", DEFAULT_REPLACE)
            .put(")", DEFAULT_REPLACE)
            .put("(", DEFAULT_REPLACE)
            .put("=", DEFAULT_REPLACE)
            .put("!", DEFAULT_REPLACE)
            .put("/", DEFAULT_REPLACE)
            .put("\\", DEFAULT_REPLACE)
            .put("&", DEFAULT_REPLACE)
            .put(",", DEFAULT_REPLACE)
            .put("?", DEFAULT_REPLACE)
            .put("°", DEFAULT_REPLACE) //Remove ?? is diacritic?
            .put("|", DEFAULT_REPLACE)
            .put("<", DEFAULT_REPLACE)
            .put(">", DEFAULT_REPLACE)
            .put(";", DEFAULT_REPLACE)
            .put(":", DEFAULT_REPLACE)
            .put("_", DEFAULT_REPLACE)
            .put("#", DEFAULT_REPLACE)
            .put("~", DEFAULT_REPLACE)
            .put("+", DEFAULT_REPLACE)
            .put("*", DEFAULT_REPLACE)
    
            //Replace non-diacritics as their equivalent characters
            .put("\u0141", "l") // BiaLystock
            .put("\u0142", "l") // Bialystock
            .put("ß", "ss")
            .put("æ", "ae")
            .put("ø", "o")
            .put("©", "c")
            .put("\u00D0", "d") // All Ð ð from http://de.wikipedia.org/wiki/%C3%90
            .put("\u00F0", "d")
            .put("\u0110", "d")
            .put("\u0111", "d")
            .put("\u0189", "d")
            .put("\u0256", "d")
            .put("\u00DE", "th") // thorn Þ
            .put("\u00FE", "th") // thorn þ
            .build();
    
    
        public static String simplifiedString(String orig) {
            String str = orig;
            if (str == null) {
                return null;
            }
            str = stripDiacritics(str);
            str = stripNonDiacritics(str);
            if (str.length() == 0) {
                // Ugly special case to work around Oracle's inability to store
                // empty strings: fall back to the original string instead.
                // It would return an empty string if Oracle could store one.
                return orig;
            }
            return str.toLowerCase();
        }
    
        private static String stripNonDiacritics(String orig) {
            StringBuilder ret = new StringBuilder();
            String lastchar = null;
            for (int i = 0; i < orig.length(); i++) {
                String source = orig.substring(i, i + 1);
                String replace = NONDIACRITICS.get(source);
                String toReplace = replace == null ? String.valueOf(source) : replace;
                if (DEFAULT_REPLACE.equals(lastchar) && DEFAULT_REPLACE.equals(toReplace)) {
                    toReplace = "";
                } else {
                    lastchar = toReplace;
                }
                ret.append(toReplace);
            }
            if (ret.length() > 0 && DEFAULT_REPLACE_CHAR == ret.charAt(ret.length() - 1)) {
                ret.deleteCharAt(ret.length() - 1);
            }
            return ret.toString();
        }
    
        /*
         * Special regular expression character classes relevant for simplification,
         * see http://docstore.mik.ua/orelly/perl/prog3/ch05_04.htm
         * InCombiningDiacriticalMarks: the combining marks that are part of "normal" ä, ö, î etc.
         * IsSk: Symbol, Modifier; see http://www.fileformat.info/info/unicode/category/Sk/list.htm
         * IsLm: Letter, Modifier; see http://www.fileformat.info/info/unicode/category/Lm/list.htm
         */
        public static final Pattern DIACRITICS_AND_FRIENDS
            = Pattern.compile("[\\p{InCombiningDiacriticalMarks}\\p{IsLm}\\p{IsSk}]+");
    
    
        private static String stripDiacritics(String str) {
            str = Normalizer.normalize(str, Normalizer.Form.NFD);
            str = DIACRITICS_AND_FRIENDS.matcher(str).replaceAll("");
            return str;
        }
    }
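
    A quick illustration of how the simplifier behaves; the expected outputs below are my reading of the code above, not examples taken from the answer:

    System.out.println(StringSimplifier.simplifiedString("Björn"));          // bjorn
    System.out.println(StringSimplifier.simplifiedString("Białystok"));      // bialystok
    System.out.println(StringSimplifier.simplifiedString("O'Neill & Sons")); // oneill-sons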
    
  • 2020-11-22 11:45

    You could use the Normalizer class from java.text:

    System.out.println(new String(Normalizer.normalize("ń ǹ ň ñ ṅ ņ ṇ ṋ", Normalizer.Form.NFKD).getBytes("ascii"), "ascii"));
    

    But there is still some work to do, since Java does strange things with unconvertible Unicode characters (it does not ignore them, and it does not throw an exception). But I think you could use that as a starting point.
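
    As a possible follow-up (a minimal sketch, assuming that simply dropping whatever non-ASCII characters remain after decomposition is acceptable for your data), you can strip the leftovers with a regular expression instead of relying on the charset conversion:

    import java.text.Normalizer;

    public class StripToAscii {
        static String stripToAscii(String input) {
            // Decompose precomposed characters into base letter + combining marks.
            String decomposed = Normalizer.normalize(input, Normalizer.Form.NFKD);
            // Drop everything outside the ASCII range, including the combining marks.
            return decomposed.replaceAll("[^\\p{ASCII}]", "");
        }

        public static void main(String[] args) {
            System.out.println(stripToAscii("ń ǹ ň ñ ṅ ņ ṇ ṋ")); // n n n n n n n n
        }
    }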

  • 2020-11-22 11:45

    For future reference, here is a C# extension method that removes accents.

    using System.Diagnostics;
    using System.Globalization;
    using System.Linq;
    using System.Text;

    public static class StringExtensions
    {
        public static string RemoveDiacritics(this string str)
        {
            return new string(
                str.Normalize(NormalizationForm.FormD)
                    .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) !=
                                UnicodeCategory.NonSpacingMark)
                    .ToArray());
        }
    }

    // A small driver to demonstrate the extension method.
    class Program
    {
        static void Main()
        {
            var input = "ŃŅŇ ÀÁÂÃÄÅ ŢŤţť Ĥĥ àáâãäå ńņň";
            var output = input.RemoveDiacritics();
            Debug.Assert(output == "NNN AAAAAA TTtt Hh aaaaaa nnn");
        }
    }
    
  • 2020-11-22 11:47

    The core java.text package was designed to address this use case (matching strings without caring about diacritics, case, etc.).

    Configure a Collator to sort on PRIMARY differences in characters. With that, create a CollationKey for each string. If all of your code is in Java, you can use the CollationKey directly. If you need to store the keys in a database or other sort of index, you can convert them to byte arrays.

    These classes use the Unicode standard case folding data to determine which characters are equivalent, and support various decomposition strategies.

    // Requires java.text.Collator, java.text.CollationKey and java.util.TreeMap
    Collator c = Collator.getInstance();
    c.setStrength(Collator.PRIMARY);
    Map<CollationKey, String> dictionary = new TreeMap<CollationKey, String>();
    dictionary.put(c.getCollationKey("Björn"), "Björn");
    ...
    CollationKey query = c.getCollationKey("bjorn");
    System.out.println(dictionary.get(query)); // --> "Björn"
    

    Note that collators are locale-specific. This is because "alphabetical order" differs between locales (and even over time, as has been the case with Spanish). The Collator class relieves you from having to track all of these rules and keep them up to date.
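
    As a minimal sketch of making the locale explicit (the choice of Spanish here is just an illustration, not something the answer prescribes):

    Collator spanish = Collator.getInstance(new Locale("es", "ES"));
    spanish.setStrength(Collator.PRIMARY);
    // At PRIMARY strength, case and accent differences are ignored,
    // so these two strings compare as equal.
    System.out.println(spanish.compare("björn", "BJORN")); // 0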

  • 2020-11-22 11:49

    In the case of German, it is not desirable to simply remove the diacritics from umlauts (ä, ö, ü). Instead, they are replaced by two-letter combinations (ae, oe, ue). For instance, Björn should be written as Bjoern (not Bjorn) to preserve the correct pronunciation.

    For that I would rather have a hard-coded mapping, where you can define the replacement rule individually for each special-character group, as sketched below.
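
    Such a mapping could look like the sketch below; the class name GermanTransliterator and the exact map contents are illustrative, not taken from the answer:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class GermanTransliterator {
        // Insertion-ordered map of special characters to their replacements.
        private static final Map<String, String> REPLACEMENTS = new LinkedHashMap<>();
        static {
            REPLACEMENTS.put("ä", "ae");
            REPLACEMENTS.put("ö", "oe");
            REPLACEMENTS.put("ü", "ue");
            REPLACEMENTS.put("Ä", "Ae");
            REPLACEMENTS.put("Ö", "Oe");
            REPLACEMENTS.put("Ü", "Ue");
            REPLACEMENTS.put("ß", "ss");
        }

        public static String transliterate(String input) {
            String result = input;
            for (Map.Entry<String, String> e : REPLACEMENTS.entrySet()) {
                result = result.replace(e.getKey(), e.getValue());
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(transliterate("Björn")); // Bjoern
        }
    }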

  • 2020-11-22 11:50

    Please note that not all of these marks are just "marks" on some "normal" character that you can remove without changing the meaning.

    In Swedish, å, ä and ö are true, first-class characters, not "variants" of some other character. They sound different from all other characters, they sort differently, and they change the meaning of words ("mätt" and "matt" are two different words).
