What is the best algorithm for matching two strings containing fewer than 10 words in Latin script?

Frontend · Unresolved · 5 answers · 1821 views
刺人心 2021-02-04 09:35

I'm comparing song titles, using Latin script (although not always); my aim is an algorithm that gives a high score if the two song titles seem to be the same title and a…

5 Answers
  • 2021-02-04 10:08

    Each algorithm is going to focus on a similar, but slightly different, aspect of the two strings. Honestly, it depends entirely on what you are trying to accomplish. You say that the algorithm needs to understand words, but should it also understand interactions between those words? If not, you can simply split each string on spaces and compare each word in the first string to each word in the second. Whenever they share a word, the commonality score of the two strings should increase.

    In this way, you could create your own algorithm that focused only on what you were concerned with. If you want to test another algorithm that someone else made, you can find examples online and run your data through to see how accurate the estimated commonality is with each.

    I think http://jtmt.sourceforge.net/ would be a good place to start.
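
    As a rough sketch of that word-overlap idea (the class and method names below are my own, not anything from JTMT), you could split each title on whitespace and score by the fraction of words the shorter title shares with the other:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        // Hypothetical sketch of the word-overlap idea: split each title on
        // whitespace and score by how many words the shorter title shares
        // with the longer one.
        public class WordOverlap {
            public static double overlapScore(String a, String b) {
                Set<String> wordsA = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\s+")));
                Set<String> wordsB = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\s+")));
                Set<String> shared = new HashSet<>(wordsA);
                shared.retainAll(wordsB);                      // words common to both titles
                int smaller = Math.min(wordsA.size(), wordsB.size());
                return (double) shared.size() / smaller;       // 1.0 = every word of the shorter title appears in the other
            }

            public static void main(String[] args) {
                System.out.println(overlapScore("Stairway to Heaven", "stairway to heaven (remastered)")); // 1.0
            }
        }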

  • 2021-02-04 10:25

    You likely need to solve a string-to-string correction problem. The Levenshtein distance algorithm is implemented in many languages. Before running it I'd remove all spaces from the strings, because they don't carry much meaningful information but can inflate the difference between the two strings. Prefix trees (tries) are also useful for string search and worth a look; this has already been discussed on SO. If spaces are actually significant in your case, just assign a greater weight to them.
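
    A minimal sketch of that suggestion, using the standard dynamic-programming recurrence for Levenshtein distance (the class and method names are illustrative, not from any particular library): strip the spaces first, then compute the edit distance.

        // Strip spaces, then compute the Levenshtein distance with the
        // classic dynamic-programming table.
        public class SpacelessLevenshtein {
            public static int distance(String a, String b) {
                String s = a.replaceAll("\\s+", "");
                String t = b.replaceAll("\\s+", "");
                int[][] d = new int[s.length() + 1][t.length() + 1];
                for (int i = 0; i <= s.length(); i++) d[i][0] = i;
                for (int j = 0; j <= t.length(); j++) d[0][j] = j;
                for (int i = 1; i <= s.length(); i++) {
                    for (int j = 1; j <= t.length(); j++) {
                        int cost = s.charAt(i - 1) == t.charAt(j - 1) ? 0 : 1;
                        d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,   // deletion
                                                    d[i][j - 1] + 1),  // insertion
                                           d[i - 1][j - 1] + cost);    // substitution
                    }
                }
                return d[s.length()][t.length()];
            }

            public static void main(String[] args) {
                System.out.println(distance("Hey Jude", "hey jude")); // 2: only the case differences remain
            }
        }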

  • 2021-02-04 10:26

    Did you take a look at the Levenshtein distance?

    int org.apache.commons.lang.StringUtils.getLevenshteinDistance(String s, String t)
    

    Find the Levenshtein distance between two Strings.

    This is the number of changes needed to change one String into another, where each change is a single character modification (deletion, insertion or substitution).

    The previous implementation of the Levenshtein distance algorithm was from http://www.merriampark.com/ld.htm

    Chas Emerick has written an implementation in Java, which avoids an OutOfMemoryError which can occur when my Java implementation is used with very large strings. This implementation of the Levenshtein distance algorithm is from http://www.merriampark.com/ldjava.htm

    Anyway, I'm curious to know what you choose in this case!
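
    Assuming commons-lang is on the classpath, a usage sketch might look like the following; turning the raw distance into a 0-1 score is my own addition here, not part of the library:

        import org.apache.commons.lang.StringUtils;

        // Usage sketch for the Commons Lang method quoted above (commons-lang 2.x).
        public class LevenshteinScore {
            public static double similarity(String a, String b) {
                int distance = StringUtils.getLevenshteinDistance(a, b);
                int maxLen = Math.max(a.length(), b.length());
                return maxLen == 0 ? 1.0 : 1.0 - (double) distance / maxLen; // 1.0 = identical strings
            }

            public static void main(String[] args) {
                System.out.println(similarity("Bohemian Rhapsody", "Bohemian Rapsody")); // ~0.94
            }
        }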

  • 2021-02-04 10:27

    Interesting. Have you thought about a radix sort?

    http://en.wikipedia.org/wiki/Radix_sort

    Radix sort is a non-comparative integer sorting algorithm that sorts data with integer keys by grouping the keys by their individual digits. If you convert each string into an array of characters, each character code is a number of no more than 3 digits, so k = 3 (the maximum number of digits) and n = the number of strings to compare. Each pass sorts all of your strings on one character position. With another factor s = the length of the longest string, the worst-case cost of the sort would be 3 * n * s and the best case would be (3 + n) * s. Check out some radix sort examples for strings here:

    http://algs4.cs.princeton.edu/51radix/LSD.java.html

    http://users.cis.fiu.edu/~weiss/dsaajava3/code/RadixSort.java
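
    For reference, here is a compact LSD-style sketch in the spirit of the linked LSD.java; it assumes equal-length keys and character codes below 256, which is a simplification of the linked examples:

        // LSD radix sort for equal-length string keys: one stable counting-sort
        // pass per character position, starting from the rightmost character.
        public class LsdSort {
            public static void sort(String[] a, int width) {
                int radix = 256;                       // extended ASCII
                String[] aux = new String[a.length];
                for (int d = width - 1; d >= 0; d--) {
                    int[] count = new int[radix + 1];
                    for (String s : a)                 count[s.charAt(d) + 1]++;
                    for (int r = 0; r < radix; r++)    count[r + 1] += count[r];
                    for (String s : a)                 aux[count[s.charAt(d)]++] = s;
                    System.arraycopy(aux, 0, a, 0, a.length);
                }
            }

            public static void main(String[] args) {
                String[] keys = {"dace", "babe", "aced", "cabd"};
                sort(keys, 4);
                System.out.println(String.join(", ", keys)); // aced, babe, cabd, dace
            }
        }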

  • 2021-02-04 10:30

    They're all good. They work on different properties of strings and have different matching characteristics. What works best for you depends on what you need.

    I'm using JaccardSimilarity to match names. I chose JaccardSimilarity because it was reasonably fast and, for short strings, excelled at matching names with common typos while quickly degrading the score for anything else. It gives extra weight to spaces and is also insensitive to word order. I needed this behavior because the impact of a false positive was much, much higher than that of a false negative, spaces could be typos but not often, and word order was not that important.

    Note that this was done in combination with a simplifier that removes diacritics and a mapper that maps the remaining characters to the a-z range. The result is passed through a normalizer that standardizes all word-separator symbols to a single space. Finally, the names are parsed to pick out initials, prefixes, inner names, and suffixes, because names have a structure and format to them that is rather resistant to plain string comparison.

    To make your choice, you need to make a list of the criteria you care about and then look for an algorithm that satisfies those criteria. You can also build a reasonably large test set and run all the algorithms against it to see what the trade-offs are with respect to time, true positives, false positives, false negatives, true negatives, the classes of errors your system should handle, etc.

    If you are still unsure of your choice, you can also set up your system to switch the exact comparison algorithm at run time. This allows you to do an A/B test and see which algorithm works best in practice.

    TL;DR: which algorithm you want depends on what you need; if you don't know what you need, make sure you can change it later on and run tests on the fly.
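
    To illustrate the general idea, here is a plain word-level Jaccard similarity (intersection over union of the two token sets); this is a sketch of the concept, not the exact JaccardSimilarity class or the preprocessing pipeline described above:

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        // Word-level Jaccard similarity: |intersection| / |union| of the token sets.
        public class JaccardExample {
            public static double jaccard(String a, String b) {
                Set<String> tokensA = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\s+")));
                Set<String> tokensB = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\s+")));
                Set<String> union = new HashSet<>(tokensA);
                union.addAll(tokensB);
                Set<String> intersection = new HashSet<>(tokensA);
                intersection.retainAll(tokensB);
                // split() always yields at least one token, so union is never empty
                return (double) intersection.size() / union.size();
            }

            public static void main(String[] args) {
                System.out.println(jaccard("let it be", "be it let"));    // 1.0: insensitive to word order
                System.out.println(jaccard("let it be", "let it bleed")); // 0.5
            }
        }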
