Efficiently querying one string against multiple regexes

感情败类 2020-12-12 17:16

Let's say that I have 10,000 regexes and one string, and I want to find out if the string matches any of them and get all the matches. The trivial way to do it would be to just match the string against each regex, one at a time. Is there a more efficient way to do it?

18 answers
  • 2020-12-12 17:48

    If you're using real regular expressions (the ones that correspond to regular languages from formal language theory, and not some Perl-like non-regular thing), then you're in luck, because regular languages are closed under union. In most regex languages, pipe (|) is union. So you should be able to construct a string (representing the regular expression you want) as follows:

    (r1)|(r2)|(r3)|...|(r10000)
    

    where parentheses are for grouping, not matching. Anything that matches this regular expression matches at least one of your original regular expressions.
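
    Not part of the original answer, but a minimal Python sketch of the union idea, using named groups so a match also reports which original regex fired. The sample patterns and the names p0, p1, ... are just placeholders; note that some regex engines cap the number of groups in one pattern, and patterns containing backreferences would need extra care because group numbering changes when they are combined:

    import re

    # Hypothetical stand-ins for the real 10,000 patterns.
    patterns = [r"[Hh]ello", r".{0,20}ello", r"\d+"]

    # Wrap each pattern in its own named group, then join them with '|'.
    combined = re.compile("|".join(f"(?P<p{i}>{p})" for i, p in enumerate(patterns)))

    m = combined.search("why hello there")
    if m:
        # Name of the alternative that matched, and the matched text.
        print(m.lastgroup, m.group(0))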

  • 2020-12-12 17:49

    Martin Sulzmann has done quite a bit of work in this field. He has a HackageDB project, explained briefly here, which uses partial derivatives and seems tailor-made for this.

    The language used is Haskell, so it will be very hard to translate to a non-functional language if that is the desire (I would think translating it to many other FP languages would still be quite hard).

    The code is not based on converting the regexes to a series of automata and then combining them; instead it is based on symbolic manipulation of the regexes themselves (see the derivative sketch at the end of this answer).

    Also, the code is very much experimental, and Martin is no longer a professor but is in 'gainful employment'(1), so he may be uninterested in, or unable to, supply any help or input.


    1. This is a joke: I like professors; the less the smart ones try to work, the more chance I have of getting paid!
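
    Not from the original answer: a toy Python sketch of the general flavour of derivative-based matching (this uses Brzozowski derivatives rather than the partial derivatives in Sulzmann's work, and the tiny regex AST and all names here are mine). The point is that matching is done by rewriting the regex itself, character by character, with no automaton construction step:

    from dataclasses import dataclass

    class Re:
        pass

    @dataclass(frozen=True)
    class Empty(Re):     # matches nothing
        pass

    @dataclass(frozen=True)
    class Eps(Re):       # matches only the empty string
        pass

    @dataclass(frozen=True)
    class Chr(Re):       # a single literal character
        c: str

    @dataclass(frozen=True)
    class Alt(Re):       # r | s
        r: Re
        s: Re

    @dataclass(frozen=True)
    class Seq(Re):       # r followed by s
        r: Re
        s: Re

    @dataclass(frozen=True)
    class Star(Re):      # r*
        r: Re

    def nullable(e):
        # Does e accept the empty string?
        if isinstance(e, (Eps, Star)):
            return True
        if isinstance(e, Alt):
            return nullable(e.r) or nullable(e.s)
        if isinstance(e, Seq):
            return nullable(e.r) and nullable(e.s)
        return False

    def deriv(e, c):
        # Derivative of e with respect to character c: a regex matching exactly
        # the strings w such that e matches c followed by w.
        if isinstance(e, Chr):
            return Eps() if e.c == c else Empty()
        if isinstance(e, Alt):
            return Alt(deriv(e.r, c), deriv(e.s, c))
        if isinstance(e, Seq):
            left = Seq(deriv(e.r, c), e.s)
            return Alt(left, deriv(e.s, c)) if nullable(e.r) else left
        if isinstance(e, Star):
            return Seq(deriv(e.r, c), e)
        return Empty()

    def matches(e, s):
        for c in s:
            e = deriv(e, c)
        return nullable(e)

    def lit(word):
        # A literal word as a chain of Seq/Chr.
        e = Eps()
        for c in reversed(word):
            e = Seq(Chr(c), e)
        return e

    # The union of several regexes is just nested Alt, as in the answer above
    # about regular languages being closed under union.
    union = Alt(lit("cat"), Alt(lit("car"), Star(Chr("a"))))
    print(matches(union, "car"))   # True
    print(matches(union, "aaa"))   # True
    print(matches(union, "dog"))   # False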
  • 2020-12-12 17:50

    I would recommend using Intel's Hyperscan if all you need is to know which regular expressions match; it is built for this purpose. If the actions you need to take are more sophisticated, you can also use Ragel, although it produces a single DFA, which can result in many states and consequently a very large executable program. Hyperscan takes a hybrid NFA/DFA/custom approach to matching that handles large numbers of expressions well.

  • 2020-12-12 17:51

    I'd almost suggest writing an "inside-out" regex engine - one where the 'target' was the regex, and the 'term' was the string.

    However, it seems that your solution of trying each one iteratively is going to be far easier.

  • 2020-12-12 17:52

    Aho-Corasick was the answer for me.

    I had 2000 categories of things that each had lists of patterns to match against. String length averaged about 100,000 characters.

    Main caveat: the patterns to match were all literal string patterns, not regex patterns, e.g. 'cat' vs r'\w+'.

    I was using python and so used https://pypi.python.org/pypi/pyahocorasick/.

    import ahocorasick

    A = ahocorasick.Automaton()

    # Each row pairs a list of literal patterns with the category they belong to.
    patterns = [
        [['cat', 'dog'], 'mammals'],
        [['bass', 'tuna', 'trout'], 'fish'],
        [['toad', 'crocodile'], 'amphibians'],
    ]

    for words, category in patterns:
        for word in words:
            # The payload (category, word) is returned whenever this word is found.
            A.add_word(word, (category, word))

    A.make_automaton()

    _string = 'tom loves lions tigers cats and bass'

    def test():
        # A.iter() yields (end_index, payload) for every pattern occurrence in _string.
        return [item for item in A.iter(_string)]
    

    Running %timeit test() on my 2000 categories, with about 2-3 patterns per category and a _string length of about 100,000, got me 2.09 ms versus 631 ms for sequential re.search(), roughly 300x faster!

  • 2020-12-12 17:54

    10,000 regexen eh? Eric Wendelin's suggestion of a hierarchy seems to be a good idea. Have you thought of reducing the enormity of these regexen to something like a tree structure?

    As a simple example: All regexen requiring a number could branch off of one regex checking for such, all regexen not requiring one down another branch. In this fashion you could reduce the number of actual comparisons down to a path along the tree instead of doing every single comparison in 10,000.

    This would require decomposing the regexen provided into genres, each genre having a shared test which would rule them out if it fails. In this way you could theoretically reduce the number of actual comparisons dramatically.

    If you had to do this at run time you could parse through your given regular expressions and "file" them into either predefined genres (easiest to do) or comparative genres generated at that moment (not as easy to do).

    Your example of comparing "hello" to "[H|h]ello" and ".{0,20}ello" won't really be helped by this solution. A simple case where this could be useful: if you had 1,000 tests that could only return true if "ello" exists somewhere in the string, and your test string is "goodbye", you would only have to run the one test for "ello" to know that the 1,000 tests requiring it can't match, and so you could skip them. (A short sketch of this idea follows.)
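
    A minimal sketch of the genre/guard idea in Python; the groupings, guard regexes, and member patterns below are hypothetical placeholders:

    import re

    # Each "genre" has one cheap guard regex; its expensive member regexes are
    # only tried when the guard matches the string.
    genres = [
        (re.compile(r"ello"), [re.compile(r"[Hh]ello"), re.compile(r".{0,20}ello")]),
        (re.compile(r"\d"),   [re.compile(r"\d{4}-\d{2}-\d{2}"), re.compile(r"\d+\.\d+")]),
    ]

    def matching_patterns(s):
        hits = []
        for guard, members in genres:
            if guard.search(s):               # one test rules the whole genre in or out
                hits.extend(m.pattern for m in members if m.search(s))
        return hits

    print(matching_patterns("goodbye"))           # [] - no guard fires, members never run
    print(matching_patterns("why hello there"))   # both 'ello' patterns match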
