I'm shopping for an open-source framework for writing natural-language grammar rules for pattern matching over annotations. Think of it as regular expressions, but matching at the token level rather than the character level. Such a framework should let match criteria reference attributes attached to the input tokens or spans, and let an action modify those attributes.
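To make that concrete, here is a minimal toy sketch of the idea in Python. It is deliberately not the API of any framework mentioned below; the token attributes, the rule, and the action are all invented for illustration:

    # Toy, invented rule-engine fragment (not a real framework's API).
    # Tokens carry attributes; the rule's criteria read them, and its
    # "action" writes a new attribute back onto the matched span.
    tokens = [
        {"word": "Boston", "pos": "NNP"},
        {"word": ",",      "pos": ","},
        {"word": "02115",  "pos": "CD"},
    ]

    def city_comma_zip(toks):
        """Match NNP ',' <digits>; tag the matched tokens as LOCATION."""
        for i in range(len(toks) - 2):
            a, b, c = toks[i], toks[i + 1], toks[i + 2]
            if a["pos"] == "NNP" and b["word"] == "," and c["word"].isdigit():
                a["entity"] = c["entity"] = "LOCATION"  # the action
                yield (i, i + 3)

    print(list(city_comma_zip(tokens)))  # -> [(0, 3)]

Frameworks like the ones below express this same match-plus-action pattern declaratively instead of in ad-hoc code.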
There are four options I know of which fit this description:
- GATE's Java Annotation Patterns Engine (JAPE)
- Stanford CoreNLP's TokensRegex
- UIMA Ruta
- Graph Expression (GExp)*
Are there any other options like these available at this time?
Related Tools
- While I know that general parser generators like ANTLR can also serve this purpose, I'm looking for something more specifically tailored to natural language processing or information extraction.
- UIMA includes a Regex Annotator plugin for declaring rules in XML, but it appears to operate at the character level rather than over higher-level objects.
- I know that this kind of task is often performed with statistical models, but for narrow, structured domains there's benefit in hand-crafting rules.
* With GExp, 'rules' are actually implemented in code, but since there are so few options I chose to include it anyway.
You may also check HTQL (Hyper-Text Query Language). It supports regular-expression search over tokens. An example that searches for a state and a ZIP code in a US address:
    import htql  # assumes the HTQL package is installed

    states = ['CA', 'NY', 'TX']          # example name set; supply your own list
    address = 'San Francisco, CA 94102'  # example input
    a = htql.RegEx()
    a.setNameSet('states', states)
    # a word from the 'states' set, an optional comma, then a five-digit ZIP:
    print(a.reSearchList(address.split(), r"&[ws:states]<,>?<\d{5}>", case=False))
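Note that reSearchList is applied to the token list produced by address.split(), so the matching happens at the word level rather than on raw characters, which is exactly the token-level behavior the question asks for.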
The French academic software Unitex, from Université Paris-Est, also matches your description (http://www-igm.univ-mlv.fr/~unitex/).
It is C++-based and comes with many optional preprocessing rules and lexicons for 20+ languages.
The GUI is graph-based: you design automata, i.e. 'grammars', visually.
Source: https://stackoverflow.com/questions/17891932/open-source-rule-based-pattern-matching-information-extraction-frameworks