Case-sensitive entity recognition

梦毁少年i 2021-02-15 17:37

I have keywords that are all stored in lower case, e.g. "discount nike shoes", that I am trying to perform entity extraction on. The issue I've run into is that spaCy seems to rely heavily on capitalisation and fails to recognise entities in all-lowercase text.

2 answers
  • 2021-02-15 18:12

    spaCy's pre-trained statistical models were trained on a large corpus of general news and web text. This means that the entity recognizer has likely seen very few all-lowercase examples, because that's much less common in those types of texts. In English, capitalisation is also a strong indicator for a named entity (unlike German, where all nouns are typically capitalised), so the model probably tends to pay a lot of attention to it.

    If you're working with text that doesn't have proper capitalisation, you probably want to fine-tune the model to be less sensitive here. See the docs on updating the named entity recognizer for more details and code examples.
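    The fine-tuning step might look like the following minimal sketch, using spaCy v3's training API on a blank English pipeline. The training sentences, span offsets, and the `ORG` label are illustrative assumptions, not part of the original answer:

    ```python
    import random

    import spacy
    from spacy.training import Example

    # Start from a blank pipeline for a self-contained demo; in practice you
    # would load an existing model (e.g. en_core_web_sm) and update it.
    nlp = spacy.blank("en")
    ner = nlp.add_pipe("ner")

    # Hypothetical lowercase training data: (text, {"entities": [(start, end, label)]}).
    TRAIN_DATA = [
        ("discount nike shoes", {"entities": [(9, 13, "ORG")]}),
        ("cheap adidas sneakers", {"entities": [(6, 12, "ORG")]}),
    ]
    for _, annotations in TRAIN_DATA:
        for _, _, label in annotations["entities"]:
            ner.add_label(label)

    optimizer = nlp.initialize()
    for _ in range(20):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            example = Example.from_dict(nlp.make_doc(text), annotations)
            nlp.update([example], sgd=optimizer, losses=losses)
    print(losses)
    ```

    With only two toy sentences this won't produce a usable model; it just shows the update loop the docs describe.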

    Producing the training examples will hopefully not be very difficult, because you can use existing annotations and datasets, or create them using the pre-trained model, and then lowercase everything. For example, you could take text with proper capitalisation, run the model over it and extract all entity spans in the text. Next, you lowercase all the texts and update the model with the new data. Make sure to also mix in text with proper capitalisation, because you don't want the model to learn something like "Everything is lowercase now! Capitalisation doesn't exist anymore!".
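    The data-generation trick above relies on the fact that lowercasing a string never changes character offsets, so existing span annotations carry over unchanged. A sketch, with a made-up annotated example in spaCy's `(text, {"entities": ...})` format:

    ```python
    # Hypothetical cased example with a (start, end, label) entity span.
    cased = [
        ("Discount Nike Shoes", {"entities": [(9, 13, "ORG")]}),
    ]

    # str.lower() preserves length and offsets, so the annotations still align.
    lowercased = [(text.lower(), annotations) for text, annotations in cased]

    # Train on a mix of both, so the model keeps handling capitalised text too.
    mixed = cased + lowercased
    print(mixed[1])  # → ('discount nike shoes', {'entities': [(9, 13, 'ORG')]})
    ```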

    Btw, if you have entities that can be defined using a list or set of rules, you might also want to check out the EntityRuler component. It can be combined with the statistical entity recognizer and will let you pass in a dictionary of exact matches or abstract token patterns that can be case-insensitive. For instance, [{"lower": "nike"}] would match one token whose lowercase form is "nike" – so "NIKE", "Nike", "nike", "NiKe" etc.
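    A short EntityRuler sketch, using a blank pipeline so it runs without a downloaded model; the `PRODUCT` label is an illustrative choice:

    ```python
    import spacy

    nlp = spacy.blank("en")
    ruler = nlp.add_pipe("entity_ruler")

    # One abstract token pattern: match any token whose lowercase form is "nike".
    ruler.add_patterns([{"label": "PRODUCT", "pattern": [{"LOWER": "nike"}]}])

    doc = nlp("I bought NiKe shoes")
    print([(ent.text, ent.label_) for ent in doc.ents])  # → [('NiKe', 'PRODUCT')]
    ```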

  • 2021-02-15 18:37

    In general, non-standardized casing is problematic for pre-trained models.

    You have a few workarounds:

    • Truecasing: correcting the capitalization in a text so you can use a standard NER model.
    • Caseless models: training NER models that ignore capitalization altogether.
    • Mixed-case models: training NER models on a mix of cased and uncased text.

    I would recommend Truecasing, as there are some decent open-source truecasers out there with good accuracy, and they allow you to then use pre-trained NER solutions such as spaCy.
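    To make the truecasing idea concrete, here is a naive frequency-based truecaser sketch using only the standard library. It learns, from a cased corpus, the most common surface form of each lowercase token; real truecasers are more sophisticated (sentence-initial handling, context), and the tiny corpus here is purely illustrative:

    ```python
    from collections import Counter, defaultdict

    def train_truecaser(cased_corpus):
        # For each lowercase form, count how it is capitalised in cased text.
        counts = defaultdict(Counter)
        for sentence in cased_corpus:
            for token in sentence.split():
                counts[token.lower()][token] += 1
        # Keep the most frequent cased form for each lowercase token.
        return {low: forms.most_common(1)[0][0] for low, forms in counts.items()}

    def truecase(text, table):
        # Unknown tokens are left as-is.
        return " ".join(table.get(token, token) for token in text.split())

    table = train_truecaser(["I bought Nike shoes", "Nike makes shoes"])
    print(truecase("discount nike shoes", table))  # → discount Nike shoes
    ```

    The truecased output can then be fed to a standard pre-trained NER model such as spaCy's.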

    Caseless and mixed-case models are more time-consuming to set up and won't necessarily give better results.
