How to tokenize duplicate characters as one in spaCy?

你的背包 2020-11-28 00:57

I have code like this:

import spacy
nlp = spacy.load('de_core_news_md')
doc = nlp('92637 Weiden i.d.OPf..')
tokens = [token.text for token in doc]
print(tokens)
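One way to make the trailing ".." tokenize as a single period is to normalize repeated punctuation before passing the text to spaCy. This is a minimal preprocessing sketch (the `collapse_repeats` helper is hypothetical, not part of spaCy's API), assuming you are fine with altering the raw input text:

```python
import re

def collapse_repeats(text: str) -> str:
    # Collapse runs of two or more identical punctuation marks
    # (e.g. the ".." in "i.d.OPf..") into a single character.
    return re.sub(r"([.!?])\1+", r"\1", text)

print(collapse_repeats("92637 Weiden i.d.OPf.."))  # → 92637 Weiden i.d.OPf.
```

Alternatively, spaCy's tokenizer can be customized (e.g. via tokenizer exceptions or suffix rules) if the original text must stay untouched, but preprocessing is the simpler fix when only the token stream matters.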