Using nlp.pipe() with pre-segmented and pre-tokenized text with spaCy


Question


I am trying to tag and parse text that has already been split up in sentences and has already been tokenized. As an example:

sents = [['I', 'like', 'cookies', '.'], ['Do', 'you', '?']]

The fastest way to process batches of text is nlp.pipe(). However, it is not clear to me how I can use it with pre-tokenized and pre-segmented text. Performance is key here. I tried the following, but it threw an error:

docs = [nlp.tokenizer.tokens_from_list(sentence) for sentence in sents]
nlp.tagger(docs)
nlp.parser(docs)

Traceback:

Traceback (most recent call last):
  File "C:\Python\Python37\Lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Python\projects\PreDicT\predicting-wte\build_id_dictionary.py", line 204, in process_batch
    self.nlp.tagger(docs)
  File "pipes.pyx", line 377, in spacy.pipeline.pipes.Tagger.__call__
  File "pipes.pyx", line 396, in spacy.pipeline.pipes.Tagger.predict
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\model.py", line 169, in __call__
    return self.predict(x)
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\feed_forward.py", line 40, in predict
    X = layer(X)
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\model.py", line 169, in __call__
    return self.predict(x)
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\model.py", line 133, in predict
    y, _ = self.begin_update(X, drop=None)
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\feature_extracter.py", line 14, in begin_update
    features = [self._get_feats(doc) for doc in docs]
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\feature_extracter.py", line 14, in <listcomp>
    features = [self._get_feats(doc) for doc in docs]
  File "C:\Users\bmvroy\.virtualenvs\predicting-wte-YKqW76ba\lib\site-packages\thinc\neural\_classes\feature_extracter.py", line 21, in _get_feats
    arr = doc.doc.to_array(self.attrs)[doc.start : doc.end]
AttributeError: 'list' object has no attribute 'doc'

Answer 1:


Just replace the default tokenizer in the pipeline with nlp.tokenizer.tokens_from_list instead of calling it separately. With the tokenizer swapped out, nlp.pipe() accepts lists of token strings and builds a Doc from each list before the tagger and parser run:

import spacy

nlp = spacy.load('en')
# Swap in tokens_from_list so the pipeline accepts pre-tokenized input
# (a list of token strings) instead of raw text.
nlp.tokenizer = nlp.tokenizer.tokens_from_list

for doc in nlp.pipe([['I', 'like', 'cookies', '.'], ['Do', 'you', '?']]):
    for token in doc:
        print(token, token.pos_)

Output:

I PRON
like VERB
cookies NOUN
. PUNCT
Do VERB
you PRON
? PUNCT
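
Note that tokens_from_list was deprecated in later spaCy v2 releases and removed in v3. A minimal sketch of the equivalent approach for spaCy v3+, assuming the en_core_web_sm model is installed: build a Doc from each pre-tokenized sentence and pass the Docs to nlp.pipe(), which accepts Doc objects directly.

import spacy
from spacy.tokens import Doc

nlp = spacy.load('en_core_web_sm')  # assumed model; any installed pipeline works

sents = [['I', 'like', 'cookies', '.'], ['Do', 'you', '?']]
# Build one Doc per pre-tokenized sentence; the pipeline components
# (tagger, parser, ...) then run on these Docs as-is.
docs = [Doc(nlp.vocab, words=words) for words in sents]

for doc in nlp.pipe(docs):
    for token in doc:
        print(token, token.pos_)

Since performance is key here, disabling components you don't need (for example nlp.pipe(docs, disable=['ner'])) avoids unnecessary work.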


Source: https://stackoverflow.com/questions/57128766/using-nlp-pipe-with-pre-segmented-and-pre-tokenized-text-with-spacy
